
Ssh per system #357

Open · mlusetti wants to merge 2 commits into main
Conversation

mlusetti

Generate an SSH key per system when creating a new one.
Use the system-specific key when creating the SSH client.

While here, remove the collection id when referencing the systems table and use the name instead.

There's no need to specify the collection id; we can use the name, which is more readable.
Generate and store the key in the DB at each system creation.
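
For illustration, here is a minimal Go sketch of the flow described above, under a few assumptions of mine: an ed25519 key pair generated when a system is created, the private key stored as PEM on the system record, and that same key parsed later to build the SSH client config. The function names, the storage field, and the `beszel` user are hypothetical, not the actual Beszel code.

```go
// Illustrative sketch only: hypothetical helpers, not the actual Beszel code.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// generateSystemKey creates a fresh ed25519 key pair for one system and returns
// the private key as PKCS#8 PEM (to store on the system record) and the public
// key in authorized_keys format (to configure on that system's agent).
func generateSystemKey() (privPEM, pubAuthorized string, err error) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return "", "", err
	}
	der, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		return "", "", err
	}
	privPEM = string(pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der}))
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		return "", "", err
	}
	return privPEM, string(ssh.MarshalAuthorizedKey(sshPub)), nil
}

// clientConfigForSystem builds an SSH client config from the key stored with a
// single system record, instead of one hub-wide key.
func clientConfigForSystem(privPEM, user string) (*ssh.ClientConfig, error) {
	signer, err := ssh.ParsePrivateKey([]byte(privPEM))
	if err != nil {
		return nil, err
	}
	return &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // a real hub should verify host keys
	}, nil
}

func main() {
	privPEM, pubKey, err := generateSystemKey()
	if err != nil {
		panic(err)
	}
	fmt.Println("store on the system record:", len(privPEM), "bytes of PEM")
	fmt.Println("configure on the agent:", pubKey)
	if _, err := clientConfigForSystem(privPEM, "beszel"); err != nil {
		panic(err)
	}
}
```

Storing the private key as PKCS#8 PEM keeps it directly parseable by ssh.ParsePrivateKey when the hub later opens the connection, and ssh.MarshalAuthorizedKey yields the public half in the usual authorized_keys format for the agent side.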
henrygd (Owner) commented Dec 31, 2024

Thanks!

Please give me a few days to check this out. I've been visiting family over the holidays and have a lot to catch up on.

mlusetti (Author) commented Jan 1, 2025

No worries, take your time with your family, which is more important than this!
Have a nice time!

dabell-cc commented

What is the benefit of a new key for each system?
If the private keys on the hub aren't encrypted with passphrase protection (not that I can see), it seems to offer the same level of security.
If an agent is compromised, the public key is typically considered safe to make public, so what's the concern?

The only example I can think of is that if a single private key is leaked, it's less of a hassle to replace it on only one agent rather than many. But again, a collection of unencrypted private keys appears to carry the same leak risk as a single key.

Would someone mind explaining if I have misunderstood the situation?

mlusetti (Author) commented Jan 3, 2025

> Would someone mind explaining if I have misunderstood the situation?

Well, you almost hit the bullseye; it's always a matter of trade-offs.

I agree that having password-protected keys is better than not, but when you have to use those protected keys in software that runs unattended, you have to provide the password/passphrase anyway.
Again, I agree there are ways to do that more securely, but that's an escalation of the maintenance burden for this type of software (again, a trade-off).
One could also store the keys in the DB as encrypted values, maybe providing the encryption key at build time, but again we're far beyond the point of what's reasonable for this kind of software, at least to me.
Another option would be a main process running with higher privileges that reads the private key, then forks a subprocess with lower privileges that cannot read it and which in turn handles all the connections.

As a threat model for this specific software, I consider the host running the hub secure by definition, so having the key there and only there is enough for me.

I submitted this PR because I have a very similar system built on PB and SSH to control Restic backups of a server fleet.
Each Restic operation is driven from a host that fires them at cron intervals, and since I used one SSH key per host, I thought it could be helpful here, especially if this system is extended to support remote commands too.

In conclusion, you're right: there's no particular gain (apart from what I've said above) in having a separate key per remote host, and there's no particular issue in the current way of handling SSH connections.
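
As a purely illustrative aside (the host, user, key path, and command below are placeholders, not anything from this PR or from Beszel), the per-host pattern boils down to loading that host's own key, dialing it, and running a command over SSH:

```go
// Placeholder example of one-key-per-host remote command execution.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	pemBytes, err := os.ReadFile("host1_ed25519.pem") // this host's own private key
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "backup",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // verify host keys in real use
	}
	client, err := ssh.Dial("tcp", "host1.example.com:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("restic backup /srv/data")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```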

henrygd (Owner) commented Jan 4, 2025

I appreciate the exchange of thoughts, both of you make good points.

I think this approach makes more sense for a multi-user setup. Currently the same key is shared among all users, so if user A knows that user B added an agent to a particular host, then user A can add the same host and receive data from user B's agent.

Of course you shouldn't allow untrusted users to begin with, but it would be nice to have a safeguard. Particularly if we add features down the line like web SSH or container actions. That would be opt-in, and we'd need an additional auth mechanism anyway, but better not to share keys IMO.

I should be able to get back on Beszel this week, but may need to do a few things for a patch release first. So let me get back to you about implementation details and we'll shoot for including this in the next minor release.

henrygd (Owner) commented Jan 26, 2025

Quick update so you know I didn't forget about this.

A downside of this change I didn't consider initially is that it would complicate cluster deployments. With different keys you'd need to configure each node separately rather than using a global config for all agents in the cluster.

I still think there's reason to do it, but I'd rather make it optional.

Please don't hate me, but I only have so much time to work on this project and there are higher priority issues that I need to take care of. There's no pressing security concern with the current functionality, so I'd like to leave this PR open and revisit it in a month or two. 🙏
