Ssh per system #357
Conversation
There's no need to specify the collection ID; we could use the name, which is more readable.
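For context, a minimal sketch of what this lookup could look like with PocketBase's Go API (assuming a collection named `systems` and a pre-v0.23 API surface; newer versions expose `app.FindCollectionByNameOrId` directly on the app):

```go
import (
	"github.com/pocketbase/pocketbase/core"
	"github.com/pocketbase/pocketbase/models"
)

// findSystems resolves the collection by its human-readable name instead of
// its opaque id; FindCollectionByNameOrId accepts either form, so the name
// works and reads better in the codebase.
func findSystems(app core.App) (*models.Collection, error) {
	return app.Dao().FindCollectionByNameOrId("systems")
}
```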
Generate and store the key in the DB at each system creation.
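A rough sketch of per-system key generation, assuming an ed25519 keypair with the private key stored PEM-encoded on the new record (the field name and hook wiring are not shown and would follow the PR's actual diff):

```go
import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/pem"

	"golang.org/x/crypto/ssh"
)

// generateSystemKey creates a fresh ed25519 keypair and returns the private
// key PEM-encoded in OpenSSH format, ready to be stored on the new record.
func generateSystemKey() (string, error) {
	_, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return "", err
	}
	// MarshalPrivateKey (x/crypto >= v0.16.0) emits an OPENSSH PRIVATE KEY block.
	block, err := ssh.MarshalPrivateKey(priv, "")
	if err != nil {
		return "", err
	}
	return string(pem.EncodeToMemory(block)), nil
}
```

This could run in a record-create hook so that every new system gets its own key.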
Thanks! Please give me a few days to check this out. I've been visiting family over the holidays and have a lot to catch up on.
No worries, take your time with the family, which is more important than this!
What is the benefit of a new key for each system? The only example I can think of is that if a single private key is leaked, it's less of a hassle to replace it on only one agent rather than many. But then again, a collection of unencrypted private keys appears to carry the same level of leak risk as a single key. Would someone mind explaining if I have misunderstood the situation?
Well, you hit the bull's eye, almost; it's always a matter of trade-offs. I agree that having password-protected keys is better than not having them, but when you have to use these protected keys in software running unattended, you have to provide the password/passphrase anyway. As a threat model for this specific software, I considered the host running the hub secure by definition, so having the key there and only there is enough for me. I submitted this PR because I have a very similar system built on PB and SSH to control Restic backups of a server fleet. In conclusion, you're right: there's no particular gain (apart from what I said before) in having a single key per remote host, and there's no particular issue with the current way of handling SSH connections.
I appreciate the exchange of thoughts; both of you make good points. I think this approach makes more sense for a multi-user setup. Currently the same key is shared among all users, so if user A knows that user B added an agent to a particular host, then user A can add the same host and receive data from user B's agent. Of course you shouldn't allow untrusted users to begin with, but it would be nice to have a safeguard. Particularly if we add features down the line like web SSH or container actions. That would be opt-in, and we'd need an additional auth mechanism anyway, but better not to share keys IMO. I should be able to get back on Beszel this week, but may need to do a few things for a patch release first. So let me get back to you about implementation details and we'll shoot for including this in the next minor release.
Quick update so you know I didn't forget about this. A downside of this change I didn't consider initially is that it would complicate cluster deployments. With different keys you'd need to configure each node separately rather than using a global config for all agents in the cluster. I still think there's reason to do it, but I'd rather make it optional. Please don't hate me, but I only have so much time to work on this project and there are higher priority issues that I need to take care of. There's no pressing security concern with the current functionality, so I'd like to leave this PR open and revisit it in a month or two. 🙏
Generate an SSH key per system when creating a new one.
Use the system-specific key when creating the SSH client, as sketched below.
While here, remove the collection ID when referencing the systems table and use the name instead.
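For the second point, a hedged sketch of building the SSH client from a system-specific key; the user name and host key policy here are placeholders, not Beszel's actual settings:

```go
import (
	"time"

	"golang.org/x/crypto/ssh"
)

// newSystemClient dials the agent using the key stored for this system.
// pemKey is the PEM-encoded private key loaded from the system's record.
func newSystemClient(addr, pemKey string) (*ssh.Client, error) {
	signer, err := ssh.ParsePrivateKey([]byte(pemKey))
	if err != nil {
		return nil, err
	}
	config := &ssh.ClientConfig{
		User:            "u", // placeholder user name
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: verify the host key in production
		Timeout:         5 * time.Second,
	}
	return ssh.Dial("tcp", addr, config)
}
```

The only change from the shared-key flow is where the signer comes from: the key is read off the system's record instead of a global config.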