In that case it’s definitely worth trying this out, just so you have one more notification to disable
In that case hydroxide-push will work too, which is good news!
Just note that the IMAP, SMTP and CardDAV functions have been stripped out of this push version. If there’s interest in having those too, a different version with the push functionality added on top of full Hydroxide could be made; it would just require a bit of time to develop.
The scope of hydroxide-push is only push notifications for now.
I think it does require a paid account; Hydroxide basically acts like the official Proton bridge.
I haven’t actually tested with a free account, so there’s a chance it does work. When you run the auth command (which is the same as in upstream Hydroxide), it will probably throw an error.
If you have a free account and try this out (or Hydroxide), please report back here how it goes, and I’ll add a note to the readme. Upstream doesn’t seem to mention this in their repo either.
Happy to report that version 0.2.28-push7, available today, now supports HTTP Basic Authentication for the push topic!
The password for basic auth is stored base64-encoded in $HOME/.config/hydroxide/notify.json, which is something that could be improved. Considering UnifiedPush always requires anonymous write access to the push topics, I don’t think this is a very high-risk shortcoming.
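To be clear about the trade-off: base64 is an encoding, not encryption, so anyone who can read notify.json can recover the password. A quick shell sketch (the password value here is made up, and the exact key layout inside notify.json may differ):

```shell
# base64 only obfuscates; it is trivially reversible
printf 's3cret' | base64        # -> czNjcmV0
printf 'czNjcmV0' | base64 -d   # -> s3cret
```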
Suggestions for better password handling, as well as general feedback are welcome!
There is no way to log in or do anything externally after the daemon has started.
The idea is just to provide push notifications, nothing else. The bridge creates a “login session” because Hydroxide won’t poll for events if no users are logged in. In reality, the SMTP and IMAP services are never started.
If there’s an oversight somewhere, I’m more than happy to admit it and see to fixing it. I wouldn’t run this on a cloud VPS, just like I wouldn’t run Hydroxide either. Because all connections are outbound and the amount of data is small, a Raspberry Pi at home should be more than enough.
I see you deleted the comment, going to leave this here anyway.
Hey @Andromxda@lemmy.dbzer0.com , didn’t notice you already posted this. Made a new post about this too. I guess the double posting is somewhat OK. Thanks for promoting!
I’m running Aurora DX on my work and personal laptops. Also a gaming / media center box, which uses a custom ublue-silverblue based image that has ZFS modules installed (the box is also used for local homelab backups).
As long as you can get into the flatpak/container mindset, the atomic distros are absolutely brilliant.
I didn’t read all the comments, so someone may have pointed this out already.
One of the main ideas is probably something like Fedora CoreOS, where the Quadlet systemd files are automatically created during first boot with something like Ignition, Kickstart or cloud-init.
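For anyone unfamiliar with Quadlet: it’s a systemd generator that turns a small unit-style file describing a container into a full systemd service at boot. A minimal hypothetical sketch (the image, port and file name are just placeholders):

```ini
# /etc/containers/systemd/web.container (hypothetical example)
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

At boot, Quadlet generates a web.service from this, so the container is managed like any other systemd unit.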
Instead of shipping the applications with the image, the OS image can be very minimal, while still being able to run very complex stuff.
Add the fact that CoreOS and other atomic distros can update themselves in the background and boot into an updated base image, and the box just needs periodic reboots. In the best case, everything stays updated and running with basically no interaction from the admin at all.
Probably not so useful in the self-hosting / homelab context, but I can imagine the appeal on a larger scale.
I’ve been using Quadlet+Podman kube YAMLs for a while for my own self-hosted services, and it’s pretty rock solid. Currently experimenting with k3s, but I think I’ll soon switch back. Kubernetes is nice, but it’s a lot more fragile for just a single node. And there’s way too much I don’t understand…
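For context, the Quadlet + kube setup looks roughly like this: a Kubernetes-style YAML describes the pod, and a tiny .kube unit file points systemd at it. A hypothetical sketch (the path and names are placeholders):

```ini
# ~/.config/containers/systemd/myapp.kube (hypothetical example)
[Unit]
Description=myapp pod via podman kube play

[Kube]
Yaml=/home/user/myapp/kube.yaml

[Install]
WantedBy=default.target
```

Quadlet turns this into a myapp.service that runs `podman kube play` on the YAML, which is what makes it feel like a lightweight single-node Kubernetes without the moving parts.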
I wrote a couple of blog posts about the homelab setup, and I’m planning to add more when I have time. Give them a read if you’re interested: https://oranki.net/tags/self-hosting-my-way/