“Chuff” in the context of rock climbing = bad, made an effort but didn’t get very far / fell a lot
Even faster: Tailscale. For a cheeky way to play with your friends, make a burner account with a shared login so you can all get on the same tailnet for free. On the endpoints, turn off Tailscale SSH and any of their other “features” you don’t need.
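A hedged sketch of that endpoint cleanup, assuming a recent tailscale CLI (flag names can differ by version):

```shell
# Join the shared tailnet, then turn off features you don't need.
sudo tailscale up --accept-dns=false   # optional: keep using your own DNS
sudo tailscale set --ssh=false         # make sure Tailscale SSH stays off
tailscale status                       # verify which peers can see this node
```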
I immediately thought this was salt. Maybe I’m the monster.
Second this ^
I have one and it’s fine, but not directly supported by OpenWRT. Looks like the Beryl and Slate are, though.
Excellent notes. If I could add anything it would be on number 4 – just. add. imagery. For the love of your chosen deity, learn the shortcut for a screenshot on your OS. Use it like it’s Astroglide and you’re trying to get a Cadillac into a doghouse.
The little red circles or arrows you add in your chosen editing software will do more to convey a point than writing a paragraph on how to get to the right menu.
Believe what you will. I’m not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self-host Ollama and SearXNG (a metasearch engine, to be clear, not a first-party search engine), so I have some anecdotal inclinations.
Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model hardly even shows up in htop unless the model is gigantic. Even the gigantic models only spike the CPU for a few seconds (until the query completes). SearXNG, again anecdotally, spikes my PC about the same as Mistral in Ollama.
I would encourage you to look at more explanations like the one below. I’m not just blowing smoke, and I’m not dismissing the very real problem of massive training costs (in money, energy, and water) that you’re pointing out.
https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption
I don’t disagree, but it is useful to point out there are two truths in what you wrote.
The energy use of one person running an already trained model on their own hardware is trivial.
Even the energy use of many, many people using already trained models (ChatGPT, etc.) is still not the problem at hand (probably on the order of the energy usage of a typical search engine).
The energy use in training these models (the appendage measuring contest between tech giants pretending they’re on the cusp of AGI) is where the cost really ramps up.
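A back-of-envelope sketch of the gap described above. Every number here is an illustrative assumption, not a measurement:

```python
# Rough comparison: local training cost vs. a single inference query.
# All figures are made-up ballpark assumptions for illustration.

desktop_watts = 500                       # gaming desktop pinned at 100%
train_days = 3                            # days of full load to train a tiny model
train_kwh = desktop_watts * 24 * train_days / 1000   # kWh for local training

query_watts = 300                         # spike while a query runs
query_seconds = 5                         # spike lasts a few seconds
query_kwh = query_watts * query_seconds / 3600 / 1000  # kWh per inference

print(f"training: {train_kwh:.1f} kWh")                       # training: 36.0 kWh
print(f"one query: {query_kwh:.6f} kWh")                      # one query: 0.000417 kWh
print(f"queries per training run: {train_kwh / query_kwh:,.0f}")  # 86,400
```

Even with generous inference numbers, one local training run costs as much as tens of thousands of queries – and frontier-scale training runs are many orders of magnitude beyond a desktop.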
Amazing! I’ve used that before but just to look for packages offline. I’ll definitely check that out.
Love the example here!
I’m still learning about available references (e.g. config.services.navidrome.settings.Port). What resources did you find to be the best for learning that kind of thing?
I’ll accept RTFM if that’s applicable :)
Is there a reason you’re not considering running this in a VM?
I could see a case where you go for a native install on a virtual machine, attach a virtual disk to isolate your library from the rest of the filesystem, and then move that around (or just straight up mount that directory in the container) as needed.
That way you can back up your library separately from your Jellyfin server implementation and go hog wild.
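The container-mount variant could look something like this – a sketch only, where the paths, volume names, and port are assumptions rather than anything from the thread:

```shell
# Library lives on its own (virtual) disk, mounted read-only into the container,
# so it can be backed up and moved independently of the server config.
podman run -d --name jellyfin \
  -p 8096:8096 \
  -v /mnt/media-library:/media:ro \
  -v jellyfin-config:/config \
  docker.io/jellyfin/jellyfin:latest
```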
Syntax-wise, it’s meant to be identical. I got on board when Podman was the only one that enabled rootless mode (no admin privileges). That’s no longer the case, since rootless Docker has been out for a while.
I’m personally a fan of the Red Hat docs and how-tos on Podman over the mixed bag of tech-bro Medium articles I associate with Docker.
At the end of the day this is a bit of a Pokémon starter question. If your top priority is to get a reasonably common and straightforward job done, just pick one and see where it takes you! :)
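To the syntax point above, a minimal rootless sketch (the image and port are just examples, not a recommendation):

```shell
# Runs as your normal user, no sudo: podman's rootless mode.
# These are the same flags you'd pass to `docker run`.
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
podman ps                  # list running containers, docker-style
podman stop web && podman rm web
```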
Syncthing is my answer though I appreciate it doesn’t get to the root of your question.
There are local backups that include your system settings, text messages, contacts, call history, and (optionally) apps. The one thing I want is the ability to pick a directory for the local backup so I can make it work with Syncthing without jumping through hoops.
It’s also compatible with Nextcloud and WebDAV if those are options for you.
I miss my pixel 5 :(
Wild how you happened to have this totally original idea days after this exact diagram structure was in a video posted by a channel with 3M subscribers :) crazy coincidence
My favorite line in the fireship video this is from goes something like “FreeBSD is the real answer but I like being able to Google things”
I was going to say Guix but I’ve always been a little Gentoo curious
There’s something to practicing with the operating system family that most big commercial outfits use. Plus SELinux is neat, and there’s no Canonical ads.
I use Fedora with home-manager, btw. After using Arch and Debian for years I really think Fedora (or adjacent like Nobara) is on its way to being the de facto starter distro.
Coarse salt. Add just enough water to move it around, plus a little dish soap, and shake. Works like a charm.