• 1 Post
  • 467 Comments
Joined 2 years ago
Cake day: June 14th, 2023


  • I recommend Librewolf; it’s a lot more privacy-aggressive out of the box, and you can turn that down a little if you need to, but otherwise it’s just a more trustworthy Firefox fork as far as I’m concerned. It supports Firefox sync as well (which is telling, because Librewolf takes privacy very seriously and isn’t going to provide many easy opportunities for you to completely compromise it). Like the other person said, sync is E2EE and the hosting server has zero knowledge of any of your unencrypted data. If Librewolf trusts it, I trust it, and I think you can rest assured that with Librewolf it’s probably never going to be sabotaged either, which, as you imply, is not necessarily true with Firefox.

    I don’t recall whether they use Firefox’s sync server directly or if they have their own, but either way, like I said, the server has no knowledge of or access to your unencrypted data.


  • I’m not a super-expert but I suspect it’s probably still holding open the stdin and stdout file descriptors of the parent process. Try using &> /dev/null to throw them away and see if that helps. You could also try adding nohup in front of the npx, which does some weird re-parenting jazz to prevent the child process (npx) from actually being attached to the parent process so that it doesn’t get auto-closed when the parent exits, which is kind of the opposite of your problem, but it might also help in this case.

    Another possible option is using systemd-run --user <command>, which effectively makes it systemd’s problem.
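    The stdio idea above can be sketched like this, with `sleep 2` standing in for the actual `npx` command (this assumes bash, since `&>` is bash syntax):

```shell
# Discard the child's stdio so it doesn't hold the parent's stdin/stdout
# open, and nohup it so it survives the parent exiting.
# 'sleep 2' is a stand-in for the real npx command.
# (&> is bash syntax; in POSIX sh use: > /dev/null 2>&1)
nohup sleep 2 &> /dev/null &
```

    Because the output is redirected, nohup won’t even create its usual nohup.out file.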


  • She’s probably right, honestly. When in Rome, do as the Romans do, even if Rome is burning and all they’re doing is standing around playing music while it does.

    OK, so, I’m warning you: the rest of this comment is going to descend into darkly cynical, sky-is-falling, world-is-ending pessimistic ranting. Take it or leave it, I don’t care, I need to vent how I feel about the shitty fucking state of the world for the working class right now.

    So listen, AI is a technology designed by companies like Google and Apple, which, despite allegedly selling technology and software, have in reality become gigantic advertising and marketing companies. They designed these AIs for many purposes, but chief among them is that they want it to do the most important thing any advertising and marketing can do: convince you of things, whether they are true or false. And the first thing they want to do is convince you AI itself is useful, whether that is true or false.

    And the kicker is, not only did it do a spectacularly good job at convincing people of that already, it IS actually useful… at exactly what they designed it to do, which is convince people of things whether they are true or false. AI is genuinely fucking great at it. Often too good.

    If you want to use AI to convince people of things, fucking go for it. Making a resume is all about selling yourself and convincing people you’re worth hiring. Most companies are already using AI for this on their own side, and that should tell you something significant right there: they don’t give a shit about you. But don’t worry about that; AI is good at convincing itself of things too, and once you’re through the AI filters, at least some of the feeble meatbag HR brains you’ll be attacking with it will stand no more chance than they do against the tech giants. And just like the companies do, you can use quantity over quality. Just fucking AI-spam your sloppy resume everywhere you can. Spam dozens of copies of it, who gives a shit; companies don’t give a shit, they’ve told us they’re all “AI first” anyway. Good sensible companies are few and far between, and usually they don’t even pay as well as the fucking idiot garbage-fire companies. For every competent HR person at a good sensible company who quite rightfully rejects your shitty resume, there’s some garbage fire who can be suckered into assigning your warm body to some useless task if you just happen to bullseye right through the eye of their needle, and you brute-force that shit by throwing so many resumes at the problem that one of them is bound to go in through sheer statistics alone. And then the next time you (or AI) write your resume, you’ll have “experience” to put on it. If being such a shitbag makes you feel bad, maybe with a bit more experience you’ll eventually find your way to a good sensible company (provided you find the idea of not having to be a shitbag appealing and are willing to take the pay cut that non-shitbags are required to suffer).

    If you want to use AI to get actual work done, or learn something real, you’re playing Russian roulette and probably wasting your time with AI in the long run. It may accidentally be useful from time to time, and when it’s the company’s time you’re wasting, who gives a shit (they clearly don’t), but it’s going to struggle to keep up with real needs. But for convincing people of things? It’s really pretty remarkable how good it is. Might as well take advantage of it. Use the tool for the job it’s designed for. A screwdriver’s a good tool for putting screws into things, but it’s not a very good hammer and it’s a fucking awful table saw.



  • It’s very unlikely you are infected by anything unless you were using some crazy settings or addons, or unless you were hit by some extreme 0-day exploit that hasn’t become widespread yet. Firefox does not, and normally cannot, automatically execute files it downloads, nor are videos a likely vector for remote code execution now that we have technologies like data execution prevention built into processors; if you’re attacked by malware, it will rely on some other vector or trickery to get you to execute the file. I would expect that your performance issues are unrelated, but you should also check Firefox’s addons and extensions, as well as your task manager’s startup tab, to make sure nothing has obviously been installed without your knowledge.

    One thing that sticks out at me is that you only mention the file’s “title”. If you haven’t already, make sure Windows Explorer is set to ALWAYS show full file extensions; that’s a basic safety measure that really should be on by default but isn’t, and it’s mandatory if you’re messing around on the darker parts of the web. You have to know the file’s extension because that determines what Windows is going to do with it, and when a file claims to be one thing but Windows will do something different with it, that’s a huge red flag that it’s malware trying to trick you into running it.

    You can upload the file to VirusTotal if you want to scan it, but it doesn’t sound likely that it even ran unless you did something bad by accident.



  • The first problem would be the height of the intervening terrain, and even if you could overcome that, you still have to contend with friction inside the pipe, a factor most people don’t think about for short distances, but when you start trying to carry water long distances through a pipe, friction becomes massive. An ideal siphon inside an ideal pipe is simply a question of height between source and destination. In the real world, however, a siphon isn’t unlimited or ideal. There is a height it can’t overcome, and it’s actually not very high at all, geographically speaking: the maximum height of a siphon is only around 10 meters. The terrain between the Red Sea and the Dead Sea is pretty flat, but it’s probably not that flat. I’m not going to pretend I’ve done a precise survey of potential routes, but I’d expect there are bumps in elevation along the way that would realistically need, say, 100 meters of lift to overcome. Even 11 meters would simply end the conversation. There’s no way around that for a siphon.
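    That roughly-10-meter figure falls straight out of atmospheric pressure: the air can only hold up a water column until the column’s weight matches it, h = P / (ρ·g). A quick sanity check with standard values:

```shell
# Maximum siphon lift: h = P / (rho * g)
# P   = 101325 Pa   (standard atmosphere)
# rho = 1000 kg/m^3 (fresh water; seawater is a bit denser, lowering h)
# g   = 9.81 m/s^2
awk 'BEGIN { printf "%.1f m\n", 101325 / (1000 * 9.81) }'
```

    That prints about 10.3 m, and in practice the usable limit is lower still because of vapor pressure and dissolved gases.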

    The reason for this height limitation has to do with the atmospheric pressure required to keep the water liquid. Once the pressure at the top of the column drops to water’s vapor pressure, the water simply vaporizes before it reaches the height it needs to, and the siphon is broken before it even starts. In a vacuum at standard temperature, water instantly vaporizes. The external atmospheric pressure (which is acting on the entire water column, up to its highest point, to get it over the hill) is all that is keeping the water in its liquid form inside the siphon. The higher you go, the more work that external pressure is doing, and eventually the weight of the water column exceeds the atmospheric pressure pushing it up, and again, the siphon breaks.

    Friction is the other problem. Even if you could limit your route to no more than 10 meters above the Red Sea, you’re asking the siphon not only to lift the water to that height, but also to carry it through 200 kilometers of pipe or more. We don’t think of pipes as having friction, but they do, and it’s very significant at those distances, especially when your power source (gravity, in this case) is already operating near its absolute limits due to the height problem we already discussed. What you hoped would be a gusher of a siphon will end up being a trickle, if anything at all, with most of the water just sitting idle in the pipe to maintain the siphon while a little dribbles its way slowly through to the destination.
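    To put a rough number on it, the Darcy–Weisbach equation estimates the head lost to pipe friction as h_f = f·(L/D)·v²/(2g). The inputs below are illustrative assumptions (200 km of 1 m pipe, water moving at 1 m/s, friction factor 0.02), not a real survey:

```shell
# Darcy-Weisbach head loss: h_f = f * (L/D) * v^2 / (2 * g)
# f = 0.02     (assumed friction factor for an ordinary pipe)
# L = 200000 m, D = 1 m, v = 1 m/s, g = 9.81 m/s^2
awk 'BEGIN { printf "%.0f m\n", 0.02 * (200000 / 1.0) * (1.0 * 1.0) / (2 * 9.81) }'
```

    That’s on the order of 200 meters of friction head against a gravity budget of at most about 10, so the flow has to collapse to nearly nothing before the losses balance.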

    Finally you’ve got all kinds of other more obscure effects at play at those scales, like water’s surface tension, variability of flow rates, possible pinhole leaks in the pipe that will introduce air, offgassing of dissolved gases in the water or even from the pipe itself, and temperature gradients inside the pipe. All of these are going to play havoc with the ability to form and sustain a reliable siphon.

    In short, siphons are actually pretty limited. We don’t see much of those limitations at small scales, but at the scale of this project they become very serious, very quickly, and basically rule out using a siphon for any realistic water relocation project. Almost all of those limitations go away when you pressurize the system with a pump instead of relying on atmospheric pressure alone. It’s a fun thought experiment, but in practice a simple electric pump turns out to be a pretty cheap way to solve a lot of otherwise really complex hydrodynamic problems, and when that’s the case, it’s not really worth teasing out a solution with all kinds of complicated engineering. Just throw a pump at the problem and call it a day, job done.


  • “You may not agree with what the military does, but you have to respect them for that reason alone, above all else.”

    This premise must be rejected. You do NOT have to respect them for that reason alone, and certainly not above all else.

    Did Nazi soldiers deserve respect because they were just following orders? What options did they really have? Were they not also facing the potential of harsh punishment if they disobeyed?

    Not having good alternative options is not an excuse for following orders you know are wrong. Respect is earned when your morality supersedes your orders, despite the potential (and sometimes very real and significant) punishments. Your intentions only get you so far, eventually you need to act or else any remaining respect for you will be gone.




  • Looks really nice, and it seems like it should be a great foundation for future development. Personally, I can’t drop Nextcloud until there are sufficiently featureful and reliable clients for Linux, Windows, and Android that synchronize a local copy and help manage the inevitable file deconfliction (Nextcloud Desktop only barely qualifies at this, but it does technically qualify, and that represents the minimum viable product for me). I’m not sure a WebDav client alone is enough to satisfy these criteria, but I’m not going to pretend I’m actually familiar with any WebDav clients, so maybe they already exist.


  • You’re on the right track. Like everything else in self-hosting you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable, and iterate fast, aiming for continuous improvement.

    My first idea was much like yours: very traditional documentation, with words, in a document. I quickly found the same thing you did; it’s half-baked and insufficient. There’s simply no way to make it match the actual state of the system perfectly, and English alone is inadequate to explain what I did, because it ends up being too vague to be useful in a technical sense.

    My next realization was that in most cases what I really wanted was to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel like I wasn’t capturing every command reliably, because I would get distracted trying to figure out a problem and forget to record it, and copying and pasting commands between the console and the document was duplicated effort. That turned into the idea of collecting bunches of commands together into a script I could just run, which would at least reduce the risk of gaps and missing steps. Then I could put the commands I wanted to run right into the script, run the script, and save it for posterity, knowing I’d accurately captured both the commands I ran and the changes I made to get it working, by keeping it in version control.

    But upon attempting to do so, I found that long lists of commands on their own aren’t terribly useful, so I started to group the lists by commonalities like server or service, and then to organize them into scripts for different roles and intents that I could apply to any server or service; over time this developed into quite a library of scripts. As I was doing this organizing, I realized that as long as I made sure each script was functionally idempotent (it doesn’t change behavior or duplicate work when run repeatedly; it’s an important concept), I could guarantee that all my commands were properly documented and also that they had all been run. And if they haven’t, or I’m not sure, I can just run the script again, since it’s supposed to always be safe to re-run no matter what state the system is in. So I moved more and more to this strategy, until I realized that if I organized it well enough and made the scripts run automatically when they are changed or updated, I could not only improve my guarantees that all these commands reliably run, but also quickly run them on many different servers and services at once without even having to think about it.
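    The idempotency idea boils down to guarding every action with a check of the current state. A minimal sketch (the path and setting here are made up for illustration):

```shell
#!/bin/sh
# Hypothetical idempotent setup step: safe to re-run any number of times.
CONF=/tmp/demo.conf   # stand-in for a real config file

# create-if-missing is naturally idempotent
[ -f "$CONF" ] || touch "$CONF"

# append the setting only if it isn't already there, so repeated
# runs never duplicate the line
grep -qxF 'color=blue' "$CONF" || echo 'color=blue' >> "$CONF"
```

    Run it once or ten times and the file ends up in exactly the same state.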

    There are some downsides, of course. This leaves the potential for bugs that make a script non-idempotent or unsafe to re-run, and all I can do is try to prevent them, and identify and fix them when they do happen. The next step is probably some kind of testing process and environment (preferably automated), but now I’m really getting into the weeds. At least I no longer have any real concerns that my system is undocumented; I can quickly reference almost anything it’s doing or how it’s set up. That said, another risk is that the system of scripts and automation becomes so complex that I can no longer quickly untangle it, and at that point I’ll need better documentation for the scripts themselves. Ultimately you get into a circle of how to validate that your scripts are actually working, doing what you expect, and not missing anything, and you run back into the same ideas that doomed your documentation from the start: consistency and accuracy.

    It also opens an attack vector, where somebody gaining access to these scripts not only gains all the most detailed knowledge of how your system is configured but also the potential to inject commands into those scripts and run them anywhere, so you have to make sure to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.

    By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (Ansible and Terraform come to mind) to help you do this, and at some point I may take advantage of them, but personally I’m not there yet. Maybe soon. If you want to skip the intermediate steps I went through, you might even be able to jump directly to that approach. But personally I think there is value in the process: it helps define your needs and builds your understanding that there really isn’t anything magical going on behind the scenes, and that may keep these tools from turning into a black box that doesn’t actually help you understand your system.

    Do I have a perfect system? Of course not. In a lot of ways it’s probably horrific and I’m sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you’ll follow the same path I did, maybe you won’t. But you’ll get there.


  • Nextcloud is just really slow. It is what it is; I don’t use it for anything that is huge, numerous, or needs speed. For that I use SyncThing or something even more specialized, depending on what exactly I’m trying to do.

    Nextcloud is just my easy and convenient little dropbox, and I treat it like an oldschool free dropbox with limited space that’s going to nag me to upgrade if I put too much stuff in it. It won’t nag me to upgrade, but it will get slow, so I just don’t stress it out. I only use it to store little convenience things that I want easy access to on all my machines without any fuss. For documents, my “home directory”, syncing my calendars, and stuff like that, it’s great and serves the purpose.

    I haven’t used Seafile. The features sound good, minus the AI buzzword soup, but it looks a little too corporate-enterprisey for me, with minimal commitment to open source and no actual link to anything open source on their website. I don’t doubt that it exists, somewhere, but that raises red flags for potential future (if not in-progress) enshittification. After eventually finding their GitHub repo (with no help from them), I finally found a link to build instructions and… it’s a broken link. Either they aren’t actually looking for contributions or they’re just going through the motions. The open source “community” is clearly not the target audience for their “community edition”, not really.

    I’ll stick to SyncThing.


  • Sounds like you’re doing fine to me. The stakes are indeed higher, but that is because what you’re doing is important.

    As the Bene Gesserit teaches: I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear.

    Make your best effort at security and backups, use your fears to inform a sober assessment of the risks and pitfalls, and ask for help when you need to, but don’t let it stop you from accomplishing what you want to. The self-hosting must flow.



  • I agree with you. I think many baskets is better than betting that you’ve exclusively picked the right basket. The key question is how you (and from here on I mean “you, as a creator”, not “you, personally”) allocate your effort across the many baskets. Even acknowledging that there are many baskets and options is a necessary starting point. If you’re treating the options with lower populations and fewer views as second-class citizens, just throwing your content up there without any additional thought or attention, that’s fair given where they are right now, and at least it’s a step in the right direction, but you need to start thinking about the next step too. If you look at Peertube and see a waste of your time with no future, you’re entitled to your opinion, but I’d respectfully disagree; I think it pretty clearly is the future, or at least a step towards it. If you think we’ll simply never escape Youtube, then by all means bend the knee to them and don’t waste your time anywhere else.

    But remember that Youtube was new and disruptive once too, and people said it could never succeed at what it was trying to do. And now that it’s succeeded, we think it could never fail, that it’s too big to fail. Things don’t succeed until they do. Things don’t fail until they do. It doesn’t happen overnight; it happens gradually. If you realize things are shifting early and spend your effort wisely in the places where it will become most valuable, you’ll be ahead of the curve and in a really good position to maximize the benefit. Otherwise, by the time you realize it’s happening, you’re already falling behind and scrambling to make the transition. And if you’re completely wrong and the alternative just quietly dies, as they sometimes do, your effort is wasted but you’re otherwise not really any worse off than you already are.

    Is any alternative video platform worth investing your time and effort in? Not based on what any of them are today, no. But based on what they will be? I think so. You have to think so too, if you want them to succeed. Will they succeed? Can Youtube ever fail? Only time, and you, each and every humble individual content creator, will decide.

    Be the change you want to see in the world; don’t wait for it to happen to you.