• 0 Posts
  • 70 Comments
Joined 22 days ago
Cake day: September 14th, 2025

  • The notice reads: “Want Coca-Cola Classic? It’s one glass only.

    “Based on new government laws, we’ve had to limit Coca-Cola Classic to one glass per customer.

    “Still thirsty? Help yourself to any of our low-sugar fizzy Bottomless Soft Drinks.”

    Under the new rules, any soft drinks that are low in sugar, for example ‘Zero’ alternative versions of most popular soft drink brands, can be drunk to one’s heart’s content.

    I imagine that manufacturers of artificial sweeteners are in for a good time.


  • I’m not familiar with Arch’s updating scheme, but I’d bet that it’s pretty similar to Red Hat’s and Debian’s. If you don’t complete an update, boot it up — even if it’s in a semi-broken state — and just start the update again. Even if the thing dies right in the middle of updating something boot-critical, so that it can’t boot, you can probably just use liveboot media, mount the drives in question, start a chrooted-to-your-regular-root-partition root shell, and restart the update.

    Doing that and installing or reinstalling packages is a pretty potent tool to fix a system. It’s not absolutely impossible to hork a system up badly enough that it’s still unusable even then — I once wiped ld.so from a system, for example, and had to grab another copy and manually put it in place to get dynamically-linked stuff like the package manager working again. But that’ll deal with the great majority of problems you could create.
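
    If it comes to that, the dance looks roughly like this (a sketch only — device names here are assumptions, so adjust for your partition layout and distro):

    # from the live environment, mount the installed system
    mount /dev/sda2 /mnt            # assumed root partition
    mount /dev/sda1 /mnt/boot       # separate /boot, if you have one
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    # get a root shell chrooted into the installed system
    chroot /mnt /bin/bash
    # then restart the interrupted update, e.g. on Arch:
    pacman -Syu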




  • You can get inkjet printers that don’t have restrictions on the ink. They cost more, though.

    The reason printer manufacturers are so hell-bent on being a pain in the ass with the ink is that they’re using a razor-and-blades model. They’re selling you the printer at a lower price than their costs would dictate, with the expectation that they’ll make their money back when you buy ink at a higher price than it really should cost, because people pay more attention to the initial price of the printer than to the consumable costs.

    Same way you can get unlocked cell phones instead of network-locked cell phones with a plan. Gaming PCs instead of consoles. It’s not that they’re unavailable, but you’re gonna have to accept a higher up-front cost, because you’re not getting a subsidy from the manufacturer.

    Canon sells a line of inkjet printers that just take ink from a bottle. No hassles with restrictions on ink supply there. The ink is cheap, and there are even cheaper third-party options readily available…but you’re going to pay full price for the printer.

    https://www.usa.canon.com/shop/printers/megatank-printers

    Their lowest-end “MegaTank” printer is $230:

    https://www.usa.canon.com/shop/p/megatank-pixma-g3290

    A pack of third-party ink refill bottles is $15, and will print (using Canon’s metrics) about 7,700 color pages and 9,000 black-and-white pages:

    https://www.amazon.com/Refill-Compatible-Bottles-MegaTank-4-Pack/dp/B0DSPSS5W7

    Compatible GI-21 Black Ink Bottle Up to 9,000 pages, GI-21 Cyan/Magenta/Yellow Ink Bottles Up to 7,700 pages

    On the other hand, Canon’s lowest-end “cartridge” printer, where they use the razor-and-blades model, is $55.

    https://www.usa.canon.com/shop/p/pixma-ts3720-wireless-home-all-in-one-printer

    But you rapidly pay for it with the ink; it looks like they presently sell a set of replacement cartridges for $91. And that set will print a tiny fraction of the number of pages that the above ink bottles will print.

    page yield of 400 Black / 400 Color pages per ink cartridge set and cost of $90.99 for a value pack of PG-285(XL) and CL-286(XL) ink cartridges (using Canon Online Store prices as of June 2025).
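
    To put numbers on it: $15 / 7,700 pages comes to roughly $0.002 per color page for the bottles, while $90.99 / 400 pages comes to roughly $0.23 per page for the cartridges — about a hundredfold difference in consumable cost, going by Canon’s own yield figures.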

    So if you really do want to do photo prints with an inkjet without dealing with all the DRM-on-ink stuff, you can do it today. But…you’re going to pay more for the printer.

    All that being said, I do think that lasers are awfully nice in that you don’t need to deal with nozzles clogging. You can leave a laser printer for years and it’ll just work when you start it up. If you don’t need photo output, it’s just less hassle.



  • I don’t know if there’s a term for them, but Bacula (and I think AMANDA might fall into this camp, though I haven’t looked at it in ages) is oriented more towards…“institutional” backup. Like, there’s a dedicated backup server, maybe dedicated offline media like tapes, the backup server needs to drive the backup, etc.

    There are some things that rsnapshot, rdiff-backup, duplicity, and so forth won’t do.

    • At least some of them (rdiff-backup, for one) won’t dedup files with different names. If a file is unchanged, it won’t use extra storage, but it won’t identify identical files at different locations. This usually isn’t all that important for a single host, other than maybe if you rename files, but if you’re backing up many different hosts, as in an institutional setting, they likely have files in common. They aren’t intended to back up multiple hosts to a single, shared repository.

    • Pull-only. I think that it might be possible to run some of the above three in “pull” mode, where the backup server connects and gets the backup, but where they don’t have the ability to write to the backup server. This may be desirable if you’re concerned about a host being compromised, but not the backup server, since it means that an attacker can’t go dick with your backups. Think of those cybercriminals who encrypt data at a company and wipe other copies and then demand a ransom for an unlock key. But the “institutional” backup systems are going to be aimed at having the backup server drive all this, and have the backup server have access to log into the individual hosts and pull the backups over.

    • Dedup for non-identical files. Note that restic can do this. While files might not be identical, they might share some common elements, and one might want to try to take advantage of that in backup storage.

    • rdiff-backup and rsnapshot don’t do encryption (though duplicity does). If one intends to use storage not under one’s physical control (e.g. “cloud backup”), this might be a concern.

    • No “full” backups. Some backup programs follow a scheme where one periodically does a backup that stores a full copy of the data, and then stores “incremental” backups against the last full backup. rsnapshot and rdiff-backup are always-incremental, and are aimed at storing their backups on a single destination filesystem (duplicity is the exception here — it does do full-plus-incremental chains). A split between “full” and “incremental” is probably something you want if you’re using, say, tape storage and having backups that span multiple tapes, since it controls how many pieces of media you have to dig up to perform a restore.

    • I don’t know how Bacula or AMANDA handle it, if at all, but if you have a DBMS like PostgreSQL or MySQL or the like, it may be constantly receiving writes. That means a naive file copy won’t be an atomic snapshot of the database, and an atomic snapshot is what you need to back up the storage reliably. I don’t know what the convention is here, but I’d guess either using filesystem-level atomic snapshot support (e.g. btrfs) or requiring the backup system to be aware of the DBMS and instructing it to suspend modification while it does the backup. rsnapshot, rdiff-backup, and duplicity aren’t going to do anything like that; a sketch of the usual workarounds follows this list.
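
    As a sketch of those workarounds (PostgreSQL here is just an example; the names and paths are made up):

    # dump the database so the backup tool sees one consistent copy
    pg_dump -U postgres mydb > /var/backups/mydb.sql
    # or, on a snapshotting filesystem, take a read-only atomic snapshot
    # and back up the snapshot rather than the live files
    btrfs subvolume snapshot -r /var/lib/postgresql /snapshots/pg-$(date +%F)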

    I’d agree that using the more-heavyweight, “institutional” backup programs can make sense for some use cases, like if you’re backing up many workstations or something.


  • Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot. So this can be a problem if you store many snapshots of many files.

    I think you may be thinking of rsnapshot, which has that behavior, rather than rdiff-backup; both use rsync.

    But I’m not sure why you’d be concerned about this behavior.

    Are you worried about inode exhaustion on the destination filesystem?
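
    If so, that’s easy to check — df can report inode usage rather than block usage (the path here is just an example):

    # inodes used/free on the filesystem holding the backups
    df -i /mnt/backup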


  • slow

    rsync is pretty fast, frankly. Once it’s run once, if you have -a or -t passed, it’ll synchronize mtimes. If the modification time and filesize matches, by default, rsync won’t look at a file further, so subsequent runs will be pretty fast. You can’t really beat that for speed unless you have some sort of monitoring system in place (like, filesystem-level support for identifying modifications).
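
    For instance (paths made up):

    # -a implies -t; mtimes get preserved, so later runs skip unchanged
    # files after the cheap mtime-plus-size comparison
    rsync -a /home/me/ /mnt/backup/home-me/
    # force a full content comparison instead of the quick check
    rsync -a --checksum /home/me/ /mnt/backup/home-me/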



  • sed can do a bunch of things, but I overwhelmingly use it for a single operation in a pipeline: the s// operation. I think that that’s worth knowing.

    sed 's/foo/bar/'  
    

    will replace the first piece of text in each line matching the regex “foo” with “bar”.

    That’ll already handle a lot of cases, but a few other helpful sub-uses:

    sed 's/foo/bar/g'  
    

    will replace all text matching the regex “foo” with “bar”, even if there is more than one match per line.

    sed 's/\([0-9a-f][0-9a-f]*\)/0x\1/g'  
    

    will take the text matched inside the backslash-escaped parens and put it back into the replacement text wherever one writes ‘\1’. In the above example, that’s finding all runs of hex digits and prefixing them with ‘0x’. (The doubled character class matters: a bare ‘[0-9a-f]*’ would also match the empty string, and would insert ‘0x’ at every position in the line.)

    If you want to match a literal “/”, the easiest way to do it is to just use a different separator; if you use something other than a “/” as separator after the “s”, sed will expect that later in the expression too, like this:

    sed 's%/%SLASH%g'  
    

    will replace all instances of a “/” in the text with “SLASH”.
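
    And since sed mostly earns its keep as a pipeline filter, a contrived end-to-end example using that alternate-separator trick:

    # strip the leading “./” that find prints on each path
    find . -type f | sed 's%^\./%%'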


  • I would generally argue that rsync is not a backup solution.

    Yeah, if you want to use rsync specifically for backups, you’re probably better-off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and drive it from something like backupninja, which will run the task periodically and notify you if it fails.

    • rsync: one-way synchronization

    • unison: bidirectional synchronization

    • git: synchronization of text files with good interactive merging.

    • rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.

    That doesn’t mean “don’t use rsync”. I mean, rsync’s a fine tool. It’s just…not really a backup program on its own.
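
    Since I mentioned restic — a minimal session looks something like this (the repository path here is made up):

    # create an encrypted, deduplicating repository, then snapshot a directory
    restic -r /mnt/backup/restic-repo init
    restic -r /mnt/backup/restic-repo backup /home/me
    # list snapshots and restore the most recent one
    restic -r /mnt/backup/restic-repo snapshots
    restic -r /mnt/backup/restic-repo restore latest --target /tmp/restore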



  • tal@olio.cafe to Selfhosted@lemmy.world: how do I find process that leads to oom?

    OOMs happen because your system is out of memory.

    You asked how to know which process is responsible. There is no correct answer to which process is “wrong” in using more memory — all one can say is that processes are in aggregate asking for too much memory. The kernel tries to “blame” a process and will kill it, as you’ve seen, to let your system continue to function, but ultimately, you may know better than it which is acting in a way you don’t want.

    It should log something to the kernel log when it OOM kills something.
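
    Something like either of these should turn that report up:

    # the OOM killer’s report, from the kernel ring buffer
    dmesg | grep -i -B2 -A10 'out of memory'
    # or, on systemd systems, from the journal
    journalctl -k -g oom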

    It may be that you simply don’t have enough memory to do what you want to do. You could take a glance in top (sort by memory usage with shift-M). You might be able to get by with adding more paging (swap) space. You can do this with a paging file if it’s problematic to create a paging partition.

    EDIT: I don’t know if there’s a way to get a dump of processes that are using memory at exactly the instant of the OOM, but if you want to get an idea of what memory usage looks like at that time, you can certainly do something like leave a top -o %MEM -b -d 2 >log.txt process running to get a snapshot every two seconds of process memory use. top will print a timestamp at the top of each entry, and between the timestamped OOM entry in the kernel log and the timestamped dump, you should be able to look at what’s using memory.

    There are also various other packages for logging resource usage that provide less information, but also don’t use so much space, if you want to view historical resource usage. sysstat is what I usually use, with the sar command to view logged data, though that’s very elderly. Things like that won’t dump a list of all processes, but they will let you know if, over a given period of time, a server is running low on available memory.
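
    With sysstat’s collector enabled, for example, historical memory figures are one sar invocation away (the daily log path varies by distro; this one is Debian-flavored):

    # memory utilization samples for today
    sar -r
    # or from an older daily log file
    sar -r -f /var/log/sysstat/sa05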


  • tal@olio.cafe to Technology@lemmy.world: 60hz Displays are a slideshow

    I’m the other way. I’d rather have battery life on cell phones, and turn the refresh rate down.

    On a desktop, where the power usage is basically irrelevant, then sure, I’ll crank the refresh rate way up. One of the most-immediately-noticeable things is the mouse pointer, and that doesn’t exist on touch interfaces.




  • Hmm.

    So for some software, you can just increase the price.

    But…I wonder what that will do to the cost of video games. Typically, those are closer to one-off releases, not packages where new releases exist and are regularly purchased or subscriptions are in place.

    I’d expect this to increase the cost of maintenance, if there are legal obligations on publishers to monitor, notify, and deploy security fixes for their software and upstream. You’d think that it might encourage vendors to EOL software sooner: pull it off Steam or the like, mark it as no longer supported.

    Maybe there are some exemptions somewhere that affect those.


  • tal@olio.cafe to linuxmemes@lemmy.world: We have POSIX at home

    What’s the big deal with POSIX? Why are ppl constantly discussing what is and isn’t posix compliant?

    The short version: it’s a least-common-denominator standard that spans multiple Unix and Unix-like systems, so if you write to it, your software can fairly-trivially run on various systems.

    https://en.wikipedia.org/wiki/POSIX

    Windows has some level of Microsoft-provided Posix support, which is what the post is alluding to. I am fairly confident that it doesn’t have full Posix compliance. Cygwin, a separate, non-Microsoft, open-source effort, might qualify.

    kagis

    Okay, apparently it does conform to a portion of the Posix standard:

    https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem

    The subsystem only implements the POSIX.1 standard – also known as IEEE Std 1003.1-1990 or ISO/IEC 9945-1:1990 – primarily covering the kernel and C library programming interfaces which allowed a program written for other POSIX.1-compliant operating systems to be compiled and run under Windows NT. The Windows NT POSIX subsystem did not provide the interactive user environment parts of POSIX, originally standardized as POSIX.2. That is, Windows NT did not provide a POSIX shell nor any Unix commands out of the box, except for pax. The NT POSIX subsystem also did not provide any of the POSIX extensions that postdated the creation of Windows NT 3.1, such as those for POSIX Threads or POSIX IPC.