You’ll have to look into GTK’s Layer Shell implementation.
Look at the source of Eww. It's written in Rust, uses GTK (or GDK?), and has a config option that opens the windows in the bottom layer.
I take my shitposts very seriously.


I think you can get some kind of exemption for archival purposes. I know that the Internet Archive has one. But I also know that ultimately Microsoft is responsible for the data hosted on GitHub, and Microsoft's interest is to not even risk getting sued.
Such as?
That tells me you don’t understand what a “stable” release branch is. The Debian maintainers do a lot of work to ensure that the packages not only work, but work well together. They don’t introduce breaking changes during the lifecycle of a major branch. They add feature updates between point releases, and continuously release security updates.
In the real world, that stability is a great value, especially in the server space. You’d be insane to use Arch as a production server, and I’m saying that as an Arch user.
Something, something, sword of Damocles.


At work, we use PiSignage for a large overhead screen. It’s based on Debian and uses a fullscreen Firefox running in the labwc compositor. The developer advertises a management server (cloud or self-hosted) to manage multiple connected devices, but it’s completely optional (superfluous in my opinion) and the standalone web UI is perfectly usable.


This is something the people of !Selfhosted@lemmy.world are better suited to answer.
In my personal opinion, for most home servers, the double redundancy of RAID 6 is more valuable than having a fully rebuilt array as soon as possible. If one member of a RAID 6 array fails, you’re still at an effective redundancy of a RAID 5. If one member of a RAID 5 array fails, you have zero redundancy until the hot spare is rebuilt.
If energy consumption is also a factor, it’s worth keeping in mind that a hot spare can be powered down by the controller until it is needed.
I’d personally go with RAID 6: 4 data + 2 distributed parity. That was the plan for my server too, but the motherboard only has four SATA ports and one had to be dedicated to the OS SSD.
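A quick back-of-the-envelope calculation shows why the trade-off favours RAID 6 here. The drive count and sizes below are hypothetical, just to illustrate the comparison:

```python
def usable_tb(active_drives: int, drive_tb: float, parity_drives: int) -> float:
    """Usable capacity of an array with distributed parity (RAID 5/6 style)."""
    return (active_drives - parity_drives) * drive_tb

# Hypothetical pool: six 4 TB drives in total.
raid6 = usable_tb(6, 4.0, 2)        # RAID 6: all six active, double parity
raid5_spare = usable_tb(5, 4.0, 1)  # RAID 5: five active + one hot spare

print(raid6, raid5_spare)  # 16.0 16.0
# Identical usable capacity, but the RAID 6 array tolerates two concurrent
# failures, while the RAID 5 array has zero redundancy during the window
# when the hot spare is still rebuilding.
```

Same usable space either way; the only thing the hot-spare layout buys you is a shorter rebuild window, at the cost of a period with no redundancy at all.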


My predecessor at work had a “backup scheme” where each week a full copy of important VMs’ virtual disks would be pulled by a backup VM. Two issues with that. One, the VMs were not powered off and nothing ensured that the disks were synced. Two, the backups were made onto the same physical host with no replication or high availability beyond RAID 1.


Benefit of my job: I get access to the scrap pile. I don’t know any reputable used/refurbished sellers.


You can absolutely use it without a reverse proxy. A proxy is just another fancy HTTP client that contacts the server on the original client’s behalf and forwards the response back to it, usually wrapped in HTTPS. A man in the middle that you trust.
All you have to do is expose the desired port(s) to all addresses:
    # ...
    ports:
      - "8080:8080"
…and, obviously, set the URL environment variables to localhost or whatever address the server uses.
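If it helps, here is a minimal sketch of what that trusted man in the middle actually does, using only Python's standard library. The port numbers and backend address are illustrative assumptions; a real deployment would use nginx, Caddy, or similar, and terminate HTTPS:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"  # assumed upstream server address

class ReverseProxy(BaseHTTPRequestHandler):
    """A fancy HTTP client: replays the request upstream, relays the reply."""

    def do_GET(self):
        # Contact the server on the original client's behalf...
        with urlopen(BACKEND + self.path) as upstream:
            status = upstream.status
            body = upstream.read()
        # ...and forward the response back to the client.
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def run(port: int = 8000) -> None:
    # Listens on the given port and proxies every GET to BACKEND.
    HTTPServer(("0.0.0.0", port), ReverseProxy).serve_forever()
```

This handles only GET and adds no TLS, caching, or header rewriting, which is exactly the kind of work a real proxy does for you; the point is just that nothing magical happens in between.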


I know what it is, and I ensure compliance at work (I’m a sysadmin). At home, it’s less about best practices and more about what hardware I can afford. Manufacturers tend not to offer regional discounts. A 2-2-0 scheme is better than nothing at all.


No backups. For important documents and photos, that was the backup and most of them have copies on my PC. The rest is easily replaced. I knew what I was getting into, and that the free, decommissioned hard drives with 20-30 thousand hours on them were a lit fuse.


I don’t know which feature you mean, can you link the documentation?


I used it for a while, and it’s a decent solution. Similar to Tailscale’s subnet router, but it always uses a relay and doesn’t do all the UDP black magic. I think it uses TCP to create the tunnel, which might introduce some network latency compared to Tailscale or bare Wireguard.


Right. I spent the last several hours trying to get a mixed batch of Win10, Win11, and Win10-upgraded-from-8 computers to talk to a printer and had just about enough of this argument. If you want a pissing match of who can be the biggest dick, take it to Twitter.
Locked.
Locking. The comment section is a perfect summary of why so many people don’t want to be associated with Linux users. I should’ve removed the post outright because it is inflammatory, reactionary, and invites toxicity – evidenced by the fact that the downvotes on dissenting comments are largely made by the same users. I wonder if a pattern might emerge.
There is a discussion to be had about the topic… but it went to exchanging insults and downvoting out of disagreement.
You must think you’re clever. Take a few days to think about what separates you (specifically) from the toxic “PC Master Race” evangelists, and maybe fornicate some greenery.
Chill it with the insults.
It definitely depends on your home instance. You’re on LBZ, and Ada is a helicopter parent who blocks, bans, and purges anything and anyone that may be upsetting to her children. So yes, you’re probably not exposed to the full picture on Lemmy.
Visit the comment threads on Phoronix and you’ll see WOKE and FASCIST and COMMUNIST and TANKIE and any number of insults thrown around like manure in a monkey cage. Or try to argue in favour of systemd in a high visibility thread and inevitably someone will say that it’s bloat, that it’s corporate trash, and recite “enshittification” like it’s some Pavlovian reflex.
X11 was released in 1987. The original X Window System was released in 1984. That is not just a few years of difference.
If you meant the X.org implementation, then compare it to compositors, not to the protocol.