Yeah, it seems the sensor costs as much as a decent used camera.
The earlier parts of this lecture by Irving Finkel cover what happened when the older, original flood story was first translated from clay tablets in 1872. The rest of the lecture is a nice adventure story, so I can only recommend watching the whole thing.
I was wondering if your tool was displaying cache as usage, but I guess not. Not sure what you have running that’s consuming that much.
I mentioned this in another comment, but I’m currently running a simulation of a whole Proxmox cluster, with nodes, storage servers, switches, and even a Windows client machine active. I’m running all of that on GNOME with Firefox and Discord open, and this is my usage:
```
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            46Gi        16Gi       9.1Gi       168Mi        22Gi        30Gi
Swap:          3.8Gi          0B       3.8Gi
```
Of course Discord is inside Firefox, so that helps, but still…
What does `free -h` say?
About 6 months ago I upgraded my desktop from 16 to 48 gigs, because there were a few times I felt like I needed a bigger tmpfs.
Anyway, the other day I set up a simulation of this cluster I’m configuring, and just kept piling on virtual machines without looking, because I knew I had all the RAM I could need for them. Eventually I got curious and checked my usage: I had only just reached 16 gigs.
I think basically the only time I use more than the 16 gigs I used to have is when I fire up my GPU-passthrough Windows VM that I use for games, which isn’t your typical usage.
I remember people being upset by the ribbon back when Office 2007 was released. Their complaints made sense until I sat down and used it, and found it to be a great improvement. I switched my LibreOffice to the ribbon layout as soon as they added it. Because I don’t use LibreOffice often, the ribbon makes it much easier to find things than digging through the menus.
Another nice thing about the LO implementation is that they added a couple of variants of the design, like the compact one, which pushes things closer together so it’s not distracting.
Yeah, it’s the equivalent of finding two dollars on the ground and getting excited because at this rate you’ll be a billionaire soon enough. There’s less than 2 g of plastic in an SD card; the buttons on your shirt probably weigh more.
Quite literally the first paragraph of the article:
According to Soviet records 381,067 German Wehrmacht POWs died in NKVD camps (356,700 German nationals and 24,367 from other nations).
Or in more detail lower down in the section titled Soviet statistics:
According to Russian historian Grigori F. Krivosheev, Soviet NKVD figures list 2,733,739 German “Wehrmacht” POWs (Военнопленные из войск вермахта) taken with 381,067 having died in captivity.
I’m not sure which German you expect to have conducted this research other than a former Nazi, seeing as basically everyone left was a former Nazi. And I don’t see how you can just dismiss the government report as Nazi lies when even the Soviet records show over 350 thousand dead.
Games are already horifically inefficient
That’s so far from the truth that it hurts to read. Games are among the most optimised programs you can run on your computer. Just think about it: a game is an application rendering an entire imaginary world every dozen milliseconds. Compare that to almost anything else you run, say Slack or Teams, which make your CPU sweat just to notify you about a new message.
With 30% ownership it could have been at the forefront of generative AI, which OpenAI released to the world in 2022.
Do they think OpenAI invented the concept of generative AI? Because that’s what their statement implies.
It’s not that uncharacteristic. Mono is a fully open-source project they didn’t create, didn’t really work on, and can’t extract any value from. So this is basically a gesture that costs them nothing, but it also doesn’t do much except generate a headline.
KHTML was licensed under the LGPL.
Some editors can embed Neovim, for example vscode-neovim. Not sure how well that works, though, as I’ve never tried it.
Well, personally, if a package is not in the AUR, I first check if there’s an AppImage available, or a Flatpak. If neither exists, I generally make a package for myself.
It sounds intimidating, but for most software the package description is just going to be a single file of maybe 10-15 lines. It’s a useful skill to learn, and there are lots of tutorials explaining how to get into it, with the Arch wiki serving as documentation. Not to mention, every AUR or Arch package can be looked at as an example: just click the “View PKGBUILD” link on the side of the package page. You can even download an existing package with git clone and just change some bits.
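For a sense of scale, here’s a minimal sketch of what such a file can look like for a typical make-based project; the package name, URL, and source are placeholders, not a real package:

```
# Minimal PKGBUILD sketch; pkgname, url and source are
# placeholders for illustration.
pkgname=sometool
pkgver=1.0.0
pkgrel=1
pkgdesc="An example tool"
arch=('x86_64')
url="https://example.com/sometool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')  # put the real checksum here

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  # install into the staging directory, never straight into /
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` next to that file builds the package and installs it through pacman, so it’s tracked like everything else and can be removed cleanly with `pacman -R`.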
Alternatively, you can just build it locally and use it like that, i.e. just run make without the install step.
The AUR and pacman are 90% of why I use Arch.
Also, FYI to OP: never install software system-wide without your package manager. No `sudo make install`, no `curl .. | sudo bash`, or whatever the readme calls for. Not because it’s unsafe, but because eventually you’re likely to end up with a broken system, and then you’ll blame your distro for it, or just Linux in general.
My desktop install is about a decade old now, and it has never broken, because I only ever use the package manager.
Of course in your home folder anything goes.
I think they meant you don’t know what the binary is called because it doesn’t match the package name. In such cases I usually list the package files to see what it put in `/usr/bin`.
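With pacman that’s a quick check (imagemagick here is just an example of a package whose binaries don’t match its name):

```
# list the files owned by a package, filtered to its binaries
$ pacman -Ql imagemagick | grep /usr/bin/
```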
But check that it has all the features you need, because it lags behind Gitea in some aspects (like CI).
Podman, not because of security, but because of quadlets (systemd integration). They make setting up and managing container services a breeze.
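For anyone who hasn’t seen one: a quadlet is just an ini-style unit file that Podman’s systemd generator turns into a regular service. A minimal sketch, with an arbitrary image and port as the example:

```
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` it behaves like any other unit: `systemctl --user start web.service`, logs in journalctl, and the [Install] section makes it start on login.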