

If you’re not indemnified, you might be found liable, but you’re not necessarily liable. It depends on the circumstances.
Headline is clickbait and is incorrect per the text of the article. It should read “Doctors not indemnified if AI transcriber mandated by NHS gets it wrong.”
You don’t have to finish the file to share it though; that’s a major part of BitTorrent. Each peer shares the parts of the files it has already partially downloaded. So Meta didn’t need to finish and share the whole file to have technically shared some parts of copyrighted works - unless they just had uploading completely disabled.
The argument was not that it didn’t matter if a user didn’t download the entirety of a work from Meta, but that it didn’t matter whether a user downloaded anything from Meta, regardless of whether Meta was a peer or seed at the time.
Theoretically, Meta could have disabled uploading but not blocked their client from signaling that it could upload. This would, according to that argument, still count as reproducing the works, under the logic that signaling that a work is available is the same as “making it available.”
but they still “reproduced” those works by vectorizing them into an LLM. If Gemini can reproduce a copyrighted work “from memory” then that still counts.
That’s irrelevant to the plaintiff’s argument. And beyond that, it would need to be proven on its own merits. This argument about torrenting wouldn’t be relevant if Llama were obviously a derivative creation that wasn’t subject to fair use protections.
It’s also irrelevant if Gemini can reproduce a work, as Meta did not create Gemini.
Does any Llama model reproduce the entirety of The Bedwetter by Sarah Silverman if you provide the first paragraph? Does it even get the first chapter? I highly doubt it.
By the same logic, almost any computer on the internet is guilty of copyright infringement. Proxy servers, VPNs, basically any computer that routed those packets temporarily had (or still has, for caches, logs, etc.) copies of that protected data.
There have been lawsuits against both ISPs and VPNs in recent years for being complicit in copyright infringement, but that’s a bit different. Generally speaking, there are laws, like the DMCA, that specifically limit the liability of network providers and network services, so long as they respect things like takedown notices.
I’d just like to interject for a moment. What you’re referring to as Alpine Linux is in fact Pine’s fork, Alpine / Pine Linux, or as I’ve taken to calling it, Pine’s Alpine plus Pine Linux. Pine Linux is an operating system unto itself, and Pine’s Alpine fork is another free component of a fully functioning Pine Linux system.
The energy consumption of a single AI exchange is roughly on par with a single Google search back in 2009. Source. Was using Google search in 2009 unethical?
Most anti-car people are in favor of improving public transit options.
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
On your router, forward port 443 to the secure port your Pi exposes (which, for simplicity’s sake, should also be port 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its verification.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port from the router. You’ll have local unencrypted transfers if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you’re exposed to any Jellyfin or Traefik vulnerability that gets discovered, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up with Traefik, Jellyfin, and a dynamic DNS updater as docker-compose services.
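The shape of it is something like this - a trimmed-down sketch rather than a drop-in config; the domain, email, paths, and the particular dynamic DNS image are placeholders you’d swap for your own:

```yaml
# Sketch: Traefik terminates TLS with a Let's Encrypt cert and proxies a
# subdomain to Jellyfin; a DDNS container keeps the DNS record pointed at
# your home IP. All names, paths, and credentials below are placeholders.
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"    # forwarded from the router for the ACME HTTP challenge
      - "443:443"  # forwarded from the router for HTTPS
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./jellyfin/config:/config
      - /path/to/media:/media:ro
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096

  ddns:
    # Any dynamic DNS updater container works here (Cloudflare, DuckDNS, etc.);
    # variable names are illustrative - use whatever your chosen updater's docs specify.
    image: favonia/cloudflare-ddns
    environment:
      - CLOUDFLARE_API_TOKEN=your-api-token
      - DOMAINS=jellyfin.example.com
```

Traefik handles the Let’s Encrypt certs itself via the HTTP challenge, which is why port 80 needs to be forwarded as mentioned above.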
Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you store them at smaller bit widths, you use less space and lose some precision while keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, then you need roughly half the parameter count (in billions) as GB of VRAM. Q4_K_M is better than plain Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama 3.3 70B has 70.6 billion parameters, which works out to roughly the following sizes for some of its quantizations:
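(A back-of-the-envelope sketch; the bits-per-weight figures below are approximate effective averages for llama.cpp’s quant types, since K-quants mix bit widths across tensors, so real GGUF file sizes will differ a bit.)

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate averages, not exact spec numbers.

PARAMS = 70.6e9  # Llama 3.3 70B

BITS_PER_WEIGHT = {
    "fp16": 16.0,
    "Q8_0": 8.5,
    "Q6_K": 6.6,
    "Q5_K_M": 5.7,
    "Q5_K_S": 5.5,
    "Q4_K_M": 4.8,
    "Q4_0": 4.5,
}

for quant, bpw in BITS_PER_WEIGHT.items():
    gigabytes = PARAMS * bpw / 8 / 1e9
    print(f"{quant:>7}: ~{gigabytes:.0f} GB")
```

Whatever you load also needs a couple of GB of headroom on top of that for the context/KV cache.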
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization down to Q6_K (though I’ve heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and then Q4, but the model’s still decent. Below 4-bit quantizations you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.
16 GB of RAM, though? Is it even optimized for the Ryzen 9950X3D?
And a 4 TB SSD - not even necessarily NVME?
Doesn’t seem high powered to me.
Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?
Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only in the same location in every Docker container that had symlinks to files on it.
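Something along these lines, for example - just a sketch with illustrative paths and images; the point is that both containers see the torrent directory at the same absolute path:

```yaml
# Sketch: the torrent client writes to /data/torrents; Jellyfin gets the same
# directory mounted read-only at the same path, plus a library of symlinks.
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    volumes:
      - /data/torrents:/data/torrents        # downloads land here

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /data/torrents:/data/torrents:ro     # same path, read-only
      - /data/library:/data/library          # symlinks pointing into /data/torrents
```

Symlinks created under /data/library that point into /data/torrents would then resolve the same way inside the Jellyfin container as they do on the host.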
Depending on setup this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that. My local clients use the local static IP.
If my internet goes down, my mobile clients can’t connect, even on the LAN.
On the other hand it is a conduit for censorship. If an admin doesn’t like what you post on another instance, then they can censor you everywhere.
Such a user can
Further, “Whether another user actually downloaded the content that Meta made available” through torrenting “is irrelevant,” the authors alleged. “Meta ‘reproduced’ the works as soon as it made them available to other peers.”
Is there existing case law for what making something “available” means? If I say “Alright, I’ll send you this book if you want, just ask,” have I made it available? What if, when someone asks, I don’t actually send them anything?
I’m thinking outside of contexts of piracy and torrenting, to be clear - like if a software license requires you to make any changed versions available to anyone who uses the software. Can you say it’s available if your distribution platform is configured to prevent downloads?
If not, then why would it be any different when torrenting?
Meta ‘reproduced’ the works as soon as it made them available to other peers.
The argument that a copyrighted work has been “reproduced” the moment it’s “made available” is also perplexing, given how low a bar “made available” is here. If I post an ad on Craigslist for the sale of the Mona Lisa, have I reproduced it?
What if it was for a car?
I’m selling a brand new 2026 Alfa Romeo 4E, DM me your offers. I’ve now “reproduced” a car - come at me, MPAA.
I downgraded Firefox once last year, but after the next major version a couple weeks later, I was able to upgrade again. Never had to downgrade it before that, though.
I forget why, though. I think it was a pretty niche issue.
Ah, I assumed there were some areas where Firefox had been found lacking relative to Chromium browsers.
For me, the current version of any major browser or fork that gets consistent security updates and can run the full version of uBlock Origin is sufficiently secure for my threat model. Given that, and that they all offer the feature set I want, wouldn’t it be reasonable to avoid Chromium browsers because I don’t want to encourage the Chromium monopoly?
That’s only a small fraction of why I use Firefox, to be fair, but suppose for argument’s sake that I don’t care about MV3 extensions, Firefox Containers, etc. Would it be so wrong for not wanting there to be a Chromium monopoly to be the reason I chose Firefox or one of its forks?
Use the better, more secure technology
More secure according to what?
This is an interesting parallel, but I feel like I missed some key part of it.
In the US, at least, we historically killed off a lot of deer’s natural predators - mostly wolves - and as a result, the deer population can get out of control, causing serious problems to the ecosystem. Hunters help to remedy that. The relatively small violences that they perform on an individual basis add up to improving the overall ecosystem.
That isn’t the same as being a bigot, or a sexist, or a fascist… and I don’t know why anyone would assume that a person holds those views because they’re mean and petty. They hold those views for a variety of reasons - sometimes because they’re a child or barely an adult and that’s just what they learned, and they either don’t know any better or haven’t cared enough to think it through; sometimes because they’ve been conditioned to think that way; sometimes because they’re sociopaths who recognize that it’s easier to oppress that particular group.
It doesn’t really matter what their reason is. Either way, they’re a worse person because of it, and often they’re overall a bad person, regardless of the rest of their views, actions, and contributions.
Being a hunter, by contrast, is neutral leaning positive.
It makes sense that a rational person who loves being in nature, who loves animals, who wants their local ecosystem to be successful, would as a result want to help out in some small way, even if that means they have to kill an animal to do so. It doesn’t make sense that a rational person who loves all people, who wants their local communities to be successful, would as a result want to oppress and harm the people in already marginalized groups.
I don’t think equating being bigoted with holding unjustifiable opinions does it justice. The way we use the word “opinion” generally applies to things that are trivial or unimportant, that don’t ultimately matter, e.g., likes and dislikes. Being a bigot is a viewpoint; it shapes you. For many bigots, their entire perspective is warped and wrong. And there’s a common misunderstanding that you can’t argue with someone’s opinions, because it’s just how they “feel.” But being a bigot, whether you’re sexist, racist, transphobic, queerphobic, homophobic, biphobic, etc., is a belief, and it’s one that, in most cases, the bigot chooses (consciously or not) to keep believing.
If an adult with functioning cognitive abilities refuses to question their bigoted beliefs, then they’ve made a choice to be a bigot.