I noticed the lag, but I just figured it was intentionally put there against Firefox. Still don't see ads, so I'm happy with whatever.
It’s always:
fuck YouTube, use PeerTube
…till YouTube goes down. Then the truth crawls out of the weeds.
Sorry, guys, my bad.
I was vibe coding “Make a user interface worse than YouTube” and the agent misunderstood and escaped its sandbox.
The joys of vibe coding. As long as this doesn’t cost them a ton of money, they will keep vibe coding. Just look at the state of Windows.
It’s all intentional to get you to be so frustrated with your current browser and switch to Google Chrome.
All of this for their UI to look the worst it ever has. Brilliant.
I was having massive slowdown, cleared my history on youtube and right as rain now.
The bug is in the web browsers, because no random webpage should be able to do that.
Not really, my browser should happily comply if a webpage I want to use needs 7 GB.
No way YouTube needs 7 GB.
What reason would a single website have for requiring 7GB, and why should it be permitted?
I like that the explanation sounds like nobody is able to review the code and find/fix the bug.
Either someone’s on vacation, or Gemini is still trying to vibe-code a solution.
The fact this thing has apparently been happening for a week or more is weird. It’s front-end web dev 101 type stuff, and the fix is simple enough that a junior should be able to take care of it. So I’m leaning towards it being an AI-“assisted” bug that none of the vibe coders know how to fix. OR, what’s more concerning to me, they no longer have any qualified front-end devs on staff to fix it. You can literally fix this thing via your browser’s dev tools right now if you wanted to.
But yeah, if you start looking at the YouTube front end you’ll quickly see it’s all pretty much vibe coded now. Another clear-as-day tell is the subscribed Shorts section: it’s been broken for months and the fix is easy, they just don’t know how to do it.
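For what it’s worth, the kind of “dev tools fixable” front-end leak being described usually boils down to something like a listener or observer attached on every render and never cleaned up. This is a hypothetical sketch of that pattern, not YouTube’s actual code; `MockTarget` stands in for a DOM `EventTarget` so it runs outside a browser:

```javascript
// Minimal mock of a DOM EventTarget so the pattern is runnable anywhere.
class MockTarget {
  constructor() { this.listeners = []; }
  addEventListener(type, fn) { this.listeners.push([type, fn]); }
  removeEventListener(type, fn) {
    this.listeners = this.listeners.filter(([t, f]) => !(t === type && f === fn));
  }
}

function onResize() { /* recompute some layout */ }

function renderLeaky(target) {
  // BUG: every render stacks another copy of the same listener,
  // and everything each closure holds stays alive in RAM.
  target.addEventListener('resize', onResize);
}

function renderFixed(target) {
  // Fix: drop the previous listener before re-adding it.
  target.removeEventListener('resize', onResize);
  target.addEventListener('resize', onResize);
}

const leakyTarget = new MockTarget();
const fixedTarget = new MockTarget();
for (let i = 0; i < 1000; i++) {
  renderLeaky(leakyTarget);
  renderFixed(fixedTarget);
}
console.log(leakyTarget.listeners.length, fixedTarget.listeners.length); // 1000 1
```

After a thousand “renders” the leaky version holds a thousand listeners while the fixed one holds one, which is roughly why memory climbs the longer a tab stays open.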
If I were forced to pick, I’d say this was caused by an inexperienced programmer using AI.
It’s never just the developer that’s the problem. There should be a system in place to catch obvious bugs like this before they get anywhere near production. So there’s something not right in the company’s review and testing practices. And a bug like this, if it does sneak through, should be fixed very quickly. This has been up for a while so again there’s a problem with the company’s processes.
I’ve met a lot of really shitty developers.
Everyone’s shit sometimes (and some people often), which is why you always need non-shitty processes to preempt and/or catch the mistakes.
Everyone is shit sometimes, but some developers are shit all the time. Like, surprisingly so.
Ah I’ve been noticing some weird youtube behavior lately. This probably explains it.
Crazy that something like this gets past the eyes of a company as large as Google.
LLMs are a hell of a drug, my friend.
firefox freezes
kill PID with the highest CPU use
youtube tabs all crash
reload youtube video(s)
repeat in a couple of days

You can use about:processes to do it all in-browser before it freezes and YT gets slow. It also shows you per-tab RAM usage.
Oh no
anyway
Hmm maybe that’s why my chromecast has been crashing with YouTube (SmartTube) the past 1-2 days?
It’s an interface bug, though. SmartTube uses its own interface, so why would it even be affected by the YouTube web interface?
Does a thing like crowd-sourcing RAM work? Is it a thing? These would probably be the symptoms though, yeah?
I guess I should have looked it up. https://en.wikipedia.org/wiki/List_of_volunteer_computing_projects
RAM’s main advantage over HDDs/SSDs is fast access times.
Needing to fetch anything over the internet would make it slower than just using a local HDD.
You’re telling me I shouldn’t keep my swap file in the cloud?
Theoretically, you could do whatever processing you need using the user’s CPU and RAM and then send the result back over the Internet. Not saying that’s what’s happening, of course, but it’s not completely ridiculous.
That’s what distributed computing is, after all. Like Folding@Home.
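The Folding@Home-style pattern above can be sketched in a few lines: a coordinator splits a job into work units, “volunteers” compute them locally with their own CPU and RAM, and only the small results travel back over the network. This is a toy simulation under those assumptions (function names are illustrative; real projects add verification, redundancy, and scheduling):

```javascript
// Coordinator: split the job into independently computable work units.
function makeWorkUnits(data, chunkSize) {
  const units = [];
  for (let i = 0; i < data.length; i += chunkSize) {
    units.push(data.slice(i, i + chunkSize));
  }
  return units;
}

// Each volunteer runs this locally; only the scalar result is sent back.
function volunteerCompute(unit) {
  return unit.reduce((acc, x) => acc + x * x, 0); // e.g. a sum of squares
}

// Combine the partial results on the coordinator.
const data = Array.from({ length: 1000 }, (_, i) => i + 1);
const results = makeWorkUnits(data, 100).map(volunteerCompute);
const total = results.reduce((a, b) => a + b, 0);
console.log(total); // 333833500, the sum of squares 1..1000
```

The key property is that each unit is cheap to ship and independent to compute, which is exactly what a remote-RAM scheme would lack.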
Does GeForce Now support Chrome or Firefox?
But isn’t this idea just kind of a reverse cloud? Since running AI is so expensive, they could “borrow” other people’s RAM. Just an idea.
Sounds like you’re looking for zebras when horses are a much simpler explanation.
Conceptually, what you’re describing is feasible; there are lots of distributed computing projects that borrow compute/space/bandwidth for their own ends. But it’s unlikely to have any practical use here.
If there were a distributed system that could be used as memory in a large virtual inferencing machine, it would be incredibly slow. The model would be stored across a large number of different computers which would all have to coordinate. Each step of inferencing would be orders of magnitude slower, because the latency between two different computers is orders of magnitude higher than the latency between a GPU and physical RAM.
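Back-of-envelope numbers make the “orders of magnitude” concrete. The latency figures below are typical ballpark values, not measurements:

```javascript
// Rough latencies, in nanoseconds (ballpark assumptions, not benchmarks).
const latencyNs = {
  gpuToLocalRam: 100,            // ~100 ns for an on-board memory access
  internetRoundTrip: 50_000_000, // ~50 ms round trip to a volunteer's machine
};

const slowdown = latencyNs.internetRoundTrip / latencyNs.gpuToLocalRam;
console.log(slowdown); // 500000: each remote "memory access" costs about half a million local ones
```

So even before bandwidth or coordination overhead, every remote access costs roughly five to six orders of magnitude more time than a local one.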
On the other hand, even if we assume inferencing is feasible in reasonable time through some technique that isn’t public… a model large enough to take advantage of an Internet’s worth of memory doesn’t exist.
So, assuming you had access to the largest model we know of, you would have a model as complex as Claude Opus, but it would take hours or days to finish inferencing, and the quality would be about the same as you could get in under a second for $20/mo.
And, going with a hypothetical “Internet-scale” model:
First, it would have to be trained, which would take an incredibly long time. Some of these frontier models take months to train on the fastest hardware available; a larger model would take even longer to train due to the increased latency.
More importantly, there are strong diminishing returns on capability vs model size. This is why the AI companies are focusing on agentic tasks, where the AI spends a lot of time talking to itself and using tools, rather than pushing for a model with more parameters. This is referred to as the “scaling wall” (though AI companies, for obvious reasons, deny that such a thing exists, just like tobacco companies said there’s no cancer risk for smokers).
It’s a neat idea (Skynet may be loose on the world, hiding out as widespread ‘bugs’ that happen to consume a lot of resources and compute), but it would require a lot of things to be magic’d into existence to be remotely practical.
You may find this funny: https://youtu.be/JcJSW7Rprio
Does a thing like crowd-sourcing ram work?
No.
Is it a thing?
No.
This would probably be the symptoms though, yeah?
No.
You seem very confused about what RAM is and what’s happening here. You seem to think that RAM is something you make on your computer. It’s a physical part of your computer that you load information into.
Imagine you’re sitting at a desk in an office. The desk has little shelves where you can put documents you’re working on. You can only put a small number of files there. The office has filing cabinets where other files are kept that you’re not working on. You can store a lot in there, but it takes time to go find it. You also have some special filing cabinets that are still slow, but you only use them to store files temporarily that someone brings you from another office, or when you run out of space on your desk but still need to keep files handy.
In this analogy, the shelves on the desk are RAM. You only put the stuff you’re immediately working on in those shelves because of the limited space, but it’s really fast to find stuff compared to the filing cabinets, which are your hard drive. When you go on a website, like YouTube, you’re calling someone in an office in another building and asking them for some files. They send over a bunch of files, which takes a really long time. You put as much as possible in your desk shelves to use right now, but anything that doesn’t fit you put in one of those special filing cabinets, which we’ll call the cache. The cache is slow, but not nearly as slow as waiting for the files to come from the other office. When you’re ready for the extra files from YouTube, you just grab them from the cache.
What’s happening in this problem with YouTube is that you request the files from them, and they send them over along with instructions on how to use them. The instructions say something that requires putting a bunch of things in RAM. At first this is normal. But at some point the instructions start repeating and tell you to put more and more files into RAM, maybe even repeats of files you already have there and shouldn’t need again. But you just follow instructions; that’s your job. So you keep loading things into RAM until there’s no room left, your system falls apart, and you can no longer do any work. Until you close YouTube and chuck all the YouTube files out of RAM.
Hopefully that makes it clear why you can’t outsource RAM. Essentially you would be putting your little desk shelves in a different office, but we already have a better solution than that: the cache or special local filing cabinet on your hard drive.
What we outsource normally is the hard drive (filing cabinets) and call it cloud storage (for example), and the creation and processing of information (done by the CPU, GPU, or other chips on your computer) and call it cloud computing (for example). That’s because those things are slow, and the extra time to move the files between offices isn’t necessarily the bottleneck.