Oh that would also make sense, yeah
I suspect that’s deliberate to make someone that speaks English and doesn’t know German still get the correct impression of what it actually sounds like, rather than get the spelling right
it means you’re getting fucked by them and not in a good way
So anal sex is a not-good way to have sex? Yeah sorry but that does sound pretty homophobic to me.
without lube
Ah, well that changes things. Anal without lube is a pretty universally bad experience, so sure, use that. But just framing being on the receiving end of anal as bad without further context, we can do better than that, that’s all I’m saying
And Romans can’t be homophobic for some reason, or what’s your point?
Maybe we don’t need to resort to casual homophobia though to criticize corporates
This is not at all relevant to the comment you’re responding to. Your choice of password manager doesn’t change that whatever system you’re authenticating against still needs to have at least a hash of your password. That’s what passkeys are improving on here
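To illustrate what I mean, here’s a rough sketch of why the server side needs at least a hash no matter what manager you use. All names are invented and PBKDF2 is just one example scheme, not necessarily what any given site uses:

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    """The server stores only a salt and a hash, never the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """On login, the server re-derives the hash and compares in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, stored = register("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```

Your password manager only changes what you send; the server still has to hold something derived from your password to check it against. Passkeys remove that shared secret entirely.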
I’m German, and I would not want that. German grammar works differently in a way that makes programming a lot more awkward for some reason. Things like, “.forEach” would technically need three different spellings depending on the grammatical gender of the type of element that’s in the collection it’s called on. Of course you could just go with neuter and say it refers to the “items” in the collection, but that’s just one of lots of small pieces of awkwardness that get stacked on top of each other when you try to translate languages and APIs. I really appreciate how much more straightforward that works with English.
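To make the “three spellings” point concrete, here’s a toy sketch. The German names are entirely made up for illustration; no real API works like this:

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def fuer_jeden(items: Iterable[T], f: Callable[[T], None]) -> None:
    """'for each' with a masculine element type, e.g. der Baum (the tree)."""
    for item in items:
        f(item)

# Grammatically, the same operation would want a differently gendered name
# depending on the element type - three spellings for one concept:
fuer_jede = fuer_jeden   # feminine: die Zahl (the number)
fuer_jedes = fuer_jeden  # neuter:   das Wort (the word)

fuer_jedes(["Haus", "Auto"], print)  # prints Haus, then Auto
```

English sidesteps all of this with a single `forEach`.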
You need both ends of the cable connected, so the phone is out. And even on PC, I’m not sure if it would work with the USB drivers in-between the software and the actual ports
Reading the article, it seems like it will actually be opt-in for everyone
Fortran is Proto-Indo-Germanic or whatever it’s called again
The algorithm is actually tailored to find out if/when you fall asleep while watching videos, and then recommends longer videos in autoplay when it believes you are, because they’ll get to play you more ads and cash out more.
You might be misremembering / misinterpreting a little there. This behavior is not intentional, it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary, if you get loads of short videos you’ll have way more opportunities to see pre-roll ads, but with longer videos, you’re limited to just the mid-roll spots in that video. So YouTube doesn’t really have an incentive to make it work like that, it’s just accidental.
Here’s the Spiffing Brit video on this, which is where I think you might have gotten the idea from: https://youtu.be/8iOjeb5DTZI
Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats no matter how effective they actually are. I’m just saying that the example you’ve brought up isn’t really one of them.
Oh awesome, thank you so much!
I’d love to know what font was used for the big “Saturday” there!
Now please explain to me how C works.
That’s not what they’re asking. It’s not about how C works, it’s about how specific APIs written in C work, which is hard to figure out on your own for anyone who isn’t familiar with that specific code. You’ll have to explain that to any developer new to the project who’s expected to work with those APIs, no matter their experience with C.
It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing looking gibberish is what it always does, that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset, they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real world concepts, which suggests a deeper kind of understanding than what the “stochastic parrot” moniker wants you to believe.
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next, that’s all they do really. And in doing so, on a higher, emergent level they can make any kind of decision that you ask them to, the only question is how good those decisions are going to be, which in turn entirely depends on the training data, how good the model is, and how good your prompt is.
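A toy sketch of the one low-level “decision” a language model actually makes: sampling the next token from a probability distribution. The bigram table here is invented and laughably tiny, but the loop has the same shape as real decoding:

```python
import random

# Invented toy "model": next-token probabilities given the previous token.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.3, "end": 0.1},
    "cat": {"sat": 0.7, "end": 0.3},
    "dog": {"ran": 0.8, "end": 0.2},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(start: str, rng: random.Random) -> list[str]:
    tokens = [start]
    while tokens[-1] != "end":
        dist = BIGRAMS[tokens[-1]]
        # The "decision": sample one next token according to the distribution.
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)
    return tokens

print(generate("the", random.Random(0)))
```

Everything higher-level (answering questions, writing code, “deciding” between options you give it) emerges from chaining this one step over and over.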
That kind of window has been around for a long time already. Also, let me introduce you to window awnings
No, nutomic started by ignoring the actual question that was asked and instead starting an ideological argument with a bad faith question. You might say that OP shouldn’t have taken the bait, but this was not started by them
It still protects you from your passwords being compromised in any way except through a compromise of the password manager itself. Yes, it’s worse than keeping them separate, but it’s also still much better than not having 2FA at all.
Not really. Timezones, at their core (so without DST or any other special rules), are just a constant offset that you can very easily translate back and forth between, that’s trivial as long as you remember to do it. Having lots of them doesn’t really make anything harder, as long as you can look them up somewhere. DST, leap seconds, etc., make shit complicated, because they bend, break, or overlap a single timeline to the point where suddenly you have points in time that happen twice, or that never happen, or where time runs faster or slower for a bit. That is incredibly hard to deal with consistently, much more so than just switching a simple offset you’re operating within.
One country cozying up to Putin is hardly a reason to call the entire EU divided