

Looking into it. I’m happy with ALVR, but I didn’t see any reason not to check out alternatives :)
Yeah, Nvidia and Linux drivers are a bad combo. I’ve been using GeForce pretty much since the first one, because the ATI/AMD drivers were notoriously not great. Then I switched to Linux and surprise! Sometimes an update is so bad the entire OS stutters. Needless to say, I have an AMD one now… (and can vouch for ALVR! Not as polished as Virtual Desktop, but works great)
I don’t know how the various options here work, BUT you might also appreciate them too https://libredirect.github.io/index.html (this is where I found the other link)
Seems a good time to drop this here https://breezewiki.com/
That, or it’ll stay because they don’t see why other companies should use their tech to harvest data for themselves, while Google gets nothing.
This reminds me that it’s a new month, and time for a backup. Thanks!
Honestly? Research. It might not make sense here and now, but it might help something more meaningful down the line.
Or vr porn lmao
That’s an incredibly fun game. I had the Oculus Go version, the one for the Quest 2 was different and I bought it again. Now it’s getting on Steam? Maybe I’ll wait for a sale as it’d be the third version I buy, but….
Had access to cli, restarted HA and quickly disabled the Alexa integration: so far everything is working as intended :)
Similarly unfortunate situation for me, using the backup didn’t really help. But I DO have the Alexa integration, I guess next time I get HA between reboots I’ll disable that.
I think on my system it’s causing reboots. Not fun.
Fortunately my thermometers don’t do that, because they are a good choice, Zigbee wise. Always on the lookout for replacements, if the need arises…
The bloody morons… why do they say 16 tops if it can do better? It’s not like they don’t have access to 16gb sticks to test two of them! Like, I get it when it’s “this supports up to” and that’s the largest stick available at launch, but this is just stupid. Thanks for correcting me!
> super easy to upgrade to 32/48gb
Not on an N95/97/100 as they support max 16… https://ark.intel.com/content/www/us/en/ark/products/231803/intel-processor-n100-6m-cache-up-to-3-40-ghz.html so they can be repaired, but not upgraded.
Sounds about right. But a multimodal one? Ehh… sticking with Meta, their smallest LLaMa is a 7b, and as such, without any multimodal features, it’s already going to use most of the Quest’s 8gb, and it would be slow enough that people wouldn’t like it.

Going smaller is fun. For example, in the app I linked I like to use a 1.6b model: it’s virtually useless, but it sure can summarize text. And to be fair, there are multimodal ones that could run on the Quest (not fast), but going small means lower quality. For example, the best one I can run on my phone takes… maybe 20 seconds? To generate this description: “The image shows three high-performance sports cars racing on a track. The first car is a white Lamborghini, the second car is a red Ferrari, and the third car is a white Bugatti. The cars are positioned in a straight line, with the white Lamborghini in the lead, followed by the red Ferrari, and then the white Bugatti. The background is blurred, but it appears to be a race track, indicating that the cars are racing on a track.”

It’s not bad, but I’m not sure I’d call it trustworthy :D
If you happen to have an iPhone and want to get a sense of how difficult LLMs are to run on a mobile device, there’s a free app https://apps.apple.com/app/id6497060890 that lets you do just that. If your device has at least 8gb of memory it will even offer the most basic version of LLaMa (text only there), and since everything is done on device, you can even try it in airplane mode. Less meaningful would be running one on a computer; for that I suggest https://lmstudio.ai/, which is very easy to use.
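To put rough numbers on why 8gb is the floor: the weights alone dominate memory. A back-of-the-envelope sketch (parameter counts and quantization widths are typical ones; real runtimes add overhead for the KV cache, activations, and the OS, so treat these as lower bounds):

```python
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights alone, in gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7b model in fp16 doesn't fit an 8gb phone at all:
print(weights_gb(7, 16))            # prints 14.0
# 4-bit quantization is what makes it (barely) feasible:
print(weights_gb(7, 4))             # prints 3.5
# A 1.6b model at 4 bits is tiny by comparison:
print(round(weights_gb(1.6, 4), 2)) # prints 0.8
```

Which is why a quantized 7b squeaks onto an 8gb device with little room left for anything else, while the 1.6b runs comfortably.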
Running an LLM locally is quite the task, and the Quest 3 is way underspecced to do what’s being added. It’ll be the usual: send a picture with a request, have the servers process it, then receive a response… so while I have no doubt they’ll use all the data they receive, the device itself isn’t going to do anything besides sending pictures.
Very, very tempting.
The wireless Reolinks are not great. Nothing against the camera itself, but the WiFi antenna must be smaller than the one in my watch: the range is truly abysmal.
The Tapo ones are perfectly fine, good quality all around, but it’s important to note that my experience is with external, solar-powered Reolinks and indoor Tapos, so it’s a bit of a different category.
I have Zigbee stuff. Here’s what I like about it:

- Mains-powered devices (as opposed to battery-operated ones) bridge between each other, which extends the range. And the range can be great to begin with!
- They’re not on my network, adding confusion or load on the access points, and they can’t phone home… it’s all local.
- Then there’s smart plugs: a WiFi one can’t be controlled while the WiFi is down, but with Zigbee? Sure, I can easily power cycle my router and access points with Zigbee smart plugs! In fact I have an automation to do that daily.
- Finally, with a WiFi device you can’t know in advance whether it’s cloud based, and regardless of that it’s a potentially unsafe device connected to the internet. Low power, but botnets work with numbers rather than power.
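As a sketch of what that daily power cycle looks like under the hood: with Zigbee2MQTT (one common way to drive Zigbee plugs; I’m assuming it here, along with the plug’s friendly name), power cycling is just two MQTT messages on the `zigbee2mqtt/<friendly_name>/set` topic, which you’d hand to any MQTT client:

```python
import json

def power_cycle_messages(plug_name: str, off_seconds: int = 10):
    """Build the (topic, payload, wait_after_seconds) messages that
    power-cycle a Zigbee2MQTT smart plug: OFF, wait, then ON."""
    topic = f"zigbee2mqtt/{plug_name}/set"
    return [
        (topic, json.dumps({"state": "OFF"}), off_seconds),
        (topic, json.dumps({"state": "ON"}), 0),
    ]

# "router_plug" is a hypothetical friendly name configured in Zigbee2MQTT.
for topic, payload, wait in power_cycle_messages("router_plug"):
    print(topic, payload, wait)
```

The actual publishing (and the daily schedule) would live in whatever runs your automations, e.g. Home Assistant or a cron job with an MQTT client.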