• 7 Posts
  • 1.84K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • Depends how much you have to pay attention.

    First off, I am not a fitness expert. YMMV.

    But sometimes I do variations of bodyweight exercises in front of a TV, yes.

    One day, for example, might be arm day. I sit and do leg curls for biceps. I do straight pushups or tricep dips, and use a pull-up bar if I have one; even just hanging is great.

    Another day might be push-up variation day: wide, narrow, inclined different ways, push up and “reach to the sky with one arm,” knee pushups at the end.

    Yet another is leg day. Squats, jumping squats, lunges, butt kicks, heel lifts, other positions to get different muscles. Another day may be core, another day is more shoulder/back, and so on. And all this is without weights, or with at most like a dumbbell or a pull up bar, and some kind of chair or bed for certain positions.

    Your eyes will drift away from the TV, and you get exhausted doing this stuff, but you can keep up with a show if you want.




  • Sweat is not a bad thing. It means your heart is pumping, which is exactly what you want for weight loss.

    That being said, I love exercising in cold weather, if you’re somewhere where you get any. Warm up a little inside, go out, and it just feels fantastic.

    And that doesn’t just mean running a marathon. It can be calisthenics in a back yard, or garage, or even just walking out to a spot where you can jog.


    While I’m here, let me glaze bodyweight exercises, like push ups, squats, kicks, core stuff, and all the variants. Do them in sets, one muscle “group” a day.

    It’s amazingly efficient. It gets you out of breath like running, but gets muscles sore like a weight machine, all in less time. And it’s waaay less stressful on your body than running or big weights.





  • I find the overhead of Docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive, poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

    Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated its dependencies down to plain C++ or Rust, so it just runs and gives me feedback without shipping a whole subcomputer.



  • “It’s just a meme” has the same energy as a confronted schoolyard bully, and apparently that’s just accepted etiquette now :/

    I’ve seen the same sentiment on Lemmy. Manipulative/misleading articles or even straight-up misinformation get posted, and when I bring it up, the OP’s response is “I don’t care.” As long as it’s the right ideology, it’s alright; and the mods didn’t disagree.


    …We’re so screwed, aren’t we? And by “we” I mean the internet. It’s nice to think of the Fediverse as an oasis from all this, but it has the same structural issues engineered into it that commercial social media has, I think.



  • Oh man, you’re missing out on OLED in a basement though. It’s so fantastic for dimly lit environments that I’d take the smaller size any day, and just sit a little closer. LCDs, on the other hand, look pretty terrible in a really dark room.

    The only thing that would make me pause is if you’re trying to squeeze a big family around the TV. In that case, it would make sense to get a bigger one, so everyone can sit farther back without compromised viewing angles.


  • TVs are fantastic monitors.

    It sounds reasonable to not want to pay for basically a small computer inside the TV.

    But in practice, it’s not that expensive a component. And TV production volumes are so high that TVs end up bigger, cheaper, and higher quality than an equivalently priced monitor anyway.

    Hence, while I’m fine with the monitors I have, I’m never buying a “monitor” again. It just makes no financial sense when I can get a 40" 4K TV with 120Hz VRR instead, one that happens to work fantastically as a streaming box too.



  • As a real-life example, the Canon 600mm F11 telephoto lens should be awful on, say, the 32MP crop-sensor R7. That’s insane pixel density, somewhere in the ballpark of this Fuji.

    …But look at real-life shots of that exact combo, and they’re sharp as hell. Sharper than a Sigma at F6.3.


    The diffraction limit is something to watch out for, but in reality, stuff like lens imperfections, motion blur, atmospheric distortion and such is going to get you first. You don’t need to shoot at F4 on this thing to make use of the resolution, even if that is the ideal scenario.
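
    To put rough numbers on that (back-of-the-envelope, using approximate R7 sensor specs, nothing measured):

    ```python
    # Airy disk diameter ~ 2.44 * wavelength * f-number (green light, ~550 nm)
    wavelength_um = 0.55
    f_number = 11                                         # Canon 600mm F11

    airy_diameter_um = 2.44 * wavelength_um * f_number    # ~14.8 um

    # Approximate R7 pixel pitch: ~22.3 mm sensor width over ~6960 pixels
    pixel_pitch_um = 22.3 * 1000 / 6960                   # ~3.2 um

    print(f"Airy disk spans ~{airy_diameter_um / pixel_pitch_um:.1f} pixels")
    # -> the blur spot covers several pixels, i.e. "diffraction limited" on paper,
    #    which is exactly why this combo *should* look soft but doesn't in practice.
    ```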





  • You mean an Nvidia 3060? You can run GLM 4.6, a 350B model, on 12GB VRAM if you have 128GB of CPU RAM. It’s not ideal though.

    More practically, you can run GLM Air or Flash quite comfortably. And that’ll be considerably better than “cheap” or old models like Nano, on top of being private, uncensored, and hackable/customizable.

    The big distinguishing feature is “it’s not for the faint of heart,” heh. It takes time and tinkering to set up, as all the “easy” preconfigurations are suboptimal.


    That aside, even if you have a toaster, you can invest a little in API credits and run open-weights models with relative privacy on a self-hosted front end. Pick the jurisdiction of your choosing.

    For example: https://openrouter.ai/z-ai/glm-4.6v

    It’s like a dollar or two per million words. You can even give a middle finger to Nvidia by using Cerebras or Groq, which don’t use GPUs at all.
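
    If you want to see what that looks like under the hood, here’s a minimal sketch hitting that model through OpenRouter’s OpenAI-compatible API (the key is whatever you generate on their site, and the prompt is just a placeholder; a self-hosted front end would do this for you behind the scenes):

    ```python
    from openai import OpenAI

    # OpenRouter speaks the OpenAI API; point the client at their endpoint.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key
    )

    resp = client.chat.completions.create(
        model="z-ai/glm-4.6v",  # the model slug from the link above
        messages=[{"role": "user", "content": "Summarize this article for me: ..."}],
    )
    print(resp.choices[0].message.content)
    ```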


  • Yeah, accessibility is the big problem.


    What I use depends.

    For “chat” and creativity, I use my own version of GLM 4.6 350B quantized to just barely fit in 128GB RAM/24GB VRAM, with a fork of llama.cpp called ik_llama.cpp:

    https://huggingface.co/Downtown-Case/GLM-4.6-128GB-RAM-IK-GGUF

    It’s complicated, but in a nutshell, the degradation vs the full model is reasonable even though it’s like 3 bits instead of 16, and it runs at 6-7 tokens/sec even with so much of it on the CPU.
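
    For a rough idea of what hybrid CPU/GPU offload looks like, here’s a sketch using the mainline llama-cpp-python bindings rather than ik_llama.cpp (the path and layer count are placeholders you’d tune to your own hardware):

    ```python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./GLM-4.6-IQ3-quant.gguf",  # hypothetical local quant file
        n_gpu_layers=20,   # offload what fits in 24GB VRAM; the rest stays in system RAM
        n_ctx=8192,        # context length; more context = more memory
    )

    out = llm("Write the opening line of a noir story.", max_tokens=64)
    print(out["choices"][0]["text"])
    ```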

    For the UI, it varies, but I tend to use mikupad so I can manipulate the chat syntax. LMStudio works pretty well though.


    Now, for STEM stuff or papers? I tend to use Nemotron 49B quantized with exllamav3, or sometimes Seed-OSS 36B, as both are good at that and at long context stuff.

    For coding, automation? It… depends. Sometimes I use Qwen VL 32B or 30B, in various runtimes, but it seems that GLM 4.7 Flash and GLM 4.6V will be better once I set them up.

    Minimax is pretty good at making quick scripts, while being faster than GLM on my desktop.

    For a front end, I’ve been switching around.

    I also use custom sampling. I basically always use n-gram sampling in ik_llama.cpp where I can, with DRY at modest temperatures (0.6?). Or low or even zero temperature for more “objective” things. This is massively important, as default sampling is where so many LLM errors come from.
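
    As an example of what “custom sampling” means in practice, here’s roughly how you’d override the defaults against a llama.cpp-style /completion endpoint (the DRY parameter names vary a bit between forks and backends, so treat this as the general shape, not gospel):

    ```python
    import requests

    payload = {
        "prompt": "Continue the scene on the rainy train platform.",
        "n_predict": 256,
        "temperature": 0.6,     # modest temp for creative work; drop toward 0 for "objective" tasks
        "dry_multiplier": 0.8,  # DRY repetition penalty, if the backend supports it
    }

    r = requests.post("http://localhost:8080/completion", json=payload, timeout=300)
    print(r.json()["content"])
    ```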

    And TBH, I also use GLM 4.7 over API a lot, in situations where privacy does not matter. It’s so cheap it’s basically free.


    So… Yeah. That’s the problem. If you just load up LMStudio with its default Llama 8B Q4KM, it’s really dumb and awful and slow. You almost have to be an enthusiast following the space to get usable results.