• 6 Posts
  • 825 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)

    With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file

    The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.


    That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many, many users at once. LLMs run more efficiently when they serve in parallel: e.g., generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in that sense.


    …But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB vram system are 32B - 49B dense models (like Qwen 3 or nemotron), or 70B mixture of experts (like the new Hunyuan 70B).
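
    To make the “~3 bits instead of 8” point concrete, here’s a back-of-envelope sketch (parameter count rounded to 671B; the bit widths are illustrative averages, not exact quant sizes):

    ```python
    # Rough weight-storage footprint; ignores KV cache and activation memory.
    def model_size_gb(params: float, bits_per_weight: float) -> float:
        """Approximate size in gigabytes for a given average bits-per-weight."""
        return params * bits_per_weight / 8 / 1e9

    full = model_size_gb(671e9, 8)   # full 8-bit model: ~671 GB
    quant = model_size_gb(671e9, 3)  # ~3-bit average quant: ~252 GB

    print(f"8-bit: {full:.0f} GB, ~3-bit: {quant:.0f} GB")
    ```

    Which is roughly why 192GB of system RAM plus 24GB of VRAM starts to be in the right ballpark, where the full-precision model isn’t even close.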


  • DeepSeek, now that is a filtered LLM.

    The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.

    There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to “improve its risk profile”:

    https://huggingface.co/microsoft/MAI-DS-R1

    https://huggingface.co/perplexity-ai/r1-1776

    That’s the virtue of being an open-weights LLM. Over-filtering isn’t a problem, since one can tweak it to do whatever you want.


    Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.

    Instruct LLMs aren’t trained on raw data.

    It wouldn’t be talking like this if it was just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry picked “anti woke” data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.


    …Not that I don’t agree with you in principle. Twitter is a terrible source for data, heh.




  • brucethemoose@lemmy.world to Science Memes@mander.xyz: Jupiter

    The JunoCam page has raw shots from the actual device: https://www.msss.com/all_projects/junocam.php

    Caption of another:

    Multiple images taken with the JunoCam instrument on three separate orbits were combined to show all areas in daylight, enhanced color, and stereographic projection.

    In other words, the images you see are heavily processed composites…

    Dare I say, “AI enhanced,” as they sometimes do use ML algorithms for astronomy. Though ones designed for scientific usefulness, of course, and mostly for pattern identification in bulk data AFAIK.




  • This is why work/life balance is so important. I wouldn’t ever call myself “well-off” but I don’t have kids and my job allows me ample time off to play games and watch movies and shit.

    Neither do they! They aren’t workaholics; they’re homebodies who work as little as they can!

    It’s just that the workplaces are shit. One went back to mandated RTO for no reason, even though much of the work is overseas at odd hours; the company is literally trying to make employees miserable so they quit without severance. The other is work-from-home, but with enough pointless meetings and complete workplace dysfunction to eat all their energy.

    And these seem like well above average jobs.



  • +1 to literally everything.

    Fuck brand recognition or loyalty, fuck development talent, fuck community building, fuck long-term strategy, we can realize a gain right now by sowing half the planet with salt, so that’s what we’re going to do. So what is there for people to buy?

    I wish this would fit on a bumper sticker.

    That noise you heard last week was Xbox’s death rattle. One out of the three mainstream home console platforms is an outright stupid idea to buy now.

    And wasn’t Sony the big risk of bowing out before? And then we got the Switch 2… It’s remarkable that Microsoft somehow made Xbox the least likely to survive.


    Single data point: the young, working, well-off, gaming part of my family is just out of energy. It’s easier to watch a YouTube video than TV or gaming before falling asleep to wake up for work. Much of their circle seems similar.

    As for myself, I’m going through a, uh, icky phase of life and am not really motivated to play unless it’s coop.

    …Maybe others are struggling similarly?


    Also, the games we do look at tend to be from indie to mid-size studios, with BG3 and KCD2 being the only recent exceptions.



  • brucethemoose@lemmy.world to Science Memes@mander.xyz: RIP America

    Most of the US believes in this, or is just unaware. That’s how it’s been for most of history around the world.

    …The remarkable issue here is that the elites/rulers we handed the reins now drink their own kool-aid. The very top of most authoritarian regimes are at least cognisant of some of their own hypocrisy, even if ideology eats at them some.

    The other is that people are more ‘connected’ than ever, but to disinformation streams. A lot of the world (especially the US) fancies themselves super smart on shit they know nothing about because of something they saw on Facebook or YouTube.



  • ChatGPT (last time I tried it) is extremely sycophantic, though. Its high default sampling settings also lead to totally unexpected/random turns.

    Google Gemini is now too.

    And they log and use your dark thoughts.

    I find that less sycophantic LLMs are way more helpful, hence I bounce between Nemotron 49B and a few 24B-32B finetunes (or task vectors for Gemma).

    …I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.
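
    For what it’s worth, most OpenAI-compatible APIs and local servers let you turn the sampling down yourself; a minimal sketch of the relevant request fields (the model name is a placeholder, and exact defaults vary by provider):

    ```python
    import json

    # Lowering temperature/top_p reins in the "random turns" that
    # high default sampling can produce.
    payload = {
        "model": "your-model-here",  # placeholder, not a real model ID
        "messages": [{"role": "user", "content": "Summarize this bug report."}],
        "temperature": 0.3,          # lower = more deterministic output
        "top_p": 0.9,                # nucleus sampling cutoff
    }

    body = json.dumps(payload)  # POST this to a /chat/completions endpoint
    ```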


  • TBH this is a huge factor.

    I don’t use ChatGPT, much less use it like it’s a person, but I’m socially isolated at the moment. So I bounce dark internal thoughts off of locally run LLMs.

    It’s kinda like looking into a mirror. As long as I know I’m talking to a tool, it’s helpful, sometimes insightful. It’s private. And I sure as shit can’t afford to pay a therapist out the wazoo for that.

    It was one of my previous problems with therapy: payment depending on someone else, at preset times (not when I need it). Many sessions feel like they end when I’m barely scratching the surface. Yes, therapy is great in general and for deeper feedback/guidance, but still.


    To be clear, I don’t think this is a good solution in general. Tinkering with LLMs is part of my living, I understand the gist of how they work, and I tend to use raw completion syntax or even base pretrains.

    But most people anthropomorphize them because that’s how chat apps are presented. That’s problematic.
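
    “Raw completion syntax” here just means writing the model’s chat template (or none at all) by hand instead of going through a chat UI. A sketch using the ChatML markers many open models use (the exact tokens vary by model, so treat this as an illustration):

    ```python
    # Build a ChatML-style prompt string manually, then feed it to a
    # completion endpoint instead of a chat endpoint.
    def chatml_prompt(system: str, user: str) -> str:
        return (
            f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user}<|im_end|>\n"
            f"<|im_start|>assistant\n"  # model continues from here
        )

    prompt = chatml_prompt("You are a blunt note-taking tool.", "Why am I stuck?")
    ```

    Seeing the template spelled out like this makes it harder to forget you’re steering a text predictor, not talking to a person.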





  • Yeah, just paying for LLM APIs is dirt cheap, and they (supposedly) don’t scrape data. Again, I’d recommend OpenRouter and Cerebras! And you get your pick of models to try from them.

    Even a Framework 16 is not good for LLMs, TBH. The Framework Desktop is (as it uses a special AMD chip), but it’s very expensive. Honestly, the whole hardware market is so screwed up that most ‘local LLM enthusiasts’ buy used RTX 3090s and stick them in desktops or servers, since apparently no one wants to produce something affordable :/