• mctoasterson@reddthat.com · 7 days ago

    What people don’t want is blackbox AI agents installed system-wide that use the carrot of “integration and efficiency” to justify bulk data collection, that the end user implicitly agrees to by logging into the OS.

    God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.

        • Gsus4@mander.xyz · 7 days ago

          But hey, now ads load much faster and are more relevant to you, making everything snappier and slicker! Who wouldn’t pay more for such an upgrade???

    • village604@adultswim.fan · 7 days ago

      I think you’re making the mistake of thinking the general population is as informed or cares as much about AI as people on Lemmy.

    • Aceticon@lemmy.dbzer0.com · 7 days ago

      God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.

      I think that way of thinking is still pretty niche.

      I hope it’s becoming more widespread, but in my experience most people don’t actually concern themselves with “my device does some stuff in the background that goes beyond what I want it for” - in their ignorance of technology, they just assume it’s something that’s necessary.

      I think where people do have problems is mainly at the level of “this device is slower at doing what I want it to do than the older one” (for example, because AI makes it slower), “this device costs more than the other one without doing what I want any better” (for example, they’re unwilling to pay more for the AI functionality), or “this device does what I want it to do worse than before/that one” (for example, AI is forced on users, actually making the experience of using that device worse, such as with Windows 11).

  • Clent@lemmy.dbzer0.com · 6 days ago

    What a trash clickbait headline. That’s not how the expression “saying the quiet part out loud” works. This isn’t a secret, it’s not unspoken, and it certainly doesn’t reveal some underlying motive.

  • UsoSaito@feddit.uk · 6 days ago

    It doesn’t confuse us… it annoys us with the blatantly wrong information, e.g. that glue is a pizza ingredient.

        • Oascany@lemmy.world · 6 days ago

          It doesn’t, it generates incorrect information. This is because AI doesn’t think or dream, it’s a generative technology that outputs information based on whatever went in. It can’t hallucinate because it can’t think or feel.

          • Blue_Morpho@lemmy.world · 6 days ago

            Hallucinate is the word that has been assigned to what you described. When you don’t attach additional emotional baggage to the word, hallucinate is a reasonable word to pick to describe when an LLM follows a chain of words that have internal correlation but no basis in external reality.

            • Oascany@lemmy.world · 6 days ago

              Trying to isolate out “emotional baggage” is not how language works. A term means something and applies somewhere. Generative models do not have the capacity to hallucinate. If you need to apply a human term to a non-human technology that pretends to be human, you might want to use the term “confabulate” because hallucination is a response to stimulus while confabulation is, in simple terms, bullshitting.

              • Blue_Morpho@lemmy.world · 6 days ago

                A term means something and applies somewhere.

                Words are redefined all the time. Kilo should mean 1000. It was the international standard definition for 150 years. But now with computers it means 1024.

                Confabulation would have been a better choice. But people have chosen hallucinate.

                • mushroommunk@lemmy.today · 6 days ago

                  Although I agree with you, you chose a poor example.

                  Kilo doesn’t mean 1024, that’s kibi. Many of us in tech differentiate because it’s important.
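
                  For the record, the numeric gap is easy to check and compounds with each prefix step:

                      # SI kilo vs binary kibi
                      print(10**3, 2**10)                              # 1000 vs 1024
                      print(f"{1_000_000 / 1024**2:.2f} MiB in 1 MB")  # ~0.95, so the two already differ by ~5% at the mega step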

  • Sam_Bass@lemmy.world · 7 days ago

    Doesn’t confuse me, it just pisses me off by trying to do things I don’t need or want done. It creates problems to find solutions to.

      • UsoSaito@feddit.uk · 6 days ago

        No, as it doesn’t compute graphical information and is solely for running computations for “AI stuff”.

        • Gsus4@mander.xyz · 6 days ago

          GPUs aren’t just for graphics. They speed up vector operations, including those used in “AI stuff”. I’d just never heard of NPUs before, so I imagine they may be hardwired for the graph architecture of neural nets instead of general linear algebra, which would be why they can’t be used as GPUs.

          • JATth@lemmy.world · 6 days ago

            Initially, x86 CPUs didn’t have an FPU. It cost extra and was delivered as a separate chip.

            A GPU, in turn, is essentially an overgrown SIMD FPU.

            An NPU is a specialized GPU that operates on low-precision floating-point numbers and mostly does matrix-multiply-and-add operations.

            There is zero actual neural processing going on here; that would mean a chip that operates on bursts of encoded analog signals within a power budget of about 20 W, and that can adjust itself on the fly, online, without a few datacenters spending an exceedingly large amount of energy to update the weights of the model.
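
            To make that concrete, here is a toy numpy sketch of the quantized multiply-accumulate an NPU is built around (an illustration, not any particular vendor’s API):

                import numpy as np

                # Core NPU-style workload: y = W @ x + b with int8 inputs,
                # an int32 accumulator, and a scale to map back to float.
                rng = np.random.default_rng(0)
                W = rng.integers(-128, 127, size=(64, 256), dtype=np.int8)  # quantized weights
                x = rng.integers(-128, 127, size=(256,), dtype=np.int8)     # quantized activations
                b = rng.integers(-1000, 1000, size=(64,), dtype=np.int32)   # bias at accumulator precision
                scale = 0.02                                                # made-up dequantization scale

                acc = W.astype(np.int32) @ x.astype(np.int32) + b           # multiply-accumulate
                y = acc.astype(np.float32) * scale                          # dequantize
                print(y[:4])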

            • UsoSaito@feddit.uk · 5 days ago

              What I meant is that NPUs do those calculations far more effectively than a GPU, though.

  • InFerNo@lemmy.ml · 7 days ago

    “Recall was met with serious backlash”. Meanwhile, I’m looking for a simple setting for the power button on my wife’s phone and stumble upon a setting, enabled by default, that has Gemini scanning the screen and using it for whatever it is that it does, even though my wife doesn’t use any AI features on her device. Correct me if I’m wrong, but isn’t this basically the same as Recall? Google was just smart enough to silently roll this out.

    • Xanvial@lemmy.world · 7 days ago

      Isn’t this only triggered when the user uses Gemini (and Google Assistant before that), for something like Circle to Search? I’m fairly sure this already existed before the AI craze.

    • Buddahriffic@lemmy.world · 7 days ago

      It’s such a stupid approach to the stated problem that I just assumed it was actually meant for something else, with the stated problem there to justify it. And I made the decision to never use Win 11 on a personal machine based on this “feature”.

    • tal@lemmy.today · 7 days ago

      So, it’s not really a problem I’ve run into, but I’ve met a lot of people who have difficulty on Windows understanding where they’ve saved something, but do remember that they’ve worked on or looked at it at some point in the past.

      My own suspicion is that part of this problem stems from the fact that, back in the day, DOS had a filesystem layout that was not exactly aimed at non-technical users, and Windows tried to avoid this by hiding it and stacking an increasing number of “virtual” interfaces on top that didn’t just show you the filesystem, whether it be the Start menu or Windows Explorer and file dialogs having a variety of things other than just the filesystem to navigate around. The result is that Microsoft has been banging away for much of the lifetime of Windows at adding more ways to access files, most of which make it harder to fully understand what is actually going on underneath the extra layers. But regardless of why, some users do have trouble with it.

      So if you can just provide a search that can summon up that document where they were working on that had a picture of giraffes by typing “giraffe” into some search field, maybe that’ll do it.
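
      The non-AI baseline for that is just a plain content index over the filesystem; a toy sketch (the root folder and the .txt filter are arbitrary assumptions):

          import os
          import re
          from collections import defaultdict

          # Toy inverted index: map each word to the files containing it, so
          # "giraffe" finds that document regardless of where it was saved.
          def build_index(root):
              index = defaultdict(set)
              for dirpath, _, filenames in os.walk(root):
                  for name in filenames:
                      if not name.endswith(".txt"):
                          continue
                      path = os.path.join(dirpath, name)
                      try:
                          with open(path, encoding="utf-8", errors="ignore") as f:
                              text = f.read().lower()
                      except OSError:
                          continue
                      for word in set(re.findall(r"[a-z]+", text)):
                          index[word].add(path)
              return index

          index = build_index(os.path.expanduser("~/Documents"))  # hypothetical location
          print(index.get("giraffe", "no matches"))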

  • Electricd@lemmybefree.net · 6 days ago

    I want to run LLMs locally, or things like TTS or STT locally, so it’s nice, but there’s no real support right now.

    Most people won’t care nor use it

    LLMs are best used when it’s a user choice, not a platform obligation

    • ulterno@programming.dev · 5 days ago

      I guess an NPU is better off being a PCIe peripheral then?
      And it could then have its own specialised RAM too.

      • Electricd@lemmybefree.net · 5 days ago

        Sorry, I’m not a hardware expert at all

        When you’re talking about the PCIe peripheral, you’re talking about a separate dedicated graphics card or something else?

        I guess the main point of NPUs is that they are tiny and built in.

        • ulterno@programming.dev · 5 days ago

          When you’re talking about the PCIe peripheral, you’re talking about a separate dedicated graphics card or something else?

          Yes, similar to what a PCIe Graphics Card does.
          A PCIe slot is the slot in a desktop motherboard that lets you fit various things like networking (ethernet, Wi-Fi and even RTC specialised stuff) cards, sound cards, graphics cards, SATA/SAS adapters, USB adapters and all other kinds of stuff.

          I guess the main point of NPUs is that they are tiny and built in.

          GPUs are also available built-in. Some of them are even tiny.
          Go 11-12 years back in time and you’ll see video processing units embedded into the motherboard instead of in the CPU package.
          Eventually, some people will want more powerful NPUs with RAM better suited to neural workloads (GPUs have their own type of RAM too), won’t care about the NPU in the CPU package, and will feel like they are uselessly paying for it. Others will not need an NPU at all and will also feel like they are uselessly paying for it.

          So, much better to have NPUs be made separately in different tiers, similar to what is done with GPUs rn.

          And even external (PCIe) Graphics Cards can be thin and light instead of being a fat package. It’s usually just the (i) extra I/O ports and (ii) the cooling fins+fans that make them fat.

          • Electricd@lemmybefree.net · 3 days ago

            Thanks for your answer

            So, much better to have NPUs be made separately in different tiers, similar to what is done with GPUs rn.

            Overall yeah, but built-in graphics are remarkably efficient, and they have the added benefit of being there even if you didn’t plan on that use initially. I’m glad to be able to play video games on a laptop that was meant to be used for work only.

            Similarly, I had no interest in getting an NPU for this laptop, but I found some use for it (well, once it finally supports what I want to do).

            Manufacturers will never include a niche option, or will overprice it. Built-in hardware lets you get it directly.

  • Blackmist@feddit.uk · 6 days ago

    Yeah, I’m not sure what the point of a cheap NPU is.

    If you don’t like AI, you don’t want it.

    If you do like AI, you want a big GPU or to run it on somebody else’s much bigger hardware via the internet.

    • rumba@lemmy.zip · 6 days ago

      A cheap NPU could have some uses. If you have a background process that runs continuously, offloading the work to a low-cost NPU can save you both power and processing. Camera authorization: if you get up, it locks; if you sit down, it unlocks. No reason to burn a core or the GPU for that. Security/nanny camera recognition. Driver-monitoring systems that notice a driver losing consciousness and pull over. We can accomplish all of this now with CPUs/GPUs, but purpose-built systems that don’t drain other resources aren’t a bad thing.
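
      Something like this background loop is what you’d offload; presence_score() and capture_frame() are placeholders for whatever NPU runtime and camera stack is actually available, and the lock command assumes a systemd desktop:

          import subprocess
          import time

          def capture_frame():
              # Placeholder: grab a low-resolution webcam frame.
              raise NotImplementedError

          def presence_score(frame) -> float:
              # Placeholder: NPU-accelerated person detection, returning 0.0..1.0.
              raise NotImplementedError

          def watch_for_user(threshold=0.5, interval_s=2.0):
              locked = False
              while True:
                  score = presence_score(capture_frame())
                  if score < threshold and not locked:
                      subprocess.run(["loginctl", "lock-session"])  # lock when the user walks away
                      locked = True
                  elif score >= threshold and locked:
                      locked = False  # unlocking for real would still need authentication
                  time.sleep(interval_s)  # cheap to poll constantly if the NPU does the inference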

      Of course, there’s always the downside that they use that chip for Recall. Or malware gets a hold of it for recall, ID theft... there’s a whole lot of bad you can do with a low-cost NPU too :)

  • scarabic@lemmy.world · 7 days ago

    As time goes by I’m finding a place for AI.

    1. I use it for information searches, but only in cases where I know the information exists and there is an actual answer. Like history questions or asking for nuanced definitions of words and concepts.

    2. I use it to manipulate documents. I have a personal pet peeve about the format of most recipes for example. Recipes always list the ingredient amounts in a table at the top, but then down in the steps they just say “add the salt” or “mix in the flour.” Then I have to look up at the chart and find the amount of salt/flour, and then I lose my place in the steps and have to find it again. I just have AI throw out the chart and integrate the amounts into the steps: “mix in 2 cups of flour”. I can have it shorten the instructions too and break them into easier to read bullet points. I also ask it to make ingredient substitutions and other modifications. The other day I gave it a bread recipe and asked it to introduce a cold-proofing step and reformat everything the way I like. It did great.

    3. Learning interactively. When I need to absorb a new skill or topic I sometimes do it conversationally with AI. Yes, I can find articles and videos, but then I am stuck with the information they lay out and the pace and order in which they do it. With AI you can stop and ask clarifying questions, or have it skip over the parts you already know. I find this way faster than laborious googling. However, I only trust it for very straightforward topics, like “explain the different kinds of welding and what they are for.” I wouldn’t trust it for more nuanced topics where perspective and opinion come into it. And I’ve learned that it isn’t great at topics where there isn’t enough information out there, like very niche questions about the meta of a certain video game that’s only been out a month.

    4. Speech to text and summarization. AI records all my Zoom meetings for work and gives summaries of what was discussed and next steps. This is always better than nothing. I’m also impressed with how it seems to understand how to discard idle chit chat and only record actual work content. At most it says “the meeting began with coworkers exchanging details from their respective weekends.”

    This kind of hard-and-fast summarization and manipulation of factual text is much easier with AI. Doing my job for me? No. Hovering over my entire computer? No. Writing my emails for me? Fuck off.

    The takeaway is that specific tools I can go to when I need them, for point-specific needs, are all I want. I don’t need or want a hovering AI around all the time, and I don’t want whatever tripe Dell can come up with when I can get the latest and best models direct from the leading players.

    • Lfrith@lemmy.ca · 6 days ago

      The extent of my comfort with AI is through a website, with interaction limited to copy-and-paste or upload, and capabilities not running at a system level.

      But when it comes to actually running on my hardware and being able to do things by reading what is on the screen or hearing what is said, I don’t trust AI to be secure or privacy-respecting. For that type of functionality I’ll only trust something I’ve compiled myself to run locally, as opposed to something provided by corporations that are largely in the business of data collection.

    • phil@lymme.dynv6.net · 6 days ago

      Assuming you keep a critical eye on the results, AI can surely be used for some meaningful things like the ways you found - thanks for sharing them. But I’d bet that most people will be stuck at the BS-generator level, with its poisonous effects on them and on society at large.

      • scarabic@lemmy.world · 6 days ago

        I agree. I share my use cases mostly to put the critical thinking behind them on display. I’m sure the crowd here is very savvy. But in the general public I agree that many if not most people would be completely seduced by the obsequious & confident tone of the robot. It can do so many things that it becomes tempting to rely on it. You wish it worked better than it did, and if you let yourself get lazy, you can easily slip into trusting it too much.

  • ZILtoid1991@lemmy.world · 6 days ago

    > be me
    > installed VScode to test whether language server is just unfriendly with KATE
    > get bombarded with "try our AI!" type BS
    > vomit.jpg
    > managed to test it, but the AI turns me off
    > immediately uninstalled this piece of glorified webpage from my ThinkPad
    

    It seems I’ll be doing more of my work with KATE. (Does the LSP plugin for KATE handle stuff differently from the standard in some known way?)

    • ZILtoid1991@lemmy.world · 3 days ago

      If you’re still reading this: I modified the code of the language server, so it now works with KATE…

  • SabinStargem@lemmy.today · 7 days ago

    It is going to take at least five years before local AI is user-friendly enough, and performant hardware is circulating widely enough, that ordinary folks would consider buying an AI machine.

    I have a top-end DDR4 gaming rig. It takes a long time for a 100B-sized model to give some roleplaying output, at least forty minutes for my settings via KoboldCPP with a GGUF. I don’t think a typical person would want to wait more than 2 minutes for a good response. So we will need at least DDR6-era devices before it is practical for everyday people.
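
    The rough math, since generation is mostly memory-bandwidth bound: every generated token has to stream the whole model out of RAM, so tokens per second is roughly bandwidth divided by model size. With made-up but plausible numbers:

        # Back-of-envelope, not a benchmark: tokens/s ~= usable bandwidth / bytes read per token.
        model_params = 100e9      # ~100B parameters
        bytes_per_param = 0.5     # ~4-bit quantized GGUF
        bytes_per_token = model_params * bytes_per_param

        for name, gb_per_s in [("dual-channel DDR4", 50), ("dual-channel DDR5", 90), ("hypothetical DDR6", 180)]:
            print(f"{name}: ~{gb_per_s * 1e9 / bytes_per_token:.1f} tokens/s")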

    • lmuel@sopuli.xyz · 7 days ago

      A local LLM is still an LLM… I don’t think it’s gonna be terribly useful no matter how good your hardware is

      • maus@sh.itjust.works · 7 days ago

        I have great success with a local LLM in some of my workflows and automation.

        I use it for line completion and for basic functions/asks while developing that I don’t want to waste tokens on.

        I also use it in automation. I run my own media server for a few dozen people, with an automated request system (Jellyseerr) that adds content. I have automation that leverages the local LLM to look at recent media requests and automatically request similar content.
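
        Roughly like this, assuming a local server that exposes an OpenAI-compatible chat endpoint (the URL, model name, and request list below are placeholders):

            import json
            import urllib.request

            recent_requests = ["Severance", "The Expanse"]  # stand-in for recent media requests

            payload = {
                "model": "local-model",  # whatever the local server calls its loaded model
                "messages": [{
                    "role": "user",
                    "content": "Suggest three titles similar to: " + ", ".join(recent_requests)
                               + ". Reply with a JSON list of titles only.",
                }],
            }

            req = urllib.request.Request(
                "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                titles = json.load(resp)["choices"][0]["message"]["content"]
            print(titles)  # these would then be submitted back through the request system's API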

      • luridness@lemmy.ml · 7 days ago

        Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for… let’s say, better predictive text. It’s already somewhat AI-based; I’d just like it to be better.

      • xthexder@l.sw0.com · 7 days ago

        The diminishing returns are kind of insane if you compare the performance and hardware requirements of a 7B and a 100B model. In some cases the smaller model can even perform better because it’s more focused and won’t be as subtle about its hallucinations.
        Something is going to have to fundamentally change before we see any big improvements, because I don’t see scaling it up further ever producing AGI or even solving any of the hallucination/logic errors it makes.

        In some ways it’s a bit like the Crypto blockchain speculators saying it’s going to change the world. But in reality the vast majority of applications proposed would have been better implemented with a simple centralized database.