Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 754 Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • I agree that the results will be different, and certainly a very narrowly trained LLM for conversation could have some potential if it has proper guardrails. Either way, there’s a lot of prep work beforehand to make sure the boundaries are very clear. Which would work better is debatable and depends on the application. I’ve played around with plenty of fine-tuned models, and with enough data they will get off track contextually. LLMs and procedural generation have a lot in common, but with the latter it is far easier to manage predictable outputs because of how the probability is used to create them.
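    A minimal sketch (not from the comment; the terrain example and names are invented for illustration) of why procedural generation is easier to keep predictable: a seeded PRNG pins down the entire output, so the same seed always reproduces the same result, whereas an LLM samples each token from a learned distribution that is much harder to constrain.

    ```python
    import random

    def generate_terrain(seed, width=8):
        """Procedural generation: the seed fully determines the output."""
        rng = random.Random(seed)  # isolated, seeded PRNG
        tiles = "~.^#"             # water, grass, hill, mountain
        return "".join(rng.choice(tiles) for _ in range(width))

    # The same seed reproduces the exact same map every time,
    # which makes the output easy to test and to bound.
    assert generate_terrain(42) == generate_terrain(42)
    ```

    Because the whole distribution collapses to one deterministic sequence per seed, developers can audit every possible output by enumerating seeds; there is no equivalent lever for an LLM’s sampled text.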




  • That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI equals a person, and a cluster of them on a single issue beats one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways, but lack the I to spark anything from it; they just simulate it by doing exactly what you describe: being much faster at finding the best matches in their training data and sometimes appearing to have reasoned out a response.

    ASI is a definition only of scale. We as humans can’t have any idea what an ASI would be like other than that it would be far superior to a human, for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster though, and that, added to speed… naysayers had better hope they’re right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?


  • I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success they have. The funny thing is, we won’t know if we’ve hit even the AGI point until we’re past it, and in theory AGI will quickly go to ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, as is anyone who says it’s not near or won’t ever happen.

    The only thing possibly worse than getting to the AGI/ASI point unprepared might be not getting there at all, but instead creating tools that simulate a lot of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job of being cautious, as we usually are with new tech.




  • Places have shifted up and down in tone and attitude since the days of Usenet, BBSes, and FidoNet. It’s not the platform, it’s the people. How the world is in RL affects how people talk online, and the world changes over time.

    The simpler answer may be that your feed has changed since you first set it up, and you’re now pulling in discussions with a different vibe. Just as you can grow your feed by browsing around, you can cull the places that tend to be darker by blocking people or instances.




  • Rhaedas@fedia.io to Gaming@lemmy.world · Game balance · 29 points · 17 days ago

    I ran into this long ago in the Ultima Online and EverQuest days. “Balance” does not mean “even”, guys. Sometimes an overpowering thing needs to stay overpowering and be balanced in some other way. The term “nerfing” was coined in UO for this very act of devs bending to the will of whiners instead of reexamining the game dynamics (if any change was needed at all).