• 2 Posts
  • 460 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • wizardbeard@lemmy.dbzer0.comtoComic Strips@lemmy.world‘THE LAST PRINGLE’ [OC]
    edited 4 hours ago

    ~~You ok CustardFist? Your stuff's always been weird, but it seems to have taken a decidedly sexual slant in the ones I've seen lately.

    Also, vanishingly few women are turned on by having a doctor rooting around in their privates. That last facial expression overwhelmingly pushes the whole comic into territory where I'm questioning whether these are a not-so-subtle way of sharing your personal kinks with the world.~~

    Edit: Nevermind, just checked your profile. For some reason a ton of your recent stuff hasn’t shown up during my doomscrolling. Guess it’s time to try a new sorting algorithm!




  • there was something I could somehow magically fix if I just kept pushing myself through the rock in my way.

    This is one of the worst “thought traps” out there. The biggest change in my life was when I decided to learn to work around/with my flaws rather than through/against them.

    I don’t mean give up and never try to improve, like a post I’ve seen here where someone got mad at their friends because the friends should just expect them to be late because ADHD. I mean things like setting as many alarms and reminders as it takes, rather than deluding myself that “one alarm will be fine if I pay attention”.


  • So for those not familiar with machine learning, which was the practical business use case for “AI” before LLMs took the world by storm: what they are describing is reinforcement learning, which is a branch of machine learning, so both terms apply here.

    It’s how you can make an AI that plays Mario Kart. You establish goals that grant points, stuff to avoid that loses points, and what actions it can take each “step”. Then you give it the first frame of a Mario Kart race, have it try literally every input it can put in that frame, then evaluate the change in points that results. You branch out from that collection of “frame 2s” and do the same thing again and again, checking more and more possible future states.

    At some point you use certain rules to eliminate branches on this tree of potential future states, like discarding branches where it’s driving backwards. That way you can start optimizing towards the options at any given time that get the most points in the end, while keeping the number of options being evaluated to an amount you can push through your hardware.

    Eventually you try enough things enough times that you can pretty consistently use the data you gathered to make the best choice on any given frame.
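    The loop described above can be sketched roughly like this (a toy stand-in, not real Mario Kart; the actions, scoring, and “physics” here are my own illustration):

```python
# Toy sketch of the search described above: at each frame, try every input
# sequence a few steps deep, score the resulting states, and commit to the
# first action of the highest-scoring branch.
from itertools import product

ACTIONS = ["left", "right", "accelerate", "brake"]  # assumed input set

def step(position, action):
    """Toy game physics: 'accelerate' gains ground, 'brake' loses it."""
    return position + {"left": 0, "right": 0, "accelerate": 2, "brake": -1}[action]

def score(position):
    """Points heuristic: farther along the track is better."""
    return position

def best_first_action(position, depth=3):
    """Evaluate every input sequence `depth` steps ahead; return the
    first action of the branch that ends with the most points."""
    best = None
    for seq in product(ACTIONS, repeat=depth):
        pos = position
        for action in seq:
            pos = step(pos, action)
        if best is None or score(pos) > best[0]:
            best = (score(pos), seq[0])
    return best[1]

print(best_first_action(0))  # prints "accelerate"
```

    A real setup would also prune obviously bad branches (the “driving backwards” rule above) instead of exhaustively scoring all of them.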

    The jank comes from how the points are configured. For example, an AI for a delivery robot could prioritize jumping off balconies if it values speed over self-preservation.

    Some of these pitfalls are easy to create rules around for training. Others are far more subtle and difficult to work around.

    Some people in the video game TAS community (custom building a frame by frame list of the inputs needed to beat a game as fast as possible, human limits be damned) are already using this in limited capacities to automate testing approaches to particularly challenging sections of gameplay.

    So it ends up coming down to complexity. Making an AI to play Pacman is relatively simple. There are only 4 options every step, the direction the joystick is held. So you have 4^n states to keep track of, where n is the number of steps forward you want to look.
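    In numbers, that blow-up looks like this (assuming the 4 joystick directions per step from above):

```python
# Exponential growth of the search tree: 4 inputs per step, n steps of lookahead.
inputs_per_step = 4
for n in range(1, 6):
    print(f"lookahead {n}: {inputs_per_step ** n} states")
# lookahead 5 already means 1024 leaf states to evaluate
```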

    Trying to do that with language, and arguing that you can get reliable results with any kind of consistency, is blowing smoke. They can’t even clearly state what outcomes they are optimizing for with their “reward” function. God only knows what edge cases they’ve overlooked.


    My complete out-of-my-ass guess is that they did some analysis on responses to previous GPT output, tried to distinguish between positive and negative responses (or at least flag responses indicating that the output was incorrect), and then used that as some sort of positive/negative points heuristic.

    People have been speculating for a while that you could do that: crank up the “randomness”, have it generate multiple responses behind the scenes, pit those “pre-responses” against each other, and use that heuristic to choose the best of them. They could even A/B test the responses across multiple users, and feed the user reactions back in as further “positive/negative points” reinforcement in a giant loop.
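    The best-of-N part of that speculation is easy to sketch. Everything here is illustrative: the scoring heuristic is a toy stand-in for whatever learned reward model a real system would use, and the generator is fake.

```python
# Hypothetical best-of-N selection: sample several candidate responses,
# score each one with a reward heuristic, keep the winner.
import random

def reward(response):
    # Toy stand-in for a learned reward model: favor varied wording.
    return len(set(response.split()))

def toy_generate(rng):
    """Fake 'cranked-up randomness' generator standing in for a model."""
    words = ["the", "cat", "sat", "on", "a", "mat"]
    return " ".join(rng.choice(words) for _ in range(5))

def best_of_n(generate, n=4, seed=0):
    """Generate n candidate 'pre-responses' and return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=reward)

print(best_of_n(toy_generate))
```

    The A/B-testing loop the comment imagines would then feed user reactions back in as training signal for `reward` itself.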

    Again, completely pulled from my ass. Take with a boulder of salt.




  • The world is a wildly different place now, and back then the teams developing them were led by people motivated by something other than extracting as much money out of the world at any cost.

    The situations are not nearly as comparable.


    Beyond that, very few people had an issue with AI as fuzzy logic and machine learning. Those techniques were already in wide use all over the place to great success.

    The term has been co-opted by the generative, largely LLM folks to oversell the product they are offering as having some form of intelligence. They then pivot to marketing it as a solution to the problem of having to pay people to talk, write, or create visual or audio media.

    Generally, people aren’t against using AI to simulate countless permutations of motorcycle frame designs to help discover the optimal one. They’re against the wholesale reduction of soft-skill and art/content-creation jobs by replacing people with tools that are definitively not fit to task.

    Pushback against non-generative AI, such as self-driving cars, is general fatigue at being sold something not fit to task and being told that calling it out is being against a hypothetical future.








  • I feel like that’s a stretch, and there’s some important things to consider here.

    People are weird, and can fetishize all sorts of shit. There’s no reasonable way to control, say, someone jerking off to pictures of hand models. Or to stop someone shlicking it to the shlubby-beer-gut-at-the-beach photos you put up on social media, if that’s their thing (and I know a woman whose thing was “straight bears” for a long time).

    But no one has any agency or ability to prevent that. No one has any agency to prevent any random person passing them on the street and then later using that memory plus imagination as cranking fuel.

    For the sake of every individual’s personal sanity, I think it’s important that each and every one of us understand and accept that. Existing in the world is naturally giving up a certain amount of control. This is part of it, as disgusting as it is.

    This is even more the case when you put content out there. Whether through acting in film or other media, creating artwork, posting pictures, etc. Creating content in the current age of the internet is inherently ceding ownership and control over it. The moment it hits the public space, you cannot control what is done with it, and the sooner people can learn to accept that, the better off I think we all will be.


    I understand that feeling of violation to learn that someone has used you purely as an object for arousal.

    abuse

    Multiple times an ex manually stimulated me to physical arousal and used me as a human dildo. At the time I convinced myself I was into it, because I was a guy. I wasn’t, and while my trauma is relatively minor, it exists.


    That said, there is nuance. This content was not edited; it was merely taken out of its original context. Are we going to stop news outlets from doing the same, to prevent content from being used in ways unintended and unanticipated by the original creators?

    “I’ll know misuse when I see it” is not a sustainable method for evaluating misuse at scale.

    “If it’s clearly being used for erotic purposes” likewise doesn’t work, as defining that line isn’t straightforward. Do we ban reposts of bikini shots?

    This isn’t something that was created for private use and then leaked. It was content made for public consumption. Being disgusted with how the public chooses to consume it is your right, but there’s no way to control that.

    Again, I entirely sympathize with the women experiencing this. Being used in this manner is dehumanizing.

    But there’s no stopping it. Best to accept it as well as you can and ignore it.





  • PowerShell variable names and function names are not case sensitive.

    I understand the convention of using capitalization in those names to carry specific meaning, for things like constants, but the overwhelming majority of us use IDEs with autocomplete now.

    Personally, I prefer prefixes to denote that info anyway. They work better for segmenting autocomplete suggestions, and there’s less overhead than deriving non-explicit meaning from formatting or capitalization choices.

    On top of that, you really shouldn’t be using variables with the same name but different capitalization in the same section of code anyway. “Did I mean to use $AGE, $Age, or $age here?” God forbid someone come through to enforce standards or something and fuck that all up.
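    A quick demo of the behavior (the variable and function names are my own illustration):

```powershell
$Age = 30
$age = 31            # same variable: this assignment overwrites $Age
Write-Output $AGE    # prints 31; $AGE, $Age, and $age all resolve to one variable

function Get-Answer { 42 }
GET-ANSWER           # function names are case-insensitive too; prints 42
```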