• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Eh, this is a thing. Large companies often have internal rules and caps on how much they can pay any given job title. For example, on our team, everyone we hire is given the role “senior full stack developer”, not because they’re particularly senior (in some cases we’re literally hiring straight out of college), but because it lets us pay them better within internal company politics.


  • Makes sense. I wouldn’t think the exceptional training Olympians take on would be good for the average person, but of those with the natural health and talent to try, Olympians are the ones who got that far without injuring themselves, and they’ll therefore likely keep up some safe training with proper technique and maintain good health into old age. I’d imagine that benefit outweighs the damage extreme sports and training do to your body.

    I’d assume that generally fit people who exercise regularly and eat well, without pursuing the extremes of world-class athleticism, would live even longer on average.




  • Storytime! Earlier this year, I had an Amazon package stolen. We had reason to be suspicious, so we immediately contacted the landlord and within six hours we had video footage of a woman biking up to the building, taking our packages, and hurriedly leaving.

    So of course, I go to Amazon and try to report my package as stolen… which traps me for a whole hour in a loop with Amazon’s “chat support” AI, which repeatedly insists that I wait 48 hours “in case my package shows up”. I cannot explain to this thing clearly enough that, no, it’s not showing up; I literally have video evidence of it being stolen that I’m willing to send. It cuts off the conversation the moment it gives its final “solution”, and I have to restart the convo over and over.

    It takes me hours to wrench a damn phone number out of the thing, and then a human being actually understands me and sends me a refund within 5 minutes.


  • I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

    Fundamentally, it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect issues like hallucination and “Waluigis” or “jailbreaks” are fundamental problems for a language model that’s trying to complete a story, as opposed to an actual intelligence acting with a purpose.






  • Yeah, that’s what burns the business relationship. Because now it’s not just “oh, Unity might screw me, and I’m investing in learning what could become a dead platform”, it’s “even if Unity doesn’t screw me now, they could randomly decide to screw me 10 years from now and retroactively charge me a king’s ransom”. That’s the stuff that has a permanent chilling effect on the whole platform.