I’m afraid all we can say for sure is that it’s not a caterpillar.
Fundamentally I agree that work shouldn’t need to be a priority in this situation if the individual doesn’t want it to be, but this is basically the optimal scenario. I wish more companies respected their employees’ time and strictly valued results over the appearance of busyness.
Spider-Man: Miles Morales x BNA. Of all the superheroes this is probably one of the best crossovers: both are city-bound teenagers with animal-themed powers grappling with the establishment.
I think the difference is that when you pay Discord, they stop advertising to you.
A great multiplicity of the English speaking population does not put nearly so much thought into their grammar as does the online commentator.
I love this idea. Unfortunately, I think it’s just a slightly unnatural vocal performance. Even though AI can perfectly replicate voices tonally, it can’t truly generate the same cadence and inflections, or sometimes even get close without a good deal of human assistance. I suspect this will change over time. As with ChatGPT, we’ll be looking to AI to solve the problem of AI mimicking humans too well.
Essentially true but thoroughly reductive. Like saying “live music is all about saying look at me play all these notes.”
I think legal semantics might just be beside the point. I believe she knew the possibility was there and accepted it, but the answer she was looking for is “how far does it go” when a person essentially publicly forfeits their rights. Blanket consent and the forfeiture of those rights don’t fundamentally change that this is a person.
I have a coworker who is essentially building a custom program in Sheets using Apps Script, and has been using CGPT/Gemini the whole way.
While this person has a basic grasp of the fundamentals, there’s a lot of missing information that gets filled in by the bots. Ultimately, after enough fiddling, it will spit out usable code that works how it’s supposed to, but honestly it ends up taking significantly longer to guide the bot into making just the right solution for a given problem. Not to mention the code is just a mess: even though it works, there’s no real consistency, since it’s built up across separate prompts.
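To make the “no real consistency” point concrete, here’s a hypothetical sketch (invented function names, not my coworker’s actual code) of what tends to accumulate when each function comes from a separate prompt: two helpers that solve the same problem in clashing styles.

```javascript
// Hypothetical illustration of prompt-by-prompt drift in an Apps Script
// style project (plain JavaScript here so it runs anywhere).

// Prompt 1 produced camelCase and an index loop.
function getColumnValues(rows, colIndex) {
  const out = [];
  for (let i = 0; i < rows.length; i++) {
    out.push(rows[i][colIndex]);
  }
  return out;
}

// Prompt 2, weeks later, produced snake_case and map(), re-solving
// the exact same problem because the bot never saw Prompt 1's output.
function fetch_col(data_rows, col_idx) {
  return data_rows.map(function (row) {
    return row[col_idx];
  });
}

// Both "work," so the sheet works -- but every future change now has to
// be made twice, in two styles. That duplication is the tech debt.
const rows = [["a", 1], ["b", 2], ["c", 3]];
console.log(getColumnValues(rows, 1)); // [1, 2, 3]
console.log(fetch_col(rows, 1));       // [1, 2, 3]
```

Neither helper is wrong in isolation; the problem only shows up at the level of the whole codebase, which is exactly the level a per-prompt workflow never sees.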
I’m confident that in this case, and likely in plenty of other cases like it, the total time it takes to learn how to ask the bot the right questions would be better spent just reading the documentation for whatever language is being used. At that point it might be worth using the bot to spit out simple code that can be easily debugged.
Ultimately, it just feels like you’re offloading complexity from one layer to the next, and in so doing quickly acquiring tech debt.