Compare Llama 1 to the current state-of-the-art local AIs. They're on a completely different level.
Actually good but he still writes like a psycho
I think this is actually happening
Fuck no, that’d make the whole country red. We’re in this together
2016 was easily dismissed because Trump was a surprise candidate they weren't prepared to deal with, Hillary was disliked, and she still won the popular vote. None of those excuses apply in 2024.
At my last WFH job my daily setup was firefox, sublime text, slack (electron app), github desktop (also electron), and 3 terminals, one running a local dev server. It all ran fine.
Am I the only one who still has no problems with 8GB? Not that I wouldn't be happy with more, but I can't remember the last time I've even thought about RAM usage.
“AI, how do I do <obscure thing> in <complex programming framework>”
“Here is some <language> code. Please fix any errors: <paste code here>”
These save me hours of work on a regular basis, and I don't even use the paid tier of ChatGPT for it. Especially the first one, because I used to read half the documentation just to answer that kind of question. Results are accurate about 80% of the time, and the other 20% is close enough that I can fix it in a few minutes. I'm not in some obscure AI-related field either; any programmer can benefit from stuff like this.
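If you want to script these two prompts instead of pasting them by hand, here's a minimal sketch. The template functions are my own illustrative names, and the commented send step assumes the OpenAI Python SDK and a model name that may differ for you; any chat-completion API would work the same way.

```python
# The two prompt patterns above as reusable templates.
# Function names and the example model are illustrative, not from any spec.

def how_to_prompt(thing: str, framework: str) -> str:
    """The documentation-lookup prompt: 'How do I do X in Y'."""
    return f"How do I do {thing} in {framework}?"

def fix_code_prompt(language: str, code: str) -> str:
    """The debugging prompt: 'fix any errors in this code'."""
    return f"Here is some {language} code. Please fix any errors:\n\n{code}"

# Sending one of them (requires `pip install openai` and an API key):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name; substitute your own
#     messages=[{"role": "user",
#                "content": how_to_prompt("custom middleware", "Django")}],
# ).choices[0].message.content
```

The templates are deliberately plain strings, so they work with the free ChatGPT web UI (copy-paste) just as well as with an API client.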