• 0 Posts
  • 9 Comments
Joined 8 months ago
Cake day: March 17th, 2024


  • It’s not even generative

    It doesn’t need to be generative to be AI.

    It’s a scraper that uses already available information to then “learn”.

    That’s just every single “AI” product out there; that’s how they work: they scrape data from all over the internet and build a model that makes predictions based on that data. ChatGPT doesn’t understand anything. It is simply a really complicated model that predicts which word is most likely to follow a given sequence of words (a rough sketch of that idea follows this comment). These “AI” aren’t intelligent, nor are they creative. By their very nature, they stay as close as possible to the data they are given and never deviate, because a deviation would mean inaccuracy.

    From a historical point of view, the word “AI” has simply meant “cool new technology”; that’s what it has been used to describe, while people assume AI means an “artificial person” like we see in the movies. So we need to be careful when using this word, because it can mean so many things that it ends up having little meaning.
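
    To make the “predicts the most likely next word” point concrete, here is a minimal, hypothetical sketch in Python. It is only a toy bigram frequency counter, not how ChatGPT is actually built (which uses a neural network), but it shows the same basic idea of continuing text the way the training data usually continues:

    ```python
    # Toy sketch: learn which word most often follows each word, then
    # "predict" by copying that statistic. Illustrative only.
    from collections import Counter, defaultdict

    def train(text: str) -> dict:
        """Count which word tends to follow which word in the training text."""
        words = text.split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def predict_next(model: dict, word: str):
        """Return the word that most often followed `word` in the data."""
        counts = model.get(word)
        return counts.most_common(1)[0][0] if counts else None

    model = train("the cat sat on the mat and the cat slept on the mat")
    print(predict_next(model, "the"))  # -> "cat" (it just echoes its data)
    ```

    Note that the predictor can only ever output words it has already seen, in positions it has already seen them in, which is the “never deviate from the data” point above.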




  • Philosophy is fine and all, but we can’t forget that from a practical standpoint all this philosophizing is useless. We can’t live our day-to-day lives operating under the belief that the material world doesn’t exist, and using The Problem of Knowledge to dismiss empirical evidence, by stating that we can’t be sure the material world even exists, is impractical and useless. Remember: philosophy is completely useless. The only value you will find in it is the development of critical thinking skills.

    Just imagine if a murderer caught red-handed could get away scot-free just by saying, “Hey, you can’t prove the material world exists, therefore you can’t prove the victim ever existed!”


  • It’s The Problem of Knowledge all over again, something philosophers have been debating for centuries. But I highly doubt you have studied any of it.

    That whole thing of “facts are just opinions” is nothing more than the devaluing of empirical evidence: it turns observable facts into a matter of opinion and turns any and all political discussion into a shouting match where nothing ever comes of it, because “it’s just my opinion”. This propaganda tactic is called the “Firehose of Falsehood”.

    I could go on and on about the nature of knowledge and the evolution of science, but I highly doubt you would care as you do not seem to know even the most basic things about The Problem of Knowledge and choose to go the self-contradictory skeptic route of “Knowledge doesn’t exist”.

    Edit: I would just like to add that just because our senses aren’t 100% reliable, that doesn’t mean that everything is false.



  • I don’t think you understand exactly how these machines work. The machine does not “learn”; it does not extract meaning from the tokens it receives. Here is one way to look at it:

    Suppose you have a sequence of symbols: ¹§ŋ¹§ŋ¹§ŋ¹§ŋ. You are then given a fragment of the sequence and asked to guess the most likely symbol to follow it: ¹§. Think you could do it? I’m sure you would have no trouble solving this example. But could you make a machine that could reliably accomplish this task, regardless of the sequence of symbols and regardless of the fragment given? Let’s imagine you did manage to create such a marvellous machine.

    If given a large sequence of symbols, spanning multiple books in length, would you say this pattern-recognition machine is able to create anything original? No, because it is simply trying to copy its original sequence as closely as possible.

    Another question: would this machine ever derive meaning from these symbols? No. How could it?

    But what if I told you that these symbols weren’t just symbols? Unbeknownst to the machine, each one of these symbols actually represents a word. Behold: ChatGPT.

    This is basically the general idea behind generative AI, as far as I’m aware. Please correct me if I’m wrong; this is obviously oversimplified. (A toy sketch of such a symbol-predicting machine is below.)
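
    Here is a minimal, hypothetical sketch in Python of the symbol-guessing machine described above; the class name `SymbolPredictor` and the context length are made up for illustration. It memorises which symbol follows each short context in its training sequence and, given a fragment, returns the most frequent continuation:

    ```python
    # Illustrative only: a tiny pattern-completion "machine" that never
    # understands its symbols; it only counts what followed what.
    from collections import Counter, defaultdict

    class SymbolPredictor:
        def __init__(self, context_len: int = 2):
            self.context_len = context_len
            self.table = defaultdict(Counter)  # context -> counts of next symbol

        def fit(self, sequence: str) -> None:
            """Record how often each symbol follows each context of length k."""
            k = self.context_len
            for i in range(len(sequence) - k):
                self.table[sequence[i:i + k]][sequence[i + k]] += 1

        def predict(self, fragment: str):
            """Guess the most likely symbol to follow the fragment."""
            counts = self.table.get(fragment[-self.context_len:])
            return counts.most_common(1)[0][0] if counts else None

    machine = SymbolPredictor(context_len=2)
    machine.fit("¹§ŋ¹§ŋ¹§ŋ¹§ŋ")
    print(machine.predict("¹§"))  # -> "ŋ": it simply copies the pattern it was given
    ```

    Whether the symbols stand for words or for nonsense makes no difference to the machine; it is matching patterns either way.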