eh... he started out kinda right but ended up kinda wrong... keep in mind this is the guy who wrote the book "Manufacturing Consent" and then said Canadians who refuse to take the least tested preventive drug injection in modern regulatory history shouldn't be allowed to buy food in grocery stores.
These machine learning models are just trying to predict the next block of words that matches their training data, which, honestly, is what about two-thirds of the human population does too. Even Jordan Peterson (when he's not too busy crying on camera) talked about how very few human beings truly have any real creative capacity.
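The "predict the next word" part can be sketched with a deliberately crude toy: a bigram counter that continues text with whatever word most often followed in its training data. This is an illustration of the objective, not of how a real LLM works (those use neural networks over tokens), and the corpus here is made up.

```python
from collections import Counter, defaultdict

# Made-up "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model has no notion of truth, only of what tended to come next in its training text, which is the point being made above.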
Humans train themselves on a vast corpus of existing information, fed to them first by the priests and kings, later by books, later by TV, and now by The Google and The Facebook.
The Large Language Models are interesting because they're often wrong about factual questions. They bullshit. They "make things up" in human terms, but in algorithmic terms they can't "bullshit" or "make things up" or even "plagiarize"... that's just further anthropomorphization of a purely mathematical machine.
AI models are good at "art" and stock photos, which I think will be a good thing for small creators. But anyone who's tried to use them for anything complex, like advanced programming questions (not something simple like reading a file or sorting a list), will find they confidently hand you wrong answers over and over. I'm reminded of the lawyer who submitted a brief written with ChatGPT, and it cited case law that literally never existed.