LOL

Prompt: Darth Vader entered the room, cape billowing. “What is the meaning of these… festivities?”

Response: “Strange and solemn, now,” Princess Leia replied. “Will pirates make jokes at Jabba’s birthday cake? Oh, and for a moment, Darth Vader was too full of himself for that ceremony, hence all the mirth.” “Did your father suffer there?” Vader roared and growled and turned to gaze up from the balcony.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs.
To be fair, as stated on the page you linked, that example was generated by a version of the model with only 117 million parameters — roughly 8% of the full model's ~1.5 billion parameters (a parameter count, not a data set size). The full version has not been publicly released, and examples from it are more impressive.
It's just another fucking Markov bot. The beauty of Markov chains is that they don't have to understand what the fuck they're chaining: they generate probabilistic outcomes from noise. That's why the output looks like speech but is total fucking gibberish. Yet we use this shit to hassle citizens at concerts. Everyone loves to lose their fucking minds over the flying-car future of chatbots that amusingly generate garbage, as if that's somehow better than lorem ipsum, and it's not. It's useful for fooling other machines into thinking they're observing a human.

StyleGAN, for example, can't tell the difference between wrinkles and glasses except in the context of "this line continues that line" and "this contrast continues that contrast." It does not have a model of "glasses." It has a model of "face," and it can be taught a model of anything else, but that model only looks good when you prune the shit that looks bad. "All models are wrong, but some are useful." The basic problem here is that every breathless tech journalist has underweighted the wrongness of the model in search of overweighting its usefulness, and every model is fine so long as you aren't reliant on it working.

They're running our economy right now, by the way. 65% of the trades in the public markets are Markov bots that think Harriet Tubman was "the first woman to cross the US-Mexico border to escape slavery on foot." Yeah, the keywords are in there: slavery, border, work. But the specifics are garbage.

This was the principal argument Sherry Turkle made in Alone Together: humans will bend over backwards to assign humanity to robots. We so want to pareidolia our way to robot overlords that nine times out of ten we give no fucks if the new god is chickenwire and carpet.
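For what it's worth, the word-level Markov chain being described really is this simple — a toy sketch, with made-up function names and corpus, of a chain that maps each word to its observed successors and then walks them at random (whether GPT-2 is fairly called one is the whole argument):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain: at each step, pick a random observed successor.

    The walker has no model of meaning at all -- it only knows
    "this word has followed that word before."
    """
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: no observed successor
            break
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

# Tiny illustrative corpus; real chains are trained on far more text.
chain = build_chain("the cat sat on the mat")
sample = generate(chain, "the", length=5, seed=1)
```

Every word the walker emits is grammatical locally (each pair was seen in the corpus), which is exactly why the output "looks like speech" while carrying no model of what any of it means.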