kleinbl00  ·  720 days ago  ·  post: I feel like we need to talk about AI language models

I don't think that's accurate. All of these models are Markov chains, and Markov chains are short time horizon. They work by knowing what words follow each other. They are not suited to things like subplots and character arcs. What they can do is parrot existing stuff - the product they create is, by definition, derivative. I just asked it to "tell me a story about a dragon" (my daughter's current favorite). It told me a boring story about a dragon where it clearly knew what dragons were and that dragons were supposed to be slain. I asked it to "tell me a story about a unikitty pegasus" (my daughter's favorite from a couple years ago) and while it had no idea what a unikitty pegasus was, it deduced that it should be full of Care Bears/My Little Pony tropes. When I told it to "tell me about that time an earthworm went to Vegas" it went

    I'm sorry, but I don't have any information about an earthworm going to Vegas.

At least Cleverbot gave me "I've never been to Vegas" because Cleverbot is Markoving closer to human conversation.
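For what it's worth, the "knowing what words follow each other" mechanic is easy to sketch. This toy bigram chain (hypothetical code, nothing like the scale of the models under discussion) shows why the time horizon is short: the next word depends only on the current word, so nothing resembling a subplot can survive more than one step.

```python
import random

def train_bigrams(text):
    """Build a bigram table: each word maps to the list of words seen after it."""
    words = text.split()
    table = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table, start, length=8, seed=None):
    """Walk the chain: each next word depends ONLY on the current word."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:
            break  # dead end: this word never appeared mid-corpus
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the dragon guarded the gold and the knight slew the dragon"
table = train_bigrams(corpus)
print(generate(table, "the", seed=1))
```

Every sentence it emits is stitched from pairs that already exist in the corpus - derivative by construction, which is the point being made above.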

The task here is "draw a mouse." AI can definitely draw a mouse. "Draw Mickey Mouse" will give you Mickey Mouse. "Create a cartoon character as beloved as Mickey Mouse that the world has never seen before" is a crapshoot that you do not achieve through stochastic processes. We're talking about the difference between "draw a mouse" and "draw Mickey" where "Mickey" doesn't exist yet. Now - will a chatbot accidentally discover "Mickey"?

On the flip side, novels pay so poorly these days that the chatbots might be the only things still writing them.

mk  ·  719 days ago

But do we know we are not Markov chains? Are not subplots and character arcs short time horizons? Don't we parrot stuff and create derivative products?

Whatever the differences may be, GPT can explain an idea to me better than most people on the street. GPT can compose a sonnet better than most people on the street. It can compose faster than me, and can synthesize more subject matter than I can.

Soon, GPT will be followed by an AI that can explain an idea and write poetry better than everyone on the street. It won't matter how, it will just matter that it does. I doubt that computers play chess the way we humans do, but they always win at chess now. We can argue that they aren't really playing chess, but how strong is the argument that we are?

We might have created a new manner of thinking. It isn't the manner of thinking that resulted from our biological processes, so it is most likely different, but it also doesn't share the same limitations. I expect that any characteristic humans can point to can be grokked and optimized for by AI.

kleinbl00  ·  719 days ago

    But do we know we are not Markov chains?

A fair point. The question is how long is our time horizon, and how parallel are we?

I would argue that Chris Nolan could be replaced immediately by a Markov bot. He has a story that he tells, without subplots, without variance, without any internal character factors causing any behavioral deviation. Shakespeare, on the other hand, was busy writing for three audiences at once: the peanut gallery in the front (fart jokes, aka "low comedy"), the bourgeoisie in the gallery (situational drama, aka "comedy") and the nobles in the box seats (emotional and spiritual drama, aka "high comedy"). Could you get there? I suppose?

But when you're talking about dramatic entertainment, so much of it is dependent on who else is enjoying it. Chris Nolan, hack that he is, gets an automatic audience because he's Chris Nolan. Shakespeare is still being performed 400 years later because he threaded the needle between three audiences. Charlie Kaufman told an audience at a screening of "Eternal Sunshine of the Spotless Mind" that he was really scared of the future because he was out of ideas. Lo and behold, he hasn't done much since. Even the guy who wrote a movie about a hole in a building that leads you into John Malkovich's head bottomed out on originality, and "originality" is simply not something Markov bots can do.

    Are not subplots and character arcs short time horizons?

They are not. By definition, they are time-delayed, incremental stories that influence the main plot and are presumed to buttress the existing drama during lulls. From an efficiency standpoint, subplots and character arcs have a lot less happening over a longer time horizon.

    Don't we parrot stuff and create derivative products?

No. What we perform is synthesis, and synthesis is well outside the behavior of a Markov bot, by design. A Markov bot finds the stochastic mean of its data sets; it does not create a subconsciously-linked new scenario inspired by the feedstock but not derivative of it. Not all new art is synthetic; plenty of it is derivative. More than that, I would argue that one of the baseline criteria for judging art is which side of the synthetic/derivative divide the critic considers the art to be.

    Whatever the differences may be, GPT can explain an idea to me better than most people on the street.

GPT can summarize ideas that have already been explained by humans. This is an important distinction. Its database is larger than the man on the street's, and its ability to parse language styles is conditionally better than the man on the street's. It has facts, not knowledge, and a model to filter those facts.

    GPT can compose a sonnet better than most people on the street.

Well, sure. Sonnets are mathematical. I'll bet it rips ass at limericks and haiku, too. Can it write good sonnets? You're right - the average man on the street also sucks at sonnets. That does not mean that "better than your average schlub" is a valid criterion for judging poetry.

You've got a real Shadwell Forgery view of art here - since you don't really understand what you're looking at you assume it's good enough. I'll bet you have much more nuanced opinions about AI painting. The fact of the matter is, much of the discussion around AI-anything is people with very strong opinions about AI buttressing very weak opinions about art.

    It can compose faster than me, and can synthesize more subject matter than I can.

To what effect, though? I can tell the difference between good AI art and bad AI art. Most people can. It's a tool like any other and the people who use it well create more aesthetically pleasing results than those who are punking around in Stable Diffusion. What it's doing, however, is creating averages between existing datasets. It isn't creating anything. It's "content-aware fill, the artform."

    Soon, GPT will be followed by an AI that can explain an idea and write poetry better than everyone on the street. It won't matter how, it will just matter that it does.

We don't celebrate poetry by "everyone on the street," we celebrate poetry by people who are good at poetry. This entire approach to AI is designed to get as close to the existing skill level of published poets as possible. There is no aspect of it that can exceed that level. It simply cannot surpass its training data.

    I doubt that computers play chess in a manner like we humans do, but they always win chess now.

What's the point of chess, tho? Take away the buttplug scandals and IBM brute-forcing solutions, and chess is a game. You play it for fun. Overwhelmingly, you play it with other people. All the metaphors chess has generated over the past thousand years are metaphors of human nature - chess is ultimately a competition between egos and intellects. Does it have to be? Absolutely not. But people prefer it that way.

The Mellotron is 57 years old. MIDI is about to turn 40. Symphobia has been the stock string section of shitty movies for more than ten years now and there are very, very few people who can explain why we can still tell the difference between a real orchestra and one that lives inside Kontakt. Are synths better now than they were 40 years ago? Indubitably. Will we get to the point where we can't tell the difference? Well, we've been predicting that changeover since 1965.

    We might have created a new manner of thinking.

Vannevar Bush predicted hypertext in 1945.

Two tries in, and we now have Sammy the Earthworm. Here's my point, though - is it worth reading? Or is it sweatshop Youtube videos without the sweatshop?

Your presumption that it will eventually be worth reading is a leap of faith. That faith is misplaced. Humans appreciate art that is new; AI art, as AI exists right now and as AI development is headed, cannot be new.

goobster  ·  718 days ago

    This entire approach of AI is designed to get as close to the existing skill level of published poets as possible. There is no aspect of it that can exceed it. It simply cannot surpass its training data.

And that, right there, is where we START referring to something as "AI", in my book.

As you've pointed out, what people call "AI" today is stochastic... Markov chains and logic gates and derivations limited by the source dataset. That's not AI. That's just logic trees working from big datasets.

It seems to me that once a script jumps the tracks and generates content that doesn't exist in the source dataset - an original thought - we have our first AI. And that hasn't happened yet, or we'd all know about it, right?