kleinbl00  ·  272 days ago  ·  post: OpenAI's Sora

I had a discussion with an old buddy about LLMs yesterday. He's writing fiction and is using ChatGPT like a rented mule.

He's got a character who's modeled on Andrew Tate but he wants him to be annoying, not a villain, so he'll type "give me ten things a sexist asshole would say about women that aren't awful." He's got a character who's a vampire so he'll type "give me a list of insults a vampire would use against townsfolk." Or he'll be analyzing plot points and he'll say "give me a list of movie scenes that would radically change the movie if they were absent."

In each one he goes through and picks what he likes. In the last one he argues with it. I pointed out that he's basically using ChatGPT as an extended thesaurus and he agreed. I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball, every now and then it will be brilliant. But - I pointed out - when you ask it for an opinion it will fall down every time, because it has absolutely no handle on any of its inputs and outputs. You can't ask it to tell you which scenes are crucial because it has no understanding of any of the concepts underneath. What it has is a diet of forum posts that it will never give you straight.

Shall we play "how can ChatGPT do my job?" 'cuz they've been trying to AI-automate my job forever.

See this guy? They were about $1500 back in '94. And what they do is analyze the audio signal passing through them looking for feedback, and then drop one of eight filters on it. You can adjust the sensitivity to feedback, you can adjust the latch, you can adjust the release, you can adjust the aggressiveness. They were really big until about 2005 or so, when it became cheap and easy to TEF-sweep a room and ring it out, EQing away the frequencies that make it ring. I'm sitting here surrounded by ten speakers at 85dB and, having spent an afternoon mapping and collating and inserting between 4 and 15 filters per channel, I can't get feedback even if I hold a condenser in front of the left main.

Could an AI have done that? Fuck yeah. That would have been delightful. But not without me moving the mic sixty times, so what time am I actually saving?
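For flavor, the detect-and-notch loop a box like that runs can be sketched in a few lines. This is a loose illustration, not the Sabine's actual DSP - the sample rate, the sensitivity threshold, and the filter Q are all made-up stand-ins for the front-panel knobs:

```python
# Rough sketch of an automatic feedback suppressor: find a narrowband
# peak that towers over the rest of the spectrum, then drop a notch
# filter on it. All constants here are illustrative, not any real unit's.
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 48_000  # sample rate, Hz (assumed)

def detect_feedback(block, sensitivity=10.0):
    """Return the frequency of a suspected feedback tone, or None.

    'sensitivity' stands in for the sensitivity knob: how many times
    louder than the average spectrum a single bin must be before we
    call it feedback rather than program material.
    """
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), 1 / FS)
    peak = np.argmax(spectrum)
    if spectrum[peak] > sensitivity * spectrum.mean():
        return freqs[peak]
    return None

def notch(block, freq, q=30.0):
    """Drop a narrow biquad notch at 'freq' - one of the eight filters."""
    b, a = iirnotch(freq, q, fs=FS)
    return lfilter(b, a, block)

# Demo: a 1 kHz "ring" buried in a little noise
rng = np.random.default_rng(0)
t = np.arange(12_000) / FS
howl = np.sin(2 * np.pi * 1000 * t) + 0.01 * rng.standard_normal(t.size)
freq = detect_feedback(howl)
if freq is not None:
    howl = notch(howl, freq)
```

The latch and release knobs would govern how long a deployed notch stays down and how it backs off - state this sketch doesn't carry.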

That active-seeking feedback reduction thing has made it into machine tools - each servopak on my mill has more filters than that Sabine. And in general, the approach everyone takes is "set as many as you need to kill steady-state, use the roaming ones carefully," because who knows what modes you'll run into with this or that chunk of aluminum strapped down getting chewed up.

Everything I've got is already a waveform. We've been using Fourier transforms to operate on them for 40 years. My life is nothing but math. And despite the fact that GraceNote has literally released every song they know about as training data, telling the AI "make my mix sound better" still fucking failwhales. Like, on a basic, simple level. It understands what the sonogram of a song should sound like but that's like reconstructing a fetus from an ultrasound. What you get is uncanny valley nightmare fuel.

I don't need the mediocre middle of a million mixes, I need excellence. And excellence comes from humans because it is, by definition, not the mean. Anyone expecting that a machine purpose-built to give you a statistical average can give you only the good outliers is going to be disappointed, for the simple fact that the machine doesn't understand "good" or "bad" - it understands "highly rated" or "much engaged with." The machine thinks this is the best Jurassic Park cover ever made:

And the only way you can deal with that is to nerf it out on a case-by-case basis.

You could argue that LLMs are good for facts but not opinions, but the problem is that their method for handling facts only works for opinions. Are they useful? Yes. Are they a tool that will make big changes to a few industries? I don't see how they can't. Am I honestly excited to see their actual utility? You damn betcha. But where the world is now is this:

People who don't understand AI inflicting it on people who don't need AI to the detriment of people who don't want AI.

That's it. That's the game.