kleinbl00  ·  303 days ago  ·  post: The Knowledge Economy Is Over. Welcome to the Allocation Economy

    Filing this under “mental models for understanding how to utilize LLMs”.

For all the wrong reasons.

    Hey! Dan Shipper here. Registration is open for my new course, Maximize Your Mind With ChatGPT.

Fuckin' what the world needs now is EST with buzzwords. Landmark with tech trends. 10/10. No notes.

    Time isn’t as linear as you think. It has ripples and folds like smooth silk. It doubles back on itself, and if you know where to look, you can catch the future shimmering in the present.

Not according to thermodynamics but do go off.

    Last week I wrote about how ChatGPT changed my conception of intelligence and the way I see the world. I’ve started to see ChatGPT as a summarizer of human knowledge, and once I made that connection, I started to see summarizing everywhere: in the code I write (summaries of what’s on StackOverflow), and the emails I send (summaries of meetings I had), and the articles I write (summaries of books I read).

Great. We can agree on that. LLMs take a corpus of knowledge, navigate it ad nauseam, build a black-box association LUT and then vomit out datapoints that have been stochastically randomized from regular to extra-spicy. If you want the mean, median and mode of a million color swatches, LLMs will give you a paint chip. Or, run it a dozen times, get a dozen similar paint chips.
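If you want the "regular to extra-spicy" knob spelled out, here's a toy sketch - made-up five-word vocabulary, made-up scores, nobody's actual model - of the temperature sampling that turns the paint chip into neon green:

    # Toy sketch of "temperature" sampling. Vocabulary and scores are invented
    # for illustration; a real model scores ~100,000 tokens, not five paint names.
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["beige", "off-white", "eggshell", "taupe", "neon green"]
    logits = np.array([3.0, 2.8, 2.5, 1.0, -2.0])   # hypothetical model scores

    def sample(temperature):
        # Softmax over temperature-scaled scores, then draw one token at random.
        z = logits / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return rng.choice(vocab, p=p)

    print([sample(0.1) for _ in range(8)])   # low temperature: almost always the corpus's favorite shade
    print([sample(5.0) for _ in range(8)])   # high temperature: the odds flatten and the spicy swatches creep in

Same box, same corpus, different spice setting - that's your "dozen similar paint chips."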

    Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do.

Well there's your first mistake. Summarizing is fucking math. What you're talking about is insight, and insight is what wins, not "summarizing."

    But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI.

Yup. If you want the mean, median and mode of a large corpus of data without any insights into the data what-so-fucking-ever, ChatGPT is the tool for you. So yeah - if you are basically eliminating insight as a valuable commodity, ChatGPT is fucking amazeballs.

    Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.

Great! You've now put yourself in the position of guessing whether the machine is correct or not without any way to get the machine to show its work.

    As Every’s Evan Armstrong argued several months ago, “AI is an abstraction layer over lower-level thinking.” That lower-level thinking is, largely, summarizing.

No, it's fucking PATTERN RECOGNITION.

    If I’m using ChatGPT in this way today, there’s a good chance this behavior—handing off summarizing to AI—is going to become widespread in the future. That could have a significant impact on the economy.

Let's be clear - those of us with insight both long for and dread a world where all you chumps have deprecated insight. Long for it because we're going to wipe the floor with you. Dread because the world will be a fucking dumpster fire.

    But what happens when that very skill—knowing and utilizing the right knowledge at the right time—becomes something that computers can do faster and sometimes just as well as we can?

I'm sorry but when did we go from "summarizing" to "knowing and utilizing" as if they were the same thing? Because they're not even vaguely the same thing. Here's a whole post about ChatGPT not knowing SHIT.

    It means a transition from a knowledge economy to an allocation economy. You won’t be judged on how much you know, but instead on how well you can allocate and manage the resources to get work done.

Does Dan have any employees? 'cuz I judge my employees on whether or not they can do the tasks I've hired them for.

    There’s already a class of people who are engaged in this kind of work every day: managers.

That is not what managers do. Managers coordinate people.

    They need to know things like how to evaluate talent, manage without micromanaging, and estimate how long a project will take.

I don't think this guy has ever met a manager.

    Individual contributors—the people in the rest of the economy, who do the actual work—don't need that skill today.

Or, for that matter, an "individual contributor."

    But in this new economy, the allocation economy, they will. Even junior employees will be expected to use AI, which will force them into the role of manager—model manager.

Fucking lol, every employee I have is "expected to use" Visual Basic, HTML5, VoIP, SSL and Java. They don't know any of that? That's fine. What they do is "their jobs" and they do them well. Why the fuck would ChatGPT be any goddamn different?

(500 words of big-think bullshit omitted)





veen  ·  300 days ago

Dan desperately wants to be an auteur. He's not.

What I like about the metaphor is that it reminded me a lot of how it feels to manage interns at my job. You're gonna need to instruct (prompt) them in a particular way, they're gonna run in whatever direction seems good to them regardless of whether it actually makes sense/is true, and it's up to you to coordinate various people and make sure the right task befalls the right person (model). It's a metaphor for how to use the increasing array of different tools and models and interfaces and whatnot. I feel there's a difference between how I use a normal tool versus how I use AI tools, precisely because they're both unreliable and a way to boost creativity or to outsource easily-controllable tasks. (Like interns.)

I fully agree that managing people and, you know, their feelings & morale & motivation is what a manager's actual job is, but I find Dan's argument that "we are all gonna be a bit more managerial due to AI tools cropping up in our jobs in weird ways" at least somewhat compelling.

kleinbl00  ·  300 days ago

Yeah I get it but look - you say "hey Intern give me this data"

and they come back with "here is a banana and a receipt for my Uber ride to buy bananas also I brought coffee for everyone but you please validate my parking"

and you go "well, wait. Hang on. How did bananas and coffee come into the discussion, also this wasn't part of the scope and I recognize that's my fault for not giving you a proper brief"

and the intern looks at you askance, already viewing you in terms of TikTok memes, BUT when you ask how bananas entered the chat, you'll at least get some reasoning.

AI gonna tell you bananas are data. And if you say "try again" you might get bananas, you might get oranges, you might get data. And if you use that data, you might find out later on that it's actually bananas. And if you query it enough you might come across some form of "this data is bananas," which somehow got you into the "desktop fruit" dataset, but there's nothing you can do to get fruit out of the data because what you're doing? Is querying a black box that belongs to someone else, and they don't really know where the Markov turned left either, and even if they did, there's nothing they can do except hand-tag "do not give bananas in response to data," which is great unless you're a grocery wholesaler.

The reliability of interns is an easily assessed quality. You can back-test any errors. You can tune your input to maximize your output. With the AI? If you don't like the output you can try your luck and run it again. But any answer you get, you're going to have to try again a different way anyway, since "factual correctness" is simply not a vector within the GPT space.