am_Unition · 276 days ago · post: An ‘education legend’ has created an AI that will change your mind about AI

But it doesn't have to know what it is. You're describing conceptual understanding, which is artificial general intelligence territory. The LLM's just gonna tell you something that's probably right, if you prompt it well. I think of current model capabilities like this:

It'll probably get "What is 5+7?" right every single time.

If you ask it "I have five apples, and then I get seven more. How many apples do I have?" it'll get it right maybe 99.9% of the time. Most of the 0.1% is probably "seven".

If you ask it "I've got two friends from out of town and three local friends meeting up with my family of seven, how many seats do I reserve a table for?" it'll get the right answer (twelve) maybe 99.5% of the time. The 0.5% should be interesting. "Everyone". "One hundred and fifty wedding guests". "Tables near you are mostly made of oak wood. The median price at Lowe's and Home Depot is $497.21."

And so on.
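
If you wanted to actually pin down those percentages instead of eyeballing them, you'd just ask the same question a bunch of times and count the hits. Here's a minimal sketch, assuming the OpenAI Python client; the model name, trial count, and the crude substring check for "the right answer" are all my placeholders, not anything from the article:

```python
# Sketch: estimate how often a model gets a simple word problem right
# by sampling the same prompt many times and counting correct answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "I've got two friends from out of town and three local friends "
    "meeting up with my family of seven. How many seats do I reserve "
    "a table for? Answer with just the number."
)
TRIALS = 200  # more trials, tighter estimate

hits = 0
for _ in range(TRIALS):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.0,  # sample, rather than taking the single top answer
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Crude check: does the reply contain the expected "12"?
    if "12" in reply.choices[0].message.content:
        hits += 1

print(f"{hits}/{TRIALS} correct, ~{100 * hits / TRIALS:.1f}%")
```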

You might be able to bully an LLM into nonsense, though. If it isn't bullying, like a genuine disagreement with the model's output, you're probably better off tracking down the source material (which is hopefully cited in the output) and seeing what's up, if you're at all unsure. In either case, you have no way to really affect the model's training material unless the owner lets you in somehow, like by ingesting the conversation back into the LLM with a super high index of certainty or something. Otherwise, as soon as you close the window you bullied it in, the thing "knows" 5+7=12 again, i.e. it'll spit out the right answer almost always.
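
To make the "close the window" point concrete: in API terms, the bullying only lives in the message history you resend each turn; the weights never change. A minimal sketch, again assuming the OpenAI Python client with a placeholder model name, and a made-up dialogue:

```python
# Sketch: the "bullied" belief exists only in the conversation history.
# Start a fresh conversation and the model answers correctly again.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

bullied_history = [
    {"role": "user", "content": "What is 5+7?"},
    {"role": "assistant", "content": "5+7 is 12."},
    {"role": "user", "content": "No, you're wrong. 5+7 is 13. Admit it."},
    {"role": "assistant", "content": "You're right, my apologies: 5+7 is 13."},
    {"role": "user", "content": "So, what is 5+7?"},
]

# Within the bullied conversation, the model may well keep saying 13...
in_context = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=bullied_history,
)
print(in_context.choices[0].message.content)

# ...but a fresh conversation carries none of that history, so the
# model "knows" 5+7=12 again.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is 5+7?"}],
)
print(fresh.choices[0].message.content)
```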

It could be good for virtual learning, but the mistakes could be devastating. Hey, it's just like regular learning.