> write a brief argument about why people should care about ChatGPT, especially its broader implications on the work force
There are several reasons why people should care about ChatGPT and its broader implications on the work force.
First, ChatGPT is a powerful and versatile language model that has the potential to revolutionize the way we interact with technology. By allowing people to communicate with computers in a more natural and intuitive way, ChatGPT has the potential to make a wide range of tasks easier and more efficient.
Second, ChatGPT has the potential to impact the work force in a number of ways. For example, it could be used to automate certain tasks that are currently performed by humans, such as customer service and data entry. This could lead to increased productivity and efficiency, as well as freeing up workers to focus on more creative and rewarding tasks.
Third, the development of ChatGPT and other similar technologies also has broader implications for the economy and society as a whole. As artificial intelligence continues to advance, it will be important for workers to adapt and develop new skills to stay competitive in the job market. This will require investments in education and training, as well as a willingness to embrace new technologies and ways of working.
Overall, ChatGPT and its broader implications on the work force are important to consider because they have the potential to greatly impact the way we live and work.
---
On the one hand: everything I've thrown at ChatGPT has been a thousand times more impressive than the parroting models of yore. Right now, automated news / wikipedia article creation is already a thing - it's not hard to see that expanded to everything from birthday cards to opinion pieces to consultancy reports to copywriters. It's also not hard to see the malevolent uses on the rise: imagine a social media that's 80% propaganda bots powered by AI language models? This whole AI boom has been going on for only a few months. It's going to get better.
On the other hand, unless you're a level 99 prompt warlock the results are (and will always be?) quite generic - the common denominator. Its confidence will always be brazen even when it's saying things that are plain untrue. Neither of those drawbacks is easy to train for or solved with moar data.
You know what's a very useful tool for a very small group of people? 3d printers. I own two. Right now one of them is spooling out a cryptex for my daughter's birthday. The other one hasn't been used in a while because it's primarily useful for turning digital designs into wax for casting in metal. The cheez whiz guy was mostly bought for fixturing but as soon as I had it I discovered that I could make entire cable runs and solenoid fixtures and sensor adaptors and stuff with it? Boy howdy I now have opinions about filament. The goop guy was bought entirely for casting. That's not how most people use them, though. Cheez whiz printers are mostly used for making tacky crap your wife could buy off of Amazon for half the price. Goop printers are mostly used for making orcs with tits to use in your Warhammer campaign. I can't think of any of my friends, colleagues or associates to whom I would say "buy a 3d printer." How much did you need a natural language processor anyway?

I've been researching bringing an AI onto our website. We need it for scheduling. There's this whole iterative goal-seeking mess whereby the clinician has X subset of appointments available and the patient has Y subset of availability for appointments. Finding the X-Y Venn diagram sweet spot is an iterative process whereby our receptionists play Battleship over the phone. AI sits there and does the mindless job of matching schedules, thereby freeing up my receptionists for problems that actually require intelligence. Do I want AI to do all our scheduling? Hell nah, not even if it could. I want it to do the tedious goal-seeking BS.

I saw a thing on Twitter a couple-three days ago where a programmer wrote a piece of gmail middleware for a developmentally-disabled friend with a landscaping business. The friend types some halting language in response to an email, that email passes through GPT-3, the client sees formal, grammatically-correct business language.
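That scheduling sweet spot, at its core, is a set intersection. A toy Python sketch of the idea - the slot format and function name here are invented for illustration, not taken from any real scheduling product:

```python
# Hypothetical sketch: find appointment slots available to both a
# clinician and a patient, replacing the "Battleship over the phone"
# iteration with a single set intersection.

def matching_slots(clinician_slots, patient_slots):
    """Return slots present in both availability lists, in clinician order."""
    patient_set = set(patient_slots)
    return [slot for slot in clinician_slots if slot in patient_set]

clinician = ["Mon 09:00", "Mon 14:00", "Tue 10:00", "Wed 11:00"]
patient = ["Mon 14:00", "Tue 09:00", "Wed 11:00"]

print(matching_slots(clinician, patient))  # ['Mon 14:00', 'Wed 11:00']
```

Real scheduling adds durations, priorities and cancellations on top of this, which is where the "iterative goal-seeking" comes in - but the X-Y Venn diagram step itself is this mindless.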
The guy with the disability has no problems mowing lawns and pruning trees but his written language skills have been an impediment. Likewise, if English is not your first language and you need to do a lot of conversing, models such as this go a long way towards leveling the playing field.

Look at it this way: this technology allows you to create a context-sensitive instruction manual. "I have six screws left and the door is wonky, what do I do?" It allows you to half-ass recipes: "I have eggs, milk and flour and want pancakes, what else do I need and in what proportions?" It allows you to freeball your layover: "I have eight hours at Logan International, what do most people do for fun on a Tuesday in Boston?" Will it do any of this stuff perfectly? Hell nah. Will it do it better than asking random strangers? Probably not. But it will do it better than not asking at all, and with a barrier to entry this low, that's not nuthin'.

> imagine a social media that's 80% propaganda bots powered by AI language models?

I doubt we need to imagine it. It's coming. Thing of it is, though, this isn't an AI problem, it's a social media problem. No social media network is profitable enough to pay for moderation, and the only remuneration available is through extremely low-value advertising. As such, social media reflects the incentives. "Perfect Nazi garbage for everyone" changes the incentives, principally by diluting the value of the network and suppressing the value of advertising. If these models end up choking the arteries of Facebook and Twitter like a platter of bacon cheeseburgers I will dance a goddamn jig. The content on social media from people you don't know is already valueless. It's already generic garbage. It's already useless. If it can be deprecated to the point where there's no reason to show it to others? We all win.
One of the lenses through which to view socially impactful technology is that it moves too fast for our society to adapt to it - the boomers can't handle misinformation bots, the millennials are all addicted to their phones, yadda yadda. It's not a perfect metaphor but it is a useful way to think about this, because these AI models feel like a large new virus when our society hasn't yet fully developed the antibodies to withstand the previous changes. I'm still having a hard time teaching my parents to distinguish phishing mails from regular ones, and now there's the chance that someone writes a fully personalized email automatically?! It might just make email 10x less useful in the process. It's not that I need this, it's that it is becoming stupidly easy to wrap these models around any nefarious text-based thing (which is most of the internet), and I feel like we will not have the tools to deal with this soon enough. So even if this does end up screwing most of social media and the internet over - if the patient is dead, is the operation still successful?
Sure. Model T introduced in 1908; seat belts mandatory for all passenger vehicles 1968. Social innovations will always kill off the bold and the slow. In general, those innovations that are judged to be a net negative are legislated against faster (lawn darts, MDMA, vaping) than those of mixed use (gasoline, VoIP). I think if you said "hey I have invented gasoline" in the social media environment of 2022 it would be a controlled substance within a week. But I see that as a legislation problem, not an innovation problem... and our legislation around the internet has been stupidly piecemeal for a stupidly long time.
We will be reading novels by AI before long, and soon after they'll be disturbingly more interesting and moving than human ones, and then few people will read the same novels, because they'll be generated for individual taste. It took some convincing, but I got ChatGPT to help me summon a demon. It really tried to persuade me otherwise.
I don't think that's accurate. All of these models are Markov chains, and Markov chains are short time horizon. They work by knowing what words follow each other. They are not suited to things like subplots and character arcs. What they can do is parrot existing stuff - the product it creates is, by definition, derivative.

I just asked it to "tell me a story about a dragon" (my daughter's current favorite). It told me a boring story about a dragon where it clearly knew what dragons were and that knights were supposed to slay them. I asked it to "tell me a story about a unikitty pegasus" (my daughter's favorite from a couple years ago) and while it had no idea what a unikitty pegasus was, it deduced that it should be full of Care Bears/My Little Pony tropes. When I told it to "tell me about that time an earthworm went to Vegas" it went "I'm sorry, but I don't have any information about an earthworm going to Vegas." At least Cleverbot gave me "I've never been to Vegas" because Cleverbot is Markoving closer to human conversation.

The task here is "draw a mouse." AI can definitely draw a mouse. "Draw Mickey Mouse" will give you Mickey Mouse. "Create a cartoon character as beloved as Mickey Mouse that the world has never seen before" is a crapshoot that you do not achieve through stochastic processes. We're talking about the difference between "draw a mouse" and "draw Mickey" where "Mickey" doesn't exist yet. Now - will a chatbot accidentally discover "Mickey?" On the flip side, novels pay so poorly these days that the chatbots might be the only things still writing them.
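The "short time horizon" of a word-by-word Markov chain is easy to show with a toy sketch in Python. To be clear, this illustrates the classic Markov-chain technique the post is describing - whether GPT itself actually works this way is exactly what gets disputed in the replies:

```python
import random

# Toy word-level Markov chain: each next word depends only on the
# current word, so any structure longer than one step is accidental.

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, picking a random follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the dragon slept and the knight crept and the dragon woke"
chain = build_chain(corpus)
print(generate(chain, "the", 8))
```

Every output is locally plausible (each pair of adjacent words occurred in the corpus) and globally aimless - no memory of where the sentence started, let alone a subplot.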
But do we know we are not Markov chains? Are not subplots and character arcs short time horizons? Don't we parrot stuff and create derivative products? Whatever the differences may be, GPT can explain an idea to me better than most people on the street. GPT can compose a sonnet better than most people on the street. It can compose faster than me, and can synthesize more subject matter than I can. Soon, GPT will be followed by an AI that can explain an idea and write poetry better than everyone on the street. It won't matter how, it will just matter that it does. I doubt that computers play chess in a manner like we humans do, but they always win chess now. We can argue that they aren't really playing chess, but how strong is the argument that we are? We might have created a new manner of thinking. It isn't our manner of thinking that resulted from biological processes, so it most likely must be different, but it also doesn't share the same limitations. I expect that any characteristic that humans can point to can be grokked and optimized for by AI.
A fair point. The question is how long is our time horizon, and how parallel are we? I would argue that Chris Nolan could be replaced immediately by a Markov bot. He has a story that he tells, without subplots, without variance, without any internal character factors causing any behavioral deviation. Shakespeare, on the other hand, was busy writing for three audiences at once: the peanut gallery in the front (fart jokes, aka "low comedy"), the bourgeoisie in the gallery (situational drama, aka "comedy") and the nobles in the box seats (emotional and spiritual drama, aka "high comedy"). Could you get there? I suppose? But when you're talking about dramatic entertainment, so much of it is dependent on who else is enjoying it. Chris Nolan, hack that he is, gets an automatic audience because he's Chris Nolan. Shakespeare is still being performed 400 years later because he threaded the needle between three audiences. Charlie Kaufman told an audience at a screening of "Eternal Sunshine of the Spotless Mind" that he was really scared of the future because he was out of ideas. Lo and behold, he hasn't done much since. Even the guy who wrote a movie about a hole in a building that leads you into John Malkovich's head bottomed out on originality, and "originality" is simply not something Markov bots can do.

> Are not subplots and character arcs short time horizons?

They are not. By definition, they are time-delayed, incremental stories that influence the main plot and are presumed to buttress the existing drama during lulls. From an efficiency standpoint subplots and character arcs have a lot less happening over a longer time horizon.

> Don't we parrot stuff and create derivative products?

No. What we perform is synthesis. Synthesis is well outside of the behavior of a Markov bot, by design. It is finding the stochastic mean of data sets, not creating a subconsciously-linked new scenario inspired by the feedstock but not derivative of it. Not all new art is synthetic; plenty of it is derivative.
More than that, I would argue that one of the baseline criteria for judging art is which side of the synthetic/derivative divide the critic considers the art to be on.

> Whatever the differences may be, GPT can explain an idea to me better than most people on the street.

GPT can summarize ideas that have already been explained by humans. This is an important distinction. Its database is larger than the man on the street's, its ability to parse language styles is conditionally better than the man on the street's. It has facts, not knowledge, and a model to filter those facts.

> GPT can compose a sonnet better than most people on the street.

Well, sure. Sonnets are mathematical. I'll bet it rips ass at limericks and haiku, too. Can it write good sonnets? You're right - the average man on the street also sucks at sonnets. That does not mean that "better than your average schlub" is a valid criterion for judging poetry. You've got a real Shadwell Forgery view of art here - since you don't really understand what you're looking at you assume it's good enough. I'll bet you have much more nuanced opinions about AI painting. The fact of the matter is, much of the discussion around AI-anything is people with very strong opinions about AI buttressing very weak opinions about art.

> It can compose faster than me, and can synthesize more subject matter than I can.

To what effect, though? I can tell the difference between good AI art and bad AI art. Most people can. It's a tool like any other and the people who use it well create more aesthetically pleasing results than those who are punking around in Stable Diffusion. What it's doing, however, is creating averages between existing datasets. It isn't creating anything. It's "content-aware fill, the artform."

> Soon, GPT will be followed by an AI that can explain an idea and write poetry better than everyone on the street.

We don't celebrate poetry by "everyone on the street," we celebrate poetry by people who are good at poetry. This entire approach of AI is designed to get as close to the existing skill level of published poets as possible. There is no aspect of it that can exceed it. It simply cannot surpass its training data.

> I doubt that computers play chess in a manner like we humans do, but they always win chess now.

What's the point of chess, tho? Take away the buttplug scandals and IBM brute-forcing solutions, chess is a game. You play it for fun.
Overwhelmingly, you play it with other people. All the metaphors chess has generated over the past thousand years are metaphors of human nature - chess is ultimately a competition between egos and intellects. Does it have to be? Absolutely not. But people prefer it that way.

The Mellotron is 57 years old. MIDI is about to turn 40. Symphobia has been the stock string section of shitty movies for more than ten years now and there are very, very few people who can explain why we can still tell the difference between a real orchestra and one that lives inside Kontakt. Are synths better now than they were 40 years ago? Indubitably. Will we get to the point where we can't tell the difference? Well, we've been predicting that changeover since 1965. Vannevar Bush predicted hypertext in 1945.

Two tries in, and we now have Sammy the Earthworm. Here's my point, though - is it worth reading? Or is it sweatshop YouTube videos without the sweatshop? Your presumption that it will eventually be worth reading is a leap of faith. That faith is misplaced. Humans appreciate art that is new; AI art, as AI exists right now and as AI development is headed, cannot be new.
> This entire approach of AI is designed to get as close to the existing skill level of published poets as possible. There is no aspect of it that can exceed it. It simply cannot surpass its training data.

And that, right there, is where we START referring to something as "AI", in my book. As you've pointed out, what people call "AI" today is stochastic... Markov chains and logic gates and derivations limited by the source dataset. That's not AI. That's just logic trees working from big datasets. It seems to me that once a script jumps the tracks and generates content that doesn't exist in the source dataset - an original thought - we have our first AI. And that hasn't happened yet, or we'd all know about it, right?
> How would you Fermi estimate the number of electrons in an Apple, with an example?

A Fermi estimate is a method of making a rough, order-of-magnitude estimate of a quantity based on a set of known or assumed values. To make a Fermi estimate of the number of electrons in an apple, you would need to know the size of the apple, its chemical composition, and the number of electrons in an atom of each element that makes up the apple.

For example, let's say that the apple is about the size of a tennis ball, which is about 6.5 centimeters in diameter. The chemical composition of an apple is mostly water, with some carbohydrates, vitamins, and minerals. Water is made up of two hydrogen atoms and one oxygen atom, so a tennis ball-sized apple would contain approximately 6.5 x 6.5 x 6.5 x (3 x 2) = 8,288 hydrogen atoms and 6.5 x 6.5 x 6.5 x 1 = 1,981 oxygen atoms.

Hydrogen atoms have one proton and one electron, so the apple would contain approximately 8,288 x 1 = 8,288 electrons from the hydrogen atoms. Oxygen atoms have 8 protons and 8 electrons, so the apple would contain approximately 1,981 x 8 = 15,848 electrons from the oxygen atoms. In total, the tennis ball-sized apple would have approximately 8,288 + 15,848 = 24,136 electrons.

However, this is just a rough estimate and the actual number of electrons in an apple could be different. There are also other elements present in an apple, such as carbon, nitrogen, and various minerals, which would add additional electrons to the total. So, the actual number of electrons in an apple could be significantly higher than the Fermi estimate.
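(For reference, that transcript's arithmetic is wrong even on its own terms - 6.5 x 6.5 x 6.5 x 6 is about 1,650, not 8,288, and cubing centimeters doesn't count atoms. A sketch of how the estimate actually goes, assuming an apple is roughly 100 grams of water - both assumptions mine:)

```python
# Back-of-envelope Fermi estimate of electrons in an apple,
# treating it as ~100 g of pure water. All figures are assumptions
# or standard constants, not anything from the transcript.

AVOGADRO = 6.022e23        # molecules per mole
apple_mass_g = 100.0       # assumed apple mass
water_molar_mass_g = 18.0  # grams per mole of H2O
electrons_per_h2o = 10     # 2 x 1 (hydrogen) + 8 (oxygen)

moles = apple_mass_g / water_molar_mass_g
molecules = moles * AVOGADRO
electrons = molecules * electrons_per_h2o

print(f"{electrons:.1e}")  # on the order of 3e25 electrons
```

So the real answer is around twenty orders of magnitude bigger than the bot's 24,136 - which is rather the point of the transcript.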
> My car won't start. The plugs spark, fuel pressure is good, and the starter cranks. What could be the problem?

There are several possible reasons why a car might not start even though the plugs spark and the starter cranks. Some possible reasons include a faulty ignition system, a problem with the fuel system, a faulty catalytic converter, or a problem with the battery or charging system. It is also possible that there is a problem with the engine itself, such as a blown head gasket or a seized piston. To determine the cause of the problem, it would be best to have a mechanic diagnose and repair the car.
> State Einstein's formula incorrectly.

Einstein's formula, also known as the mass-energy equivalence formula, is commonly stated as E = mc^2, where E is the energy of an object, m is its mass, and c is the speed of light in a vacuum. An incorrect way of stating this formula could be m = E/c^2, which reverses the relationship between mass and energy and would give incorrect results when used in calculations.
Thank you for restoring my sense of contempt for AI.
> How many piano tuners are there in Tokyo?

I'm sorry, but I am not able to browse the internet to find the answer to this question. I am a large language model trained by OpenAI, and my knowledge is based on the text that I have been trained on. I do not have the ability to access information from the internet or any other external sources.

> How would you Fermi estimate the number of piano tuners in Tokyo?

A Fermi estimate is a method of making a rough estimate of a quantity using available information and educated assumptions. To make a Fermi estimate of the number of piano tuners in Tokyo, we would need to gather information about the population of Tokyo, the percentage of the population that owns pianos, the average lifespan of a piano, and the frequency with which pianos need to be tuned. We could then use this information to make educated assumptions and arrive at a rough estimate of the number of piano tuners in Tokyo.

For example, let's say that the population of Tokyo is 13.5 million, the percentage of the population that owns pianos is 5%, the average lifespan of a piano is 50 years, and pianos need to be tuned once every year. We can then make the following assumptions:

• Of the 13.5 million people in Tokyo, 5% (675,000) own pianos.

• Each piano owner has 1 piano, for a total of 675,000 pianos in Tokyo.

• Each piano needs to be tuned once every year, for a total of 675,000 piano tunings per year in Tokyo.

• We can assume that each piano tuner can tune about 25 pianos per year, so we would need about 27,000 piano tuners to meet the demand for piano tunings in Tokyo.

This is just one possible way to make a Fermi estimate of the number of piano tuners in Tokyo. There are many other factors that could be taken into account, and different assumptions could lead to different estimates. The important thing is to use available information and make reasonable assumptions in order to arrive at a rough estimate.
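The bot's 25-tunings-per-year figure is the load-bearing weak assumption there. The classic version of this estimate (Fermi's own Chicago piano-tuner exercise) assumes a full-time tuner manages about four tunings a day over roughly 250 working days - every number below is an assumption, the population and ownership figures taken from the transcript:

```python
# Classic piano-tuner Fermi estimate, redone with an assumed
# full-time tuning rate. All inputs are assumptions.

population = 13_500_000      # Tokyo, per the transcript
piano_ownership = 0.05       # 5% own a piano, per the transcript
tunings_per_piano_per_year = 1

tunings_per_day = 4          # assumed full-time workload
working_days_per_year = 250
tunings_per_tuner_per_year = tunings_per_day * working_days_per_year  # 1000

pianos = population * piano_ownership          # 675,000 pianos
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year

print(round(tuners))  # 675, vs the transcript's 27,000
```

Same method, one sane assumption swapped in, and the answer moves by a factor of 40 - which is the whole game with Fermi estimates.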
Yeah, that’s out of range. But what would the median internet user say (given that the training set is online text)? These responses often seem better than what you might expect from a random Quora or Reddit user. I must admit I don’t know how many pianos a tuner tunes per year in Tokyo. I can guess what a full-time professional could do, but perhaps many work part time? Is the business seasonal? 1000 per year seems high.
I can't see any use of these things that isn't blogspam / SEO / propaganda. Which is half the internet already, but now it's cheaper... whee. The AI art things were at least quite pretty. This stuff is just dreadful.