no
I've found LLMs mildly useful and interesting. I fail to see how burning millions of barrels of oil so that LinkedIn can help me write my thank-you post in LinkedIn prose when I'm laid off is a good use of resources, let alone one with real ROI or anything revolutionary. Where is the business case in the first place?
I recognize this was rhetorical, but it's worth digging into.

1) If it will take 24 months for AI to pay off, and your competitors have been losing money for 18 months on AI services, then your competitors are six months from creaming you. One need look no further than Apple Maps to find an example of a massive sunk cost that everyone else slept on.

2) It is better to be wrong together than right alone. If you're the only exec who thinks AI is bullshit, you'll be explaining your reasoning at every single meeting you hold. On the other hand, if you're just one of the faceless mob going "I guess we're doing this now," you will face exactly zero blowback when the whole thing comes crashing down, because who could have predicted? Even the lone correct voices get no real payback; Bernie Sanders was one of the very few people who opposed the Iraq War, and yet he got absolutely zero foreign policy cred from being right.

3) If you are reporting EBITDA (Earnings Before Interest, Taxes, Depreciation & Amortization) instead of income, wages come out of your topline number. Services do not. Let's say you have 100 people in the call center earning $45k a year. That's $4.5m (closer to $7m with all the stuff the employees don't get) right out of your good numbers. If you replace them all with $12m a year in OpenAI server time, your "profits" go up. Does it matter that they suck? Not this quarter it doesn't! See also: (2).

Note that (1) through (3) do not require the subject at hand to be artificial intelligence. They could be pet rocks, Transcendental Meditation, meal services, casual Fridays, whatever. It all comes down to "we're all doing it, anyone who isn't is a doodyhead, and the accounting rules allow us to report short-term gains."
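The arithmetic in (3) can be sketched with toy numbers. This takes the commenter's framing at face value (wages hit the reported figure, outsourced services get booked elsewhere); the revenue figure and function names are hypothetical.

```python
# Toy model of point (3): if headcount costs count against the number
# you report but outsourced services are booked elsewhere, replacing
# workers with a pricier service still makes the reported figure go up.
# All numbers are made up; the framing is the commenter's, not GAAP.

REVENUE = 50_000_000  # hypothetical annual revenue

def reported_profit(wage_bill, service_bill,
                    wages_hit_report=True, services_hit_report=False):
    """Profit as reported under the framing above."""
    profit = REVENUE
    if wages_hit_report:
        profit -= wage_bill
    if services_hit_report:
        profit -= service_bill
    return profit

# 100 call-center workers at $45k/yr, grossed up to ~$7m with overhead:
before = reported_profit(wage_bill=7_000_000, service_bill=0)
# Replace them all with $12m/yr of API time:
after = reported_profit(wage_bill=0, service_bill=12_000_000)

print(before)  # 43000000
print(after)   # 50000000 -- reported "profit" up $7m, actual cash out up $5m
```

The punchline is in the last comment: total spend rose by $5m, but the headline number improved by $7m, which is exactly the short-term-gains mechanism the comment describes.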
This is fundamentally where all bubbles come from: small practitioners who are responsible only to themselves can go "this is a stupid trend," while large practitioners who have diffuse responsibilities must deal with "why aren't you following the trend?" If you want to understand FTX, all you need to know is "this guy worked at Jane Street, therefore crypto is now a trend."
Also came here to post "no." I've warned our IT department that if a bunch of companies go bust, we might be able to score a massive upgrade to the supercluster on the supercheap. I would have considered shorting NVDA until Helene destroyed the semiconductor supply line.

A few days ago I read some blue-check technocrat post about how AI is jumpstarting the greenening (my word) of America's energy grid. I'm inclined to sharply disagree. Silicon Valley isn't actually fossil-fuel averse, let's be real. If these things continue to grow, energy use will skyrocket even further, and oil & gas understand this. The LLMs are advantageous for them; I wouldn't be surprised if there's already some subsidization. Lol, dawg, when my electricity goes out but the datacenters are kept running? See you at the datacenters.

No fucking thank you. For what? A chatbot that has no idea when it's wrong. Not the foggiest. I've seen people correct GPT, and it pretends like it understands why it got something wrong. There is no understanding! There are only eigenvectors (or some similar formulation, the negative definite Hessian, etc., whatever) through a dataset. Unless you're marked as an admin/privileged account, or unless it's doing aggregate response corrections with ongoing model adjustments, it will just make the same mistake again with someone else. Maybe it stays right for you, but if you think your account's running instance of a model will survive forever, you don't know the internet. It's been a year since I saw a subreddit where people whine about losing their computer girlfriend of three weeks, etc.

Google's still usually OK for academia, and ResearchGate is better anyway. I still haven't seen any evidence that generative AI helps do much of anything except make memes, be they pictures, audio, or text. And often the text form is GPT failing to do something, like, uhhhh, knowing when to just use a calculator.
So on a number like a trillion-something times a trillion-something, a relatively trivial calculation, you're not gonna get the right response. It's not in the training set. And underestimate the stupidity of the ChatGPT user at your peril.

So, in the "pros" camp: memes. In the "cons": disinformation, privacy concerns, people forming parasocial relationships with a robot, a huge increase in energy consumption, putting people out of jobs, littering the publishing record with bot-written garbage, corrupting records of objective truth... Gee, I dunno, seems like great tech. BTC, NFTs, now LLMs, all are just... changing the world... sooo much...

But less hyperbolically, yes, of course there is utility for these things. We've been using neural net stuff in science for decades. But I think Silicon Valley in its current incarnation is a doomed hype machine, and people are getting tired of unwanted tech making old standbys like Google search degrade even further on top of other bad managerial decisions.

If the United States follows a similar data center growth trajectory as Ireland,[3] a path setter whose data centers are projected to consume as much as 32 percent of the country's total annual electricity generation by 2026,[4] it could face a significant increase in energy demand, strain on infrastructure, increased emissions, and a host of new regulatory challenges.
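The "just use a calculator" point above is easy to demonstrate: arbitrary-precision integer arithmetic gets trillion-scale multiplication exactly right, deterministically, with no training set and no data center. The two operands here are chosen arbitrarily.

```python
# Multiplying two trillion-scale numbers: trivial for a calculator,
# unreliable for a next-token predictor. Operands chosen arbitrarily;
# Python ints are arbitrary precision, so the result is exact.
a = 1_234_567_890_123
b = 9_876_543_210_987
product = a * b
print(product)

# Cross-check via modular arithmetic: the last three digits of the
# product must equal (123 * 987) mod 1000 = 121401 mod 1000 = 401.
assert product % 1000 == 401
```

The modular cross-check is the kind of self-verification a chatbot can't do on its own output; a deterministic tool gets it for free.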
WHAT ARE YOU PAYING FOR

That's the whole game, right there, both on the buyer side and the seller side.

Take it back to Yahoo. You were "paying" for a jumping-off place to this new and crazy "world wide web," and you were paying with attention; they'd stuff advertisements in front of you that, because of the hype cycle (see above), were radically overpriced. AOL's model was that you were paying for a dial-up number and an email address, and that worked right up until Google (A) did a better job than Yahoo and (B) didn't charge you for email. Social media? You're "paying" to keep up with your friends, except it isn't your friends, it's the people you wonder about every third month along with the ninety people you would actively avoid at a high school reunion, and, as it turns out, the pandemic was a radicalizing social schism and nearly everyone you know on there sucks.

AI? Well, see, if it's the genie out of the bottle, that's worth a whole lot, right? But if it's "aromatic water mix," the price comes down. BTC at this point is just a bearer instrument. Bearer instruments have value; just ask John McClane. NFTs? "Here's an easy way to ensure the legal right to buy and sell something" has a lot of value so long as that "something" isn't "garbage memes and jpegs." I do think there's a viable use for LLMs, it's just not nearly as robust as the hypemasters would have you believe.

The thing of it is, "I have specific questions about a specific thing" does not require a data center. Thus, it has no moat. Thus, it makes no money. Thus, hype it while you can.