- To track performance, the researchers fed ChatGPT 1,000 different numbers. In March, the premium GPT-4 correctly identified whether 84% of the numbers were prime or not. (Pretty mediocre performance for a computer, frankly.) By June its success rate had dropped to 51%.
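For scale on that parenthetical: primality testing is a task an actual computer gets exactly right, every time. A minimal trial-division sketch (illustrative only — this is not the researchers' methodology, and the number range is made up):

```python
def is_prime(n: int) -> bool:
    # Deterministic trial division: slow for huge n, but exact.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Classify 1,000 numbers; a computer's "success rate" here is 100%.
prime_count = sum(is_prime(n) for n in range(2, 1002))
print(prime_count)
```

Which is the point: 84% on this task isn't "smart", it's a parlor trick degrading in real time.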
There was a time? When Intel ate shit for fucking up the 5th decimal place of long division in hardware.
- The growing dissatisfaction with Intel's response led to the company offering to replace all flawed Pentium processors on request on December 20. On January 17, 1995, Intel announced "a pre-tax charge of $475 million against earnings, ostensibly the total cost associated with replacement of the flawed processors." This is equivalent to $783 million in 2021.
Know how you know a math coprocessor is fux0red? You run math on it.
- The SRT algorithm can generate two bits of the division result per clock cycle, whereas the 486's algorithm could only generate one. It is implemented using a programmable logic array with 2,048 cells, of which 1,066 cells should have been populated with one of five values: −2, −1, 0, 1, 2. When the original array for the Pentium was compiled, five values were not correctly downloaded into the equipment that etches the arrays into the chips – thus five of the array cells contained zero when they should have contained 2.
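The failure mode described above — a digit-selection table with a few cells holding 0 instead of 2 — can be sketched in miniature. This is a toy radix-2 SRT divider (digits −1, 0, 1; one bit per step), not the Pentium's radix-4 design, and the "bad cell" range is invented for illustration. The point it demonstrates is real, though: once the table hands back the wrong digit, the partial remainder leaves its legal range and the quotient never recovers.

```python
def srt2_divide(n: float, d: float, steps: int = 30, table_bug: bool = False) -> float:
    """Toy radix-2 SRT division: compute n/d one quotient digit per step.

    Requires d normalized to [0.5, 1) and |n| < d, so the partial
    remainder r stays within [-d, d] -- as long as the digit table is right.
    """
    assert 0.5 <= d < 1.0 and abs(n) < d
    r = n      # partial remainder
    q = 0.0    # accumulated quotient
    for i in range(1, steps + 1):
        r *= 2.0
        # Digit selection looks only at a rough estimate of r, which is
        # what made a hardware lookup table (PLA) practical.
        if r >= 0.5:
            # Simulated flaw: one "cell" of the table returns 0
            # where it should return 1 (hypothetical range).
            digit = 0 if (table_bug and 1.4 <= r < 1.5) else 1
        elif r <= -0.5:
            digit = -1
        else:
            digit = 0
        r -= digit * d
        q += digit * 2.0 ** -i
    return q

print(srt2_divide(0.7, 0.9))                  # close to 0.7777...
print(srt2_divide(0.7, 0.9, table_bug=True))  # wildly wrong
```

Same shape as the FDIV bug: the algorithm is fine, the hardware is fine, and one bad table entry poisons every division whose operands happen to walk through it.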
Know how you know ChatGPT is fux0red? You turn it on.
- In response to questions about the new research, OpenAI said in a written statement: “When we release new model versions, our top priority is to make newer models smarter across the board. We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it.”
The whole of the world's stock markets this year are focused on how shiny the Magic 8-ball is, not whether it can predict the future. Which, fundamentally? means the markets don't care.
- Could the erosion of the ability to solve math problems be an unintended consequence of trying to prevent people from tricking the AI into giving outrageous responses?