One specific example that is worth paying attention to is that of scientific progress, because it is conceptually very close to intelligence itself: science, as a problem-solving system, comes close to being a runaway superhuman AI. Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science itself, whether lab hardware (e.g. quantum physics led to lasers, which enabled a wealth of new quantum physics experiments), conceptual tools (e.g. a new theorem, a new theory), cognitive tools (e.g. mathematical notation), software tools, or communications protocols that enable scientists to better collaborate (e.g. the Internet)…
Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity.
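To make the mismatch concrete, here is a rough back-of-the-envelope sketch in Python. The 17.5-year doubling time (the midpoint of the 15 to 20 year range quoted above) and the rate of "one unit of progress per decade" are illustrative assumptions, not figures from the essay; the point is only that exponential headcount growth combined with roughly linear output implies steadily falling per-researcher productivity.

```python
# Illustrative arithmetic for "exponential input, linear output".
# Assumptions (not from the essay): researcher headcount doubles every
# 17.5 years, and the field produces one constant unit of progress per decade.

DOUBLING_TIME_YEARS = 17.5

def headcount_multiplier(years: float) -> float:
    """How much larger the research workforce is after `years`."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

for years in (25, 50, 100):
    workforce = headcount_multiplier(years)
    progress = years / 10  # linear progress: one unit per decade, by assumption
    print(f"after {years:>3} years: workforce x{workforce:5.1f}, "
          f"progress x{progress:4.1f}, per-capita output x{progress / workforce:.2f}")
```

Under these toy numbers, a century of 17.5-year doublings means a workforce roughly 50 times larger, while per-capita output falls to about a fifth of its starting value.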
How come? What bottlenecks and adversarial counter-reactions are slowing down recursive self-improvement in science? So many, I can’t even count them. Here are a few. Importantly, every single one of them would also apply to recursively self-improving AIs.
- Doing science in a given field gets exponentially harder over time — the founders of the field reap most of the low-hanging fruit, and achieving comparable impact later requires exponentially more effort. No later researcher will ever make progress in information theory comparable to what Shannon achieved in his 1948 paper.
- Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications. Remember that a network with N nodes has N * (N - 1) / 2 edges (see the sketch after this list).
- As scientific knowledge expands, the time and effort that have to be invested in education and training grow, and the field of inquiry of individual researchers gets increasingly narrow.
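On the collaboration point above, a minimal Python sketch of the N * (N - 1) / 2 formula shows how quickly the number of potential pairwise links outgrows the headcount itself; the sample sizes below are arbitrary.

```python
# The number of potential pairwise links among N researchers grows as
# N * (N - 1) / 2, i.e. quadratically, while N itself grows only linearly.

def pairwise_links(n: int) -> int:
    """Number of edges in a fully connected network of n nodes."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} researchers -> {pairwise_links(n):>12,} potential collaborations")
```

Multiplying the headcount by 1,000 multiplies the number of potential collaborations by roughly a million, which is one way to see why communication overhead eats into the gains of a growing field.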
François Chollet is the primary author of the well-known Keras framework for deep learning in Python. He is also the creator of Wysp, a platform for creatives to learn, create, and share art. He currently works as an engineer/researcher at Google.
Intelligence has not been clearly defined by researchers. The Singularity is a projection of overly intellectual scientists who fetishize optimization and believe intelligence to be the be-all and end-all of problem-solving and human relationships, while dreaming of an era where computers run everything, creating an orderly, rational world. The Singularity is Heaven for nerds and their god is a silicon-based ineffability.
"The Singularity and the noosphere, the idea that a collective consciousness emerges from all the users on the web, echo Marxist social determinism and Freud's calculus of perversions. We rush ahead of skeptical, scientific inquiry at our peril, just like the Marxists and Freudians.” Jaron LanierThe Singularity is Heaven for nerds and their god is a silicon-based ineffability.
It's an interesting thought, although I expect that a more general kind of Singularity may happen still (if you define it as a period that is unrecognizable). I'm reminded of a quote I read on this a couple of years ago, which I am paraphrasing (and the author of which I can't remember): "Thinking we'll make computers think by making them faster is like thinking that if we make buildings tall enough, we can teach them to fly."