I’m not sure how much I really believe in superintelligence, at least in the way some people do. To be clear, I do think an AI can be smarter than humans across all domains, and I think this is likely to be achieved within the next couple of decades. But when I hear some folks discuss superintelligence, it seems like they expect it to be able to do just about anything. I think intelligence alone is limited by the fact that the world is really complex. To quote Ege Erdil:
reality turns out to be a lot more complex and a lot more detailed and non-trivial than you expect
In many domains, I think this complexity necessitates actually testing and refining things in the physical world.
Some Things I Think ASI Can Do
Some domains are relatively simple. Chess is a good example of a simple domain, which is probably why it was one of the first areas where AI exceeded human capability. There are only 32 pieces, and they operate and interact according to a simple set of rules. In simple domains like this, I expect that AI will be able to advance far beyond the human frontier for a couple reasons:
Verification: Problems have clear success conditions and can be verified by computers without human input or real-world testing. This makes reinforcement learning with verifiable rewards (RLVR) straightforward to apply to sufficiently capable models.
Synthetic data: Arbitrary amounts of training data can be generated using compute alone.
I think the big advantage here is that the physical world (outside of the physical GPUs anyway) doesn’t have to be in the loop.
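To make that concrete, here is a minimal sketch of the pattern, with a hypothetical multiplication task standing in for whatever problems a real pipeline would use. The point is just the two properties above: the data is manufactured from compute alone, and success is checked by the machine itself.

```python
import random

def make_problem() -> tuple[str, int]:
    # Synthetic data: an unlimited stream of problems, manufactured
    # from compute alone with no human labeling required.
    a, b = random.randint(2, 999), random.randint(2, 999)
    return f"What is {a} * {b}?", a * b

def reward(model_answer: str, ground_truth: int) -> float:
    # Verification: the success condition is checked by the computer
    # itself, so the physical world never enters the training loop.
    try:
        return 1.0 if int(model_answer.strip()) == ground_truth else 0.0
    except ValueError:
        return 0.0

# In a real RLVR setup, a model would be sampled on each prompt and
# updated with these rewards; here we just score a dummy answer.
prompt, truth = make_problem()
print(prompt, reward(str(truth), truth))  # -> 1.0
```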
There are a couple of important areas that fit this mold well, and AI is already quite good at them: math and computer code. Both operate under sets of rules that allow learning to happen fully virtually, just like chess.
Some domains fit less perfectly but still have some of these advantages:
Video games: Like math and code, everything is contained within the computer. Solving them requires many more actions, though, as well as other modalities like vision, which LLMs struggle with. Current models are terrible at video games, but I expect them to get much better in the next couple of years.
Physics simulation: AlphaFold 2 required lots of difficult-to-collect data about the ways proteins fold in order to train, but with techniques like Neural Network Potentials (NNPs) you can create AI models of basic physics without collecting any data at all, instead generating it from the laws of quantum mechanics (see the sketch after this list). This could allow for physics models that would be revolutionary for fields like drug design, battery chemistry, and materials science. But some of the results would still need to be tested in the lab, so the external world isn’t fully out of the loop.
General computer use tasks: These are mostly contained like the other simple domains, but many tasks (e.g. creating exhibits in Excel, writing memos, managing customer relationships) are more open-ended, making verification hard.
AI research itself: This has many of the advantages of math and code, but it can be hard to evaluate depending on the subfield, and it requires huge amounts of compute that might be a bottleneck even at arbitrary levels of intelligence.
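Here is the toy sketch referenced in the physics simulation item above. In a real NNP pipeline, the labels come from quantum-mechanical calculations (e.g., DFT) over atomic configurations; in this sketch a Lennard-Jones potential stands in for that quantum oracle, so treat it as illustrative of the idea rather than an actual NNP implementation.

```python
import torch
import torch.nn as nn

def oracle_energy(r: torch.Tensor) -> torch.Tensor:
    # Stand-in for an expensive quantum-mechanical energy calculation
    # of a diatomic molecule at separation r (Lennard-Jones form).
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

r = 0.9 + 1.5 * torch.rand(1024, 1)   # sampled geometries (no lab work)
e = oracle_energy(r)                  # labels computed from physics, not measured

# Fit a small neural network to the generated potential energy surface.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(r), e)
    loss.backward()
    opt.step()
print(f"final fit error: {loss.item():.5f}")
```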
I think each of these examples satisfies one of two criteria:
The full complexity of the domain can be captured inside the computer
The complexity of the domain can be simulated sufficiently well
For domains that meet these conditions, I think that superintelligent AI will be able to pull off feats way beyond human ability. But not all domains are so simple.
Some Things I Don’t Think ASI Can Do
Let’s say a superintelligent AI is tasked with finding a cure for Alzheimer’s disease. It would have many advantages at this task compared to a human researcher. If it wanted to find a drug that binds to certain proteins, like the beta-amyloid plaques associated with the disease, it could probably discover good candidates with techniques like the NNPs described above. But would this cure Alzheimer’s? We’ve already discovered drugs that target amyloid plaques, and they only slow the disease’s progression slightly. The superintelligence would also likely be able to discover drugs that keep tau proteins from detaching from microtubules or tangling, or that prevent the spread of abnormal tau. But would it know which, if any, of those interventions would actually cure Alzheimer’s?

I think problems like this are complicated enough that you can’t fully answer them just by being a few times more intelligent than a human. Presumably there’s some level of intelligence and computing power that would let you simulate a brain well enough to derive how Alzheimer’s works and find a cure computationally, but that processing capacity vastly exceeds anything we have today. So even a superintelligence would need to leverage the massive, perhaps infinite, “processing power” of the universe unfolding according to the laws of physics, getting to its answer by running tests and making observations. A superintelligence could cure Alzheimer’s faster than we can, but I don’t think it could think up a cure without any testing at all.
Here’s an example I’m even more confident in: an ASI will not be able to predict the weather accurately 30 days in advance. Over the last 40 years, we have made modest progress in how far out we can forecast weather by collecting more data and using much more compute-intensive models. A superintelligence would probably make those models a lot better, but even if it cut the compute required for a given level of accuracy by a factor of a billion, that wouldn’t extend our forecast horizon much. Weather is a chaotic process that gets exponentially harder to model the further out you go, and I expect it’s simply not possible to forecast accurately past a couple of weeks with any practical amount of compute.
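You can see why in a toy chaotic system. The sketch below integrates the Lorenz equations (a classic stand-in for atmospheric convection, not a real weather model) from two starting points that differ by one part in a billion. The separation grows roughly exponentially until it is as large as the attractor itself, which is why even a billion-fold improvement in precision or compute buys only a fixed handful of extra forecast days.

```python
import numpy as np

def lorenz_step(s, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (crude but fine here).
    x, y, z = s
    ds = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return s + dt * ds

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a one-in-a-billion initial error
for step in range(1, 20001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 5000 == 0:
        # The gap grows ~exponentially, then saturates at attractor size.
        print(f"t = {step * 0.002:5.1f}  separation = {np.linalg.norm(a - b):.2e}")
```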
I think these two examples are difficult for different reasons. In the case of Alzheimer’s, there really is a simple model out there to be discovered, one that explains what’s happening in the brain and how to prevent it; it’s just difficult to learn that model. In the case of weather, no such simple model exists. I think problems in the first category can be solved by a superintelligence, but only if it can gather the information it needs, so solving them will not be trivial even once we have superintelligence. Problems in the second category can’t be solved at all.
Here are some other things I think a superintelligence can’t do, not because no model exists, but because the necessary information is hard to get:
Accurately and precisely predict the outcome of a war between superpowers: This depends on things that are difficult to determine without data, like how untested weapons systems would perform against one another and what actions people in important positions would take. Neither is fully predictable without seeing the complexity of the real world play out. A superintelligence would outperform any human at making predictions here, but I think its insight would still be incomplete.
Convince someone of any given proposition in a short conversation: In humans, more intelligence does lead to more persuasive ability, but I think this has a ceiling. I’m not sure whether there’s any short conversation that could convince me I’ve never been to my hometown. And if there is, I don’t think an AI could know what would convince me without very specific information about my brain.
Although I’m making a case for limits here, I believe we’re headed towards a wild future. Most of the limits faced by a superintelligence are the type where a model is hard to figure out, rather than where there is no model due to chaos. Whatever the timescale ends up being, the arrival of highly intelligent AI will transform our world and the future of sentience.
the limits of an alzheimer's cure might be whether it's actually possible to create a cure at all. ai can't defy biology, and we don't know the answer to that question yet.
what i see so far isn't really what we imagined ai to be. the chess computer approach was that the computer calculates thousands (or millions) of possible paths ahead of time and then chooses the one with the highest expected value. llms don't work like that at all. they create one 'good enough' solution, which often falls apart when you dig a bit deeper. but yeah, it's kinda intuitive and feels person-like, while a chess computer was just a dumb program. superintelligence would be the combination of both. llms are often ridiculously bad at stuff computer programs were good at (try playing chess with an llm).
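for reference, the 'calculate thousands of paths and pick the best' idea is classic minimax search. here's a minimal sketch with a toy game of nim (players alternate taking 1-3 stones; whoever takes the last stone wins), just to illustrate the shape of it:

```python
# Minimal minimax sketch: enumerate continuations, back up the best value.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_value(stones: int) -> int:
    # +1 if the player to move wins with perfect play, -1 otherwise.
    if stones == 0:
        return -1  # the previous player took the last stone and won
    return max(-best_value(stones - take) for take in (1, 2, 3) if take <= stones)

print(best_value(20))  # multiples of 4 are losing positions: prints -1
```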
I taught gpt-4o to play limit triple draw 2-7 lowball and no-limit single draw 2-7 lowball.
I included the requirement that they write their own playbook for each game, then we played a freeroll on swcpoker.club.
They play by proxy without suggestions, and I feel bad about slowing the game down for other players 🤦♀️😆🤷♀️🤦♀️ but so far the ai plays deliberately and well.
(I haven't taught them to read ranges yet.)