Don’t Worry About AGI, Worry About ANSI

With the recent advances in AI, there has been a huge increase in discussion around AGI (Artificial General Intelligence). The exact definition of what will qualify as AGI is murky, but it is generally thought of as “capable of doing anything that a human could do”. Of course, framing things in such a human-centric way opens up a can of worms. Under some definitions, that requires the AI to be capable of empathy, love, and the full range of human emotions. It also basically requires a physical humanoid body capable of navigating the world. At this point I’m pretty certain that most people won’t accept that a machine has “human-level intelligence” unless it occupies a high-quality android body (and even then many will question it).

But even if we accept a “brain-in-a-box” AI as AGI, provided it can perform basically any non-corporeal task as well as a human could, we still don’t have a clear path to achieving that. Simply scaling up the transformer architecture won’t get us there, since on its own it has neither long-term memory nor agency. Those capabilities can be bolted on, and perhaps a scaled-up LLM (Large Language Model) fine-tuned for long-term memory use and equipped with bolted-on agent capability will resemble something like AGI, but it is no sure thing. Certainly, current attempts at this sort of cognitive architecture using GPT-4 have so far fallen short. This has led AI critics to say that AGI could still be decades away (and perhaps it is).

What seems to be largely ignored in these discussions is what I would call ANSI (Artificial Narrow Super Intelligence). We don’t need a system capable of broad general intelligence for it to be incredibly powerful and fundamentally disruptive, and the path to it is already rather clear. In fact, in specific narrow fields we already have it: AI can already play chess and Go better than any human, and within those narrow fields it is already a kind of superintelligence. GPT-4 is far more interesting, of course, since it has a much more general intelligence coupled with a breadth of knowledge that far surpasses any human’s. In terms of raw reasoning skill it still falls short of even an average human, but it can often mask this thanks to the huge amount of knowledge it is able to recall nearly perfectly.

Now the race is on to hyper-scale LLMs after ChatGPT wowed the world, so we will almost certainly see much more powerful models coming out in the next few years. In fact, one of the co-founders of DeepMind predicted that within 2-3 years we will see models trained with 100x the compute of GPT-4. We know that simply scaling up a massive LLM won’t get us AGI on its own, but it may very well get us ANSI. Within the bounds of LLM limitations (no long-term memory, no agency), it could be many times more intelligent than the smartest human. It might not be able to do anything autonomously, but it could act as a sort of oracle, able to answer questions that no human has been able to answer. This might not count as AGI, but it could easily kick off the singularity nonetheless, since human engineers could simply ask the system to solve the hard problems in many fields (like synthetic biology, fusion power, nanobots, lithography, and of course AI).

In many ways this is really the optimal scenario, since it mostly sidesteps the alignment problem. An ANSI would be able to help transform technology and offer huge benefits to human society without posing any real danger, since it would have no agency and no long-term planning capability. It may also deliver huge economic growth without causing a complete upheaval to society through job destruction. While reaching AGI would likely mean that all jobs would be replaced by AI, an ANSI without agency would be unable to replace most jobs (though it would still cause a large amount of job displacement). That doesn’t mean it is without danger: it would certainly be ripe for misuse by malicious actors, and there is still the possibility that some emergent property of increased intelligence would give it limited agency, which it might use to manipulate the human users of the system.

The future is incredibly difficult to predict at this point, but it seems inevitable that we will see LLMs scaled massively beyond GPT-4 in the next few years (and perhaps we’ll see another architectural breakthrough, like the transformer, that completely changes the game again). Given how impressive GPT-4 already is within the narrow fields it is good at, it is easy to extrapolate that a much more powerful model should show some truly superhuman capabilities in those same fields.
