AI Timelines Should Be Sped Up
Response to this blog post -- tl;dr: "GPT-3 made me ever so slightly more optimistic, and simultaneously more uncertain, about my previous AGI timeline predictions -- because it could be used to make better tools."
Let's not attempt "an AGI discussion" here, since such discussions are rarely founded on proper definitions. Kudos to the author of the original post for laying out a simple working definition for his discussion:
I’m going to take artificial general intelligence (AGI) to mean an AI system that matches or exceeds humans at almost all (95%+) economically valuable work.
I want to make what is basically a bet: the timeline for AI systems to meet his working "AGI" definition should be moved up considerably:
- 10% chance by 2030
- 50% chance by 2035
- 90% chance by 2040
- 99% chance by 2045
And if I am wrong, it will be because everything should be shifted up another 5 years. This is an exercise in far greater optimism and certainty.
And this is for one simple reason: artificial life ("AL").
I’ve long agreed that unsupervised learning is the future, and the right way to do things, as soon as we figure out how to do so.
I could not agree more, and yet all current lines of research into unsupervised methods essentially build on prior methodologies. So I disagree where the author states:
Well, it would require not needing many more new ideas ... because I don’t think there’s much time for the field to do a full-scale paradigm shift.
The fault is not in the scaling; we can scale anything. The fault is in the paradigms, or more precisely in the restriction to a handful of paradigms (e.g., embeddings, encodings, self-supervised learning). But the author is correct: we do not have any time to waste before acting on this fact.
What we need is to approach AI from an entirely different angle, and that angle is derived neither from neurological inspiration nor from over-parameterized statistics. It will be derived from the class of programs that we call "life". The sheer complexity and subtle efficiency of living organisms should be the greatest source of data and inspiration we could find for designing software going forward.
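The original post stays high-level, but for readers unfamiliar with the field, the classic member of this class of programs is Conway's Game of Life: a handful of local rules from which famously complex global behavior emerges (gliders, oscillators, even universal computation). A minimal sketch, purely illustrative and not part of the original post:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation of Conway's Game of Life.

    `live_cells` is a set of (x, y) coordinates of live cells on an
    unbounded grid.
    """
    # Count how many live neighbors each cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or if it is currently alive and has exactly 2.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row, a period-2 oscillator.
blinker = {(0, 1), (1, 1), (2, 1)}
# step(blinker) rotates it vertical; step(step(blinker)) restores it.
```

Nothing here resembles a neural network or a statistical model, yet the system "computes" in an open-ended way -- that emergence-from-simple-rules quality is what the artificial-life angle is about.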
The set of all artificial intelligences is subsumed by the set of all artificial lifeforms. A study of artificial life will provide humanity with much more profound results:
- A more mathematically complete picture of our own biology, the biology of all life on the planet, and all possible biologies
- Methodologies for better interpreting complex chemoinformatics and bioinformatics data and analysis results
- Revolutionary technology in energy, medicine, robotics, space, agriculture, computation, economics, human lifespans, and, of course, artificial intelligence
The outputs of AL will not be easily understood, and perhaps not understandable at all (since most of AL will be almost completely unsupervised: data creation, data consumption, program synthesis, model creation, task optimization, self-modification, etc.), but the results will be absolutely profound.
This is all very high-level, but that is all this post is meant to be: a teaser for what the 2020s can be.