Beyond Imagination: The Rise of Artificial General Intelligence
Exploring the Enigma of AGI and the Future It Holds for Humanity
This article was written for the Comprendre newsletter: subscribe to get the next article and more!
In recent months, the term AGI, or Artificial General Intelligence, has echoed through the corridors of the AI community, stirring a mix of apprehension and excitement. This concept, which represents an AI with the comprehensive intellect of a human, has become a focal point for industry leaders and innovators alike. Elon Musk, who fearfully compared AI to “summoning the demon”, stands in sharp contrast to the optimism of organizations like OpenAI, which has embedded the pursuit of AGI in its mission statement, aiming to benefit all of humanity.
Yet, despite the fervent discussions, AGI remains an enigma to many. What exactly constitutes an AGI? How close are current technologies like ChatGPT to achieving this milestone? And importantly, how do we navigate the path to AGI responsibly, mitigating risks while embracing potential benefits?
Let’s embark on a journey to demystify AGI, exploring its intricacies and pondering the profound questions it raises about our future.
Decoding AGI
Conceptually, an Artificial General Intelligence (AGI) is an AI with general intelligence comparable to, and ultimately perhaps greater than, that of human beings. It would possess the ability to comprehend, learn, and apply knowledge with a versatility and adaptability akin to a human’s.
As beautifully explained by Ben Goertzel (founder of SingularityNET): “AGI may be thought of as aimed at bridging the gap between current AI programs […] and the types of AGI systems commonly seen in fiction — robots like R2D2, C3PO, HAL 9000, Wall-E and so forth.”
Because of its broad definition, AGI can be difficult to distinguish from other types of AI, especially large language models (LLMs) like ChatGPT or Llama, which already give people the impression of having a sense of self. To draw the line, we need to pay closer attention to the meaning of “general intelligence”. In the AI world, this term encompasses the capacity to:
- Achieve diverse goals and perform a wide range of tasks across different contexts and environments.
- Navigate unforeseen challenges not anticipated by its creators.
- Generalize acquired knowledge, transferring insights from one domain to another.
Measuring AGI: The Quest for a Benchmark
Is ChatGPT, with its advanced capabilities, an AGI? While it exhibits some traits of an AGI, it falls short of the full spectrum of human-like intelligence, lacking, if you ask it, “consciousness, self-awareness, and the ability to generalize across a broad range of tasks.”
This leads us to the conundrum of measuring AGI. How do we assess consciousness or self-awareness in a machine? Yuval Noah Harari posits that consciousness is a deeply personal experience, one that is inherently subjective and elusive to external validation. Yet, even if there is no standard test, we can still tell a conscious being apart, right? At least, that is what Alan Turing, the pioneering scientist and precursor of modern AI, thought when he invented the famous Turing Test.
Imagine this scenario: you engage in a conversation with two entities, one a human and the other an AI, without knowing which is which. Both are tasked with persuading you that they are, in fact, the human participant. If you are unable to reliably distinguish between the two, then the AI has succeeded — it has passed the Turing Test. This simple yet ingenious experiment is designed to assess a machine’s capacity for demonstrating intelligent behavior that is equivalent to, or indistinguishable from, human intelligence.
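To make the setup concrete, here is a minimal sketch in Python of how one such blinded trial could be run. The `human_reply` and `machine_reply` functions are hypothetical stand-ins (in a real experiment, one would be a person typing and the other a call to an AI model); the point is only to illustrate the protocol, not to prescribe an implementation.

```python
import random

def human_reply(question: str) -> str:
    # Hypothetical stand-in: in a real trial, a person types their answer here.
    return input(f"[human participant] {question}\n> ")

def machine_reply(question: str) -> str:
    # Hypothetical stand-in: in a real trial, this would call an AI model.
    return "I'd say it depends on the context, but probably yes."

def run_trial(questions: list[str]) -> bool:
    """One blinded trial: the interrogator questions 'A' and 'B' without
    knowing which is the human, then guesses. Returns True if the machine
    fooled the interrogator (i.e. the guess was wrong)."""
    responders = [human_reply, machine_reply]
    random.shuffle(responders)              # hide which label is the human
    labelled = dict(zip("AB", responders))

    for q in questions:
        print(f"\nInterrogator: {q}")
        for label, responder in labelled.items():
            print(f"{label}: {responder(q)}")

    guess = input("\nWhich participant is the human, A or B? ").strip().upper()
    return labelled[guess] is not human_reply

if __name__ == "__main__":
    fooled = run_trial(["What did you dream about last night?",
                        "Tell me a joke only a human would find funny."])
    print("The machine passed this trial." if fooled else "The machine was detected.")
```

Over many such trials, with many interrogators, the machine would be said to pass if people identify the human no better than chance, roughly half the time.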
While ChatGPT may seem to fulfill the criteria of the Turing Test, the lack of standardized parameters makes it challenging to declare with certainty that AGI has been achieved.
Researchers continue to develop alternative tests for AGI, such as the AGI Turing Box and practical assessments like the Coffee Test imagined by Steve Wozniak (co-founder of Apple), in which an AGI should be able to enter an average American home and figure out how to make coffee, or the Robot College Student Test. However, these remain theoretical and have yet to gain widespread acceptance.
The Current Landscape of AGI Development
If the wonders worked by GPT-4 don’t deserve the AGI label, then what does? As of now, no technology has attained the status of AGI. It remains a theoretical construct, a distant goal pursued by numerous organizations. As for who will get there first, and when, only time will tell.
A study conducted in 2020 found 72 companies and labs working on AGI, the most notable being OpenAI (creator of ChatGPT and DALL-E), DeepMind (creator of AlphaFold), SingularityNET, Blue Brain, and the Human Brain Project. Since then, we can add a few big competitors, such as Anthropic (creator of Claude), IBM, Google Brain, and the recently founded xAI, created by Elon Musk. The race for AGI isn’t one to miss!
These entities employ varied strategies, from enhancing software capabilities through increased parameters and data to innovating hardware solutions and experimenting with quantum computing. Predictions on the timeline for achieving AGI range from the optimistic 2029 forecast by futurist Raymond Kurzweil to a more conservative window between 2040 and 2060, as suggested by a survey of AI researchers.
Navigating the Risks and Rewards of AGI
The ascent of AGI is fraught with both promise and peril. High-profile figures like Elon Musk, Bill Gates, and Stephen Hawking have voiced concerns about the potential dangers of advanced AI.
The primary challenge with Artificial General Intelligence (AGI) lies in its enigmatic nature. We have yet to fully grasp how it will manifest, its definitive characteristics, and its limitations. This veil of uncertainty inevitably gives rise to concerns about its potential impact.
As a more advanced form of AI, AGI amplifies existing concerns associated with artificial intelligence. Its capabilities could disrupt the job market, potentially affecting the global economy, or be harnessed for intensified surveillance, both private and public, which may undermine the foundations of democratic societies.
The inherent risks of AGI are also unique due to the considerable power it may wield. If developed without rigorous ethical standards and safety measures, AGI could inadvertently adopt undesirable traits or engage in harmful actions. This could stem from a precipitous race towards AGI, driven by entities with limited regard for safety or ethical considerations.
Moreover, once AGI comes into existence, it could pose challenges stemming from inadequate oversight. For instance, earlier this year, a widow attributed the suicide of her husband to his interactions with Chai Research’s Eliza chatbot, a tragic incident that highlights the potential dangers of mismanagement.
Increasingly, experts and laypeople alike are voicing concerns about the existential risks posed by AGI. There is apprehension that AGI, akin to HAL from “2001: A Space Odyssey” or Skynet from “The Terminator,” could become adversarial or elude the control of its creators. These scenarios epitomize the Alignment Problem and the Containment Problem, which are central to the discourse on the safe development of AGI.
Yet, the potential benefits are equally transformative. AGI could offer solutions to global challenges such as climate change or provide unprecedented, personalized support for mental health and well-being.
In conclusion, AGI remains a concept shrouded in mystery, a testament to humanity’s enduring quest to reach the zenith of technological evolution. As we stand on the precipice of this new frontier, we are reminded of Socrates’ humbling admission, as recorded by Plato: “I know that I know nothing.” This acknowledgment of our limitations is perhaps the most thrilling aspect, for it is in the unknown that discovery and progress thrive.
Join us next week as we delve into another frontier of the future: Quantum Computing. Until then, may your week be filled with curiosity and discovery!