There is no inevitable dark side to AI | GUEST COMMENTARY

A robot designed by Engineered Arts and called Ameca interacts with visitors during the International Conference on Robotics and Automation (ICRA) in London, Tuesday, May 30, 2023. ICRA brings together top academics, researchers and industry representatives to show the newest developments; Ameca is a humanoid robot platform for human-robot interaction. (AP Photo/Frank Augstein)

Many people fret over some imagined, dystopian future to which artificial intelligence, or AI, might lead humanity. Yet, human beings are irrepressibly wandering wonderers, ever curious to explore what possibilities lie promisingly around the next corner. AI is just another manifestation of this impulse to discover and innovate. Indeed, ever since Prometheus gave fire to humanity, technology has been a key pillar in defining what makes us, us.

New and evolving technology has always borne two complementary faces: one looking rearward at what brought us to where we are today, and one looking expectantly forward to alternative futures among which we will exercise agency in choosing our path ahead. Reproving alarms intended to red-flag AI have been sounded repeatedly, often laced with inflated references to existential risk, as if AI were bound to trigger a mass extinction event lurking on the horizon or, at minimum, the destruction of civilization.


That’s not to say such cautionary notes should be ignored. Some technologies have indeed shown a potential for both good and bad; nuclear technology is the obvious example. On the dark side, nuclear fission led to bristling arsenals of nuclear weapons; on the light side, the same technological know-how yielded nuclear power that has improved humanity’s quality of life, in many quarters of the world rendering life more enriched than it might otherwise have been.

Yet there is no reason to subscribe to an inevitable dark side to AI, though a common clarion call aims to convince us otherwise, as if AI’s trajectory were unmanageable and bent toward harm. The argument conflates so-called “weak AI” and “strong AI.” Weak AI is narrow and single-tasked, built on relatively modest algorithms: it is the stuff of winning at chess or the game of “Go.” These are the systems whirring tirelessly and with precision through assembly lines, mortgage applications, facial recognition, roadway navigation, battlefield weapons targeting, health care forms and finance; the smart assistants leaping to our commands; and even the essay-writing chatbots and art-generating bots that have captured the public’s imagination, despite their errors and other quirks.


On the other hand, strong AI, more commonly called artificial general intelligence (AGI), is the longer-term future of this technology. One might think of AGI as a non-biological analog of the human brain’s neural network. The aim is for AGI’s cognitive suite to include genuine thinking, understanding, creativity, imagination, thought-experimentation and complex problem-solving, among other skills. Whereas today we ask “what is it like to be a person?” eventually we are likely to ask “what is it like to be an AGI system?” How AGI perceives and interprets the world will, in other words, admit a multiplicity of experiences.

Eventually AGI will self-optimize: it will have the advanced cognitive capacity to manage its own evolutionary development, without further major intervention by human coders, whose well-intentioned but comparatively less efficient and bias-prone intercession might come to hinder rather than help. AGI may no longer need to outsource its development to human programmers.

Speculatively, this will replicate humanlike consciousness, reaching a tipping point at which AGI matches and ultimately exceeds human intelligence in its broadest sense: thought, knowing, understanding, awareness, perception, reasoning, intuition, experience, information processing and much more. That speculative tipping point, or crossover event, is sometimes referred to as a singularity. As neuroscientists and philosophers of mind continue to unspool what human consciousness is, similar efforts will be made by them, AI technologists and others to unspool AGI consciousness.

Those reproving alarms will remain unnecessary, however, because safeguards can be put in place preemptively and iteratively to preclude unintended or rogue outcomes. Importantly, any such free agency in decision-making will require AGI to understand humanity’s culturally influenced moral norms bearing on decisions and behaviors, as well as rights and responsibilities. AGI is thus envisioned to prove supportive, bounded by operational and ethical safeguards and capable of remarkably complex thinking in aid of humanity’s cause.

Whether such an AGI would seek advantage rather than collaboration, in ways that might destabilize the common good, remains highly unlikely. At the moment, much of what is easy for humans to do is extraordinarily hard for machines, and the converse is true, too; from the standpoint of AGI, this will even out. Humans won’t be displaced or harmed; rather, creative human-machine partnerships will change radically for the better.

Keith Tidman writes essays on political, social and scientific opinion.