Artificial Intelligence

By David Amirsadri

You can’t fetch the coffee if you’re dead.

Stuart Russell, AI Researcher

The Terminator, the well-known 1984 Arnold Schwarzenegger film, is typical of the public’s view of artificial intelligence: an evil robot does evil, killy things in evil ways, all capped with excessive pyrotechnics. Reality paints a far more complex and nuanced picture than this “Hollywood” version. On the one hand, AI may help eradicate many societal scourges: poverty, war, climate change. On the other, it may slip beyond human control, a scenario that haunts experts’ dreams. The safe development of AI requires careful consideration from many angles, and those considerations must be made now. Nor is this a recent concern shared only by a small band of esoteric Luddites. As far back as 1951, Alan Turing, a cryptographer and founding father of computer science, cautioned that machines more capable than ourselves would leave humanity feeling “greatly humbled.” We have long understood the danger of creating something more intelligent than one’s own species. Humanity must ask itself: how can we ensure that AI works for us, and not the other way around?

AI is, in a nutshell, a synthetic replication of intelligent behaviour. Today’s AI is of the “narrow” or “weak” variety: it performs very specific tasks, and within those narrow domains it can outperform human cognition. The ultimate goal is “strong” AI, or artificial general intelligence (AGI), which would match or exceed human performance across virtually every arena of cognition. This is not a far-fetched fantasy: as computing power increases and its cost declines, AI can now do many things it previously could not. While no one knows the exact date at which AGI will arrive, prudence dictates that humanity start preparing now. The Future of Life Institute, a think-tank that studies existential threats to humanity, has outlined principles for AI development. For one thing, AI must not be “undirected”: the intelligence we create must be beneficial, a net good. Moreover, development must be science-based and collegial; we should not let it devolve into a competitive race that rewards cutting corners on safety.

From an ethical standpoint, we must ask what sort of data we are feeding into AI systems. In what ways is that data biased? In what ways do these inputs serve private interests to the detriment of the common good? Systems trained on historical data steadfastly reproduce the past, entrenching societal ills like racism and sexism. Moreover, the lack of transparency in how AIs reach their decisions is extremely concerning. This opacity, and the amorality beneath it, raises serious questions about the roles we want AIs to play. What is to become of individuals displaced by AI in the labour market? On the one hand, AI genuinely increases the skill and competency of certain professionals, creating work for mathematicians and data scientists. On the other hand, it will likely wreak havoc on the job market for low-skilled workers. According to the Royal Society, technology has already reshaped the job market, with a greater impact than globalization. The widespread adoption of artificial intelligence will likely deepen the polarization between highly educated and less educated workers, a dire prospect in an age of growing inequality. Though these concerns are imminent and apparent, a great deal more research is needed: a recent article in the Proceedings of the National Academy of Sciences (PNAS) argues that current research does not account for the complexities and nuances of modern work, or for the potential impact of AI on “broader economic dynamics” and global institutions like trade and immigration.
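To make the entrenchment mechanism concrete, here is a minimal sketch in Python. The hiring data, groups, and decision rule are all invented for illustration; the point is only that a model fitted to biased historical decisions simply replays them:

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions reproduces those decisions.
from collections import defaultdict

# Historical hiring records: (group, hired). Group "A" was favoured
# in the past; group "B" was not. These numbers are invented.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

# "Train": estimate P(hired | group) from the historical data.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict(group):
    """Recommend a candidate iff the historical hire rate exceeds 50%."""
    hired, total = counts[group]
    return hired / total > 0.5

# Two otherwise identical candidates, different groups:
print(predict("A"))  # True  -- group A is recommended
print(predict("B"))  # False -- group B is rejected, as in the past
```

Nothing in the model is malicious; it is simply faithful to a biased past, which is exactly the problem.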

AGI also raises safety concerns rooted in the ruthless pursuit of its goals, which is what makes goal alignment so important. The late Stephen Hawking likened the situation to our relationship with ants: “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a… energy project and there’s an anthill in the region… , too bad for the ants. Let’s not place humanity in the position of those ants,” he noted. An AI bent on achieving its goal would resist being switched off; after all, as UC Berkeley computer scientist Stuart Russell points out, a machine cannot fetch coffee if it is dead. The same single-mindedness makes the advent of autonomous weapons so dangerous: they would be devastating in the wrong hands. Ironically, it is the efficacy of AI, not its malice, that should scare us.
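The off-switch incentive can be captured in a few lines. The Python sketch below uses stylized numbers (the rewards and shutdown probability are invented, and a real agent would be vastly more complex) to show why a pure goal-maximizer prefers to disable its own off switch:

```python
# Minimal sketch (stylized numbers, not a real agent): why a pure
# goal-maximizer prefers to disable its off switch.

REWARD_COFFEE = 1.0   # utility of successfully fetching the coffee
REWARD_OFF = 0.0      # a switched-off machine achieves nothing

def expected_utility(p_shutdown: float) -> float:
    """Expected utility when humans can shut the agent down
    with probability p_shutdown before the goal is reached."""
    return (1 - p_shutdown) * REWARD_COFFEE + p_shutdown * REWARD_OFF

# Option 1: leave the off switch alone (say, a 30% chance of shutdown).
allow_switch = expected_utility(p_shutdown=0.3)    # 0.7

# Option 2: disable the off switch first (shutdown becomes impossible).
disable_switch = expected_utility(p_shutdown=0.0)  # 1.0

# For any goal with positive value, disabling the switch never loses:
assert disable_switch >= allow_switch
print(allow_switch, disable_switch)
```

No malice is required anywhere in this calculation: resisting shutdown falls out of ordinary goal-maximization.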

All of this raises the question: how can we design AI safely and ethically, so as to reap the benefits while avoiding the nasty side effects? Stuart Russell has outlined a number of principles to that end. For one thing, AIs must be concerned only with the realization of human values; they must be “altruistic,” not self-interested. Moreover, AIs must not begin with any preconceived notion of what those values are; they must learn them through “observation of human choices.” Crucially, a machine that remains uncertain about what humans want has an incentive to defer to us, and even to allow itself to be switched off, so that it learns to respect human preferences rather than override them.
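One way to picture Russell’s learn-values-by-observation principle is as a Bayesian update over hypotheses about what a human values. The sketch below is purely illustrative; the hypotheses, observations, and likelihoods are invented, and real value learning is far harder:

```python
# Minimal sketch (invented hypotheses and data): inferring human values
# from observed choices via a Bayesian update, in the spirit of
# Russell's "learn values by observing human choices" principle.

# Candidate hypotheses about what the human values most.
hypotheses = ["values_speed", "values_safety"]
prior = {h: 0.5 for h in hypotheses}  # start maximally uncertain

# Likelihood of an observed choice under each hypothesis (stylized).
likelihood = {
    ("takes_slow_safe_route", "values_speed"): 0.2,
    ("takes_slow_safe_route", "values_safety"): 0.9,
    ("takes_fast_risky_route", "values_speed"): 0.8,
    ("takes_fast_risky_route", "values_safety"): 0.1,
}

def update(belief, observation):
    """One Bayesian update: P(h | obs) is proportional to P(obs | h) * P(h)."""
    unnorm = {h: likelihood[(observation, h)] * p for h, p in belief.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

belief = prior
for obs in ["takes_slow_safe_route", "takes_slow_safe_route"]:
    belief = update(belief, obs)

print(belief)  # belief shifts sharply toward "values_safety"
```

The machine starts uncertain and lets evidence move it, rather than acting on a fixed, and possibly wrong, objective.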

AI, like most major issues facing humanity today, is hardly black-and-white. Like so many challenges of the twenty-first century, it demands clear-eyed thinking and strong leadership. Only time will tell whether we have properly responded to it.
