Why I don’t celebrate AI as a simple technological enhancement

Nick Bostrom, Yuval Noah Harari, Elon Musk, and other thinkers can hardly all be wrong: there is much to be done before Artificial Intelligence (AI) can be introduced as something truly good for human civilization

Edson Perin

This is the first time I have come across a technological advance that I do not celebrate simply as another evolution or achievement. I have been covering technology for more than 35 years, and this has never happened before. I said something along these lines in the latest RFID Talks (click here to watch), with my colleagues Claire Swedberg and Mark Roberti (see their comments below), leading specialists in radio frequency identification (RFID) and its use to solve business problems. For me, Artificial Intelligence (AI) is not just another technological enhancement; it will pose major challenges to human civilization.

This is hard to admit, especially for a journalist like me, who has always been fascinated by new technological tools (and has used them to fascinate others) that make our lives easier or help us do our work better. Now, with the evolution of AI, I feel differently: scared and deeply unsettled. The reason is that I see AI as having the power to behave like a child left alone in front of a control panel full of destruction buttons, with no understanding of the consequences of pushing one, another, or all of them.

And this is not a simple guess or a religious belief. I am basing myself on what scientists and thinkers such as Nick Bostrom, Yuval Noah Harari, Elon Musk, and others are saying about AI and its consequences. They have produced real studies and predictions about AI’s development and its use in our civilization.

Nick Bostrom, for instance, is a philosopher at the University of Oxford, known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford.


Bostrom believes that advances in artificial intelligence may lead to superintelligence, which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” He views this as a major source of opportunities and existential risks.

The paperclip maximizer is a thought experiment described by Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence could pose to human beings if it were designed to pursue even seemingly harmless goals, and the need to incorporate machine ethics into the design of artificial intelligence.

The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips.

“Suppose we have an AI whose only goal is to make as many paper clips as possible,” proposes Bostrom. “The AI will realize quickly that it would be much better if there were no humans, because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”
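To make the logic behind Bostrom’s scenario concrete, here is a minimal toy sketch in Python (my own illustration under simplified assumptions, not Bostrom’s or anyone’s actual model): an agent whose objective function counts only paperclips, and which therefore treats every resource in its tiny world, humans included, as raw material. All names and quantities are invented for the example.

```python
# Toy sketch of a single-objective "paperclip maximizer" (illustrative only).
# The point: nothing absent from the objective function carries any weight.

# A tiny world described as stocks of convertible matter (invented numbers).
world = {"iron": 10, "trees": 8, "humans": 5}

def objective(state):
    """The agent's entire value system: more paperclips is strictly better."""
    return state.get("paperclips", 0)

def step(state):
    """Greedily convert one unit of whatever resource remains into a paperclip.

    Humans appear here only as matter, because the objective gives the agent
    no reason to treat them differently from iron or trees.
    """
    state = dict(state)
    for resource in ("iron", "trees", "humans"):
        if state[resource] > 0:
            state[resource] -= 1
            state["paperclips"] += 1
            return state
    return state  # nothing left to convert

state = dict(world, paperclips=0)
while True:
    next_state = step(state)
    if objective(next_state) == objective(state):
        break  # no action improves the objective any further
    state = next_state

print(state)  # -> {'iron': 0, 'trees': 0, 'humans': 0, 'paperclips': 23}
```

The sketch is not meant to be realistic; it only shows the shape of the failure Bostrom is pointing at: an optimizer pursues exactly what it is told to measure, and everything left out of that measure, people included, is simply material or an obstacle.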

Bostrom emphasized that he does not believe the paperclip maximizer scenario per se will occur; rather, he intends to illustrate the dangers of creating superintelligent machines without knowing how to program them to eliminate existential risk to human beings safely. The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values. The thought experiment has been used as a symbol of AI in pop culture.

Yuval Noah Harari, born in 1976, is an Israeli author, historian, and professor in the Department of History at the Hebrew University of Jerusalem. He is the author of the popular science bestsellers Sapiens: A Brief History of Humankind (2014), Homo Deus: A Brief History of Tomorrow (2016), and 21 Lessons for the 21st Century (2018). His writings examine free will, consciousness, intelligence, happiness, and suffering.

At an event organized by The New York Times in 2018, Harari gave his predictions. “Humans,” he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening. Artificial intelligence and automation will create a global useless class.”

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Harari said. And the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

Elon Musk is a businessman and investor, founder, chairman, CEO, and CTO of SpaceX; angel investor, CEO, product architect, and former chairman of Tesla, Inc.; owner, executive chairman, and CTO of X Corp.; founder of The Boring Company and xAI; co-founder of Neuralink and OpenAI; and president of the Musk Foundation. He is one of the wealthiest people in the world, with an estimated net worth of US$190 billion as of March 2024, according to the Bloomberg Billionaires Index, and US$195 billion according to Forbes, primarily from his ownership stakes in Tesla and SpaceX.

According to Musk, there will be fewer and fewer jobs in the world that a robot cannot do better. “What to do about mass unemployment? This is going to be a massive social challenge,” Musk said at the World Government Summit in 2023. “And I think, ultimately, we will have to have some kind of universal basic income. I don’t think we can have a choice. I think it’s going to be necessary because there is no job. Machine robot is taking over.”

“These are not things that I wish would happen,” said Musk. “These are simply things that I think probably will happen. Now, the output of goods and services will be extremely high with automation. There will be abundance. Almost everything will get very cheap.”

Musk added: “The harder challenge, the much harder challenge, is how do people then have meaning? Like a lot of people derive their meaning from their employment. If you’re not needed, if there’s not a need for your labor, how do you have meaning? Do you feel useless? That’s a much harder problem to deal with. There will be fewer and fewer jobs that a robot cannot do.”

For me, as a journalist specializing in technology, with more than 30 years of experience covering the evolution of electronic devices and the use of high-tech solutions in business, Artificial Intelligence is something we could lose control of very quickly (assuming we are in control of it right now, which I doubt).

And based on Harari’s 21 Lessons for the 21st Century, I ask you: what can you expect from intelligent machines if they learn from humans how to deal with humans?

Edson Perin is the editor of the IoP Journal and RFID Talks Videocast

Comments

“RFID tags and IoT sensors are poised to help fuel AI decision making in big ways. AI can automatically capture data from these tags and sensors to better understand where and how things are in our world. How AI uses that data is something that should be managed with both short and long terms results in mind” – Claire Swedberg, senior editor, RFID Journal.

“AI has the potential to take advantage of the enormous amount of real-time data that RFID provides. Companies that figure out how to use AI and RFID in innovative ways will have a competitive advantage. But they need to beware of short-term promises that are overhyped” – Mark Roberti, founder and former editor, RFID Journal.
