As indicated in my last blog post, I would now like to focus on the alleged existential risk of artificial intelligence. Since, at my current level of knowledge, I have mostly classified AI as a cool tool that can do cool things, I want to use my further research to find out what negative sides AI can bring with it. Above all, I want to find out at what point AI itself can be classified as an existential risk and to what extent AI can have an impact on other existential risks.
Before I do this, however, I would first like to find out what AI actually is.
Defining the term artificial intelligence seems to be extremely difficult: just as with definitions of existential risks, there are entire papers in the literature that try to define the term. This shows how complex the whole topic of artificial intelligence is.
It therefore seems essential to take a closer look at this topic and, especially in times of digitalisation, to engage with its complexity, its advantages, but also its risks.
Oxford Languages has found a relatively simple definition and describes artificial intelligence as:
“the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.
Accenture describes AI as:
“Artificial intelligence is a constellation of many different technologies working together to enable machines to sense, comprehend, act, and learn with human-like levels of intelligence. Maybe that’s why it seems as though everyone’s definition of artificial intelligence is different: AI isn’t just one thing.”
Here it can be seen again that AI is not easy to describe, as it “isn’t just one thing”. AI can already be found in many everyday situations in our lives:
We already encounter AI almost every day without even noticing it. Examples include social media, autonomous vehicles and aircraft, digital assistants, and many more. To show in how many situations AI is already being used today, I would like to present the entire list created by Dataconomy.com. For more detailed descriptions, the website is definitely worth a visit!
As you can see here, AI can already be found in many situations in our daily lives. However, not all AI is the same; a distinction can be made in terms of AI’s capabilities. A deeper differentiation can be made between so-called Artificial Narrow (or “weak”) Intelligence, Artificial General (or “strong”) Intelligence, and Artificial Superintelligence:
Artificial Narrow Intelligence (ANI) can be seen as AI “which performs a single task or a set of closely related tasks” (Accenture). It “refers to any AI that can outperform a human in a narrowly defined and structured task. It is designed to perform a single function like an internet search, face recognition, or speech detection under various constraints and limitations. It is the constraints that lead people to refer to these functions as ‘narrow’ or ‘weak’” (levity.ai). Narrow AIs therefore copy human behavior “based on a set of rules, parameters, and contexts that they are trained with” and do not think for themselves (levity.ai).
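To make the idea of “rules, parameters, and contexts that they are trained with” concrete, here is a minimal sketch (with purely hypothetical training data) of a narrow system that does exactly one task - flagging spam-like messages - by counting words. It has no understanding beyond the parameters it was given:

```python
# A "narrow" classifier: one task, driven entirely by trained parameters.
# All messages and labels below are made-up illustrative data.
from collections import Counter

def train(messages):
    """Count how often each word appears in spam vs. non-spam examples."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label the text by whichever class its words occurred in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

training_data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to monday", "ham"),
    ("lunch tomorrow at noon", "ham"),
]

model = train(training_data)
print(classify(model, "free prize inside"))   # → spam
print(classify(model, "monday lunch plans"))  # → ham
```

The point of the sketch is the narrowness: the same system cannot recognise faces or translate languages; it can only score words against the counts it was trained with.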
Examples for ANI:
Accenture says that “these systems are powerful, but the playing field is narrow: They tend to be focused on driving efficiencies. But, with the right application, narrow AI has immense transformational power—and it continues to influence how we work and live on a global scale.”
Such ANI systems can bring several advantages: for one thing, they can increase productivity and efficiency, as they mimic human behaviour and thus relieve people of work or make it easier. In addition, ANI systems help with decision-making: algorithms can recognise patterns and make better decisions based on them. Similarly, a better customer experience can be achieved, for example through the above-mentioned recommender systems (levity.ai).
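The recommender systems mentioned here are a good example of this kind of pattern recognition. The following sketch (again with hypothetical purchase data) suggests items that users with overlapping purchases also bought - simple co-occurrence counting, not “understanding”:

```python
# Minimal co-occurrence recommender over made-up purchase histories.
def recommend(histories, user_items, top_n=2):
    """Score candidate items by how often they co-occur with the user's items."""
    scores = {}
    for history in histories:
        if set(history) & set(user_items):          # overlapping taste
            for item in history:
                if item not in user_items:
                    scores[item] = scores.get(item, 0) + 1
    # Rank by score (highest first), breaking ties alphabetically.
    ranked = sorted(scores, key=lambda i: (-scores[i], i))
    return ranked[:top_n]

purchase_histories = [
    ["book", "lamp", "pen"],
    ["book", "lamp"],
    ["pen", "notebook"],
]

print(recommend(purchase_histories, ["book"]))  # → ['lamp', 'pen']
```

Real recommender systems are of course far more sophisticated, but the principle is the same: recognise a pattern in past behaviour and act on it within one narrow task.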
The main difference between Artificial Narrow Intelligence and Artificial General Intelligence is that ANI merely imitates human behaviour, takes over repetitive tasks, and so on, whereas AGI can develop itself further, learn new skills, and behave more and more like a human being. AGI is still very nascent, however: ANI is the current state of AI, while AGI is the state that developers would like to achieve (levity.ai). Since ANI cannot yet do this, “human-machine collaboration is crucial - in today’s world, artificial intelligence remains an extension of human capabilities, not a replacement” (Accenture).
Examples for AGI can be seen within:
At first I only found the Accenture source, which only dealt with ANI and AGI. In the course of my research, I came across levity.ai, which also listed Artificial Superintelligence (ASI).
ASI is the highest level: Artificial Superintelligence “would be capable of outperforming humans” (levity.ai).
The aim of this section of my research was to gain a basic understanding of AI and to derive the next steps of my research. With the realisation that AI has three levels, and that the highest level - Artificial Superintelligence - would be able to outperform humans, I can confirm my initial idea that AI itself can pose an existential risk. Moreover, AI is present in so many everyday situations that I would also like to take a closer look at the extent to which AI can indirectly affect existential risks via an intermediate step.
Therefore, I will now divide my research into two parts: On the one hand, AI as an existential risk and, on the other, AI as an indirect influence on existential risks.
Back to the post about Existential Risks!
Take me to the post about Artificial Intelligence as an Existential Risk!
Take me to the post about Artificial Intelligence as indirect influence on Existential Risks!