In my last blog post, I examined at what point artificial intelligence itself becomes a direct existential risk to the world's population. Artificial intelligence as such is not inherently an existential risk; only its "highest" level, artificial superintelligence, can pose a threat to humanity.
In this blog post (and possibly the following one), I would now like to evaluate the other ways in which artificial intelligence can affect humanity's existential risks. The aim is not to look at the direct connection, but rather at indirect connections that run through possible intermediaries.
When I first thought about how artificial intelligence can affect existential risks, I approached the question from the back, so to speak: I started from the risks themselves and worked backwards.
Let's take a few steps back and recall the previously discussed existential risks: nuclear war, war in general, biotechnology and genetics, climate change, other emerging technologies (such as forms of geo-engineering and atomic manufacturing), naturally occurring existential threats (asteroids, ...) and engineered pandemics. AI is left out of this list for now.
While in my opinion there is no connection between AI and naturally occurring existential threats, I repeatedly encountered one very prominent "intermediary" when questioning the existential risks of war (both conventional and nuclear), climate change and engineered pandemics: politics. On the one hand it can be influenced by AI, and on the other it can in turn influence these existential risks.
AI is not only used in the private sphere, as already described in the blog post about AI in general; it is also being used more and more in the political environment. According to the Harvard Business Review, these "applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world." (Martinho-Truswell, "How AI Could Help the Public Sector", Harvard Business Review.)
The existing literature distinguishes two ways in which AI can be used in politics. First, once a government is in place, AI can be used within its activities to achieve certain benefits. Second, AI can also be used before a government takes office. The latter in particular will now be examined in more detail, as it entails some risks.
The simpler side of AI's use in politics is where narrow-AI applications are used to streamline administrative tasks. In her research paper on Artificial Intelligence for Citizen Services and Government, Hila Mehr identified six main areas of application for AI in governments:
In summary, AI can bring great advantages in government settings wherever repetitive tasks or large amounts of data are involved.
However, as already indicated, AI is used not only within existing governments but also in the formation of such governments, and this may entail some risks.
As several presidential campaigns (Barack Obama, Narendra Modi and Donald Trump) have shown, AI can be a successful political tool: to gain more supporters, and therefore more votes in an election, politicians or their campaign teams used AI to study voters' behaviour and adjusted their positions accordingly. In other words, politicians adapted to the needs of the population based on the knowledge gained through AI, with the aim of reaching more voters.
On the other hand, AI can be used not only to observe the population's behaviour and adapt one's actions to it, but also to deliberately influence that behaviour, for example through AI-driven content on social media.
AI can be used to simplify political and administrative work, but also as a means of influencing election campaigns. Depending on the intended use, AI can therefore be seen as both a support and a threat. If it is misused and, for example, a radical party comes to power as a result, the use of AI can significantly increase the risk of existential catastrophe, for instance if that party builds nuclear weapons and thereby raises the risk of nuclear war.
Back to the post about Artificial Intelligence as an Existential Risk!
Back to the Mainpage!