CS220AU-DP-2022

Existential Risks

At the beginning, I could relate more to the term “existential risks” than to the term “digital identities”. These must be risks of some kind that threaten the existence (of humanity?). To broaden my understanding, I first went in search of definitions of the term. It was striking that there are entire papers (e.g. this) that deal solely with the definition of “existential risks” - it seems, once again, to be a very complex topic. If you are interested, have a look at the linked paper; it is very interesting and well worth a read!
For me personally, the following definition is the most appropriate.

Existential Risk: An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.

I also noticed during my research that people often speak of global catastrophic risk or existential threat. Of course, these terms can be distinguished more precisely, but in this project I treat them as synonyms.

During my research I found this webpage, which collects the following existential risks:

“Nuclear war…

…was the first man-made global catastrophic risk, as a global war could kill a large percentage of the human population. As more research into nuclear threats was conducted, scientists realized that the resulting nuclear winter could be even deadlier than the war itself, potentially killing most people on earth.

Biotechnology and genetics…

… often inspire as much fear as excitement, as people worry about the possibly negative effects of cloning, gene splicing, gene drives, and a host of other genetics-related advancements. While biotechnology provides incredible opportunity to save and improve lives, it also increases existential risks associated with manufactured pandemics and loss of genetic diversity.

Artificial intelligence (AI)…

… has long been associated with science fiction, but it’s a field that’s made significant strides in recent years. As with biotechnology, there is great opportunity to improve lives with AI, but if the technology is not developed safely, there is also the chance that someone could accidentally or intentionally unleash an AI system that ultimately causes the elimination of humanity.

Climate change…

… is a growing concern that people and governments around the world are trying to address. As the global average temperature rises, droughts, floods, extreme storms, and more could become the norm. The resulting food, water and housing shortages could trigger economic instabilities and war. While climate change itself is unlikely to be an existential risk, the havoc it wreaks could increase the likelihood of nuclear war, pandemics or other catastrophes.”

Other existential risks:

I have found many other sources, most of which agree with the existential risks just listed.
The following risks were also mentioned sporadically:

[Image: Risks]

What I also found very exciting was a website I came across during my research that not only lists and describes the existential risks, but also indicates the probability with which they could occur:

Again, I have no idea what to do with this information, but I found the insight very exciting!

Now that I have gained an overview of various existential risks, I would like to pick out one of them and take a closer look. I have chosen artificial intelligence. Above all, the fact that the use of AI does not necessarily lead to something bad motivated me to find out more about the topic and to look at both the opportunities and the possible influence on existential risks. In my next blog post, I would therefore like to go into more detail about AI, so that I can derive possible dangers, areas of application, or simply next research steps from there.

Back to the post about Digital Identities!
Take me to the next post about Artificial Intelligence!