Nick Bostrom | |
---|---|
Born | Niklas Boström, 10 March 1973, Helsingborg, Sweden |
Education | University of Gothenburg (BA); Stockholm University (MA); King's College London (MSc); London School of Economics (PhD) |
Spouse | Susan[1] |
Awards | FP Top 100 Global Thinkers (2009); Prospect's World Thinkers list (2014) |
Era | Contemporary philosophy |
Region | Western philosophy |
School | Analytic philosophy[1] |
Institutions | Yale University; University of Oxford; Future of Humanity Institute |
Thesis | Observational Selection Effects and Probability (2000) |
Main interests | Philosophy of artificial intelligence; Bioethics |
Notable ideas | Anthropic bias; Reversal test; Simulation hypothesis; Existential risk; Singleton; Ancestor simulation; Information hazard; Infinitarian paralysis[2]; Self-indication assumption; Self-sampling assumption |
Website | nickbostrom.com |
Nick Bostrom (/ˈbɒstrəm/ BOST-rəm; Swedish: Niklas Boström [ˈnɪ̌kːlas ˈbûːstrœm]; born 10 March 1973)[3] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford University.[4]
Bostrom is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002)[5] and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times Best Seller.[6]
Bostrom believes that advances in artificial intelligence (AI) may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". He views this as a major source of opportunities and existential risks.[4][7]
Early life and education
Born Niklas Boström in 1973 in Helsingborg, Sweden,[8] he disliked school from a young age and spent his final year of high school learning from home. He was interested in a wide variety of academic areas, including anthropology, art, literature, and science.[1]
He received a B.A. degree from the University of Gothenburg in 1994.[9] He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] He also did some turns on London's stand-up comedy circuit.[8] In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational selection effects and probability.[10] He held a teaching position at Yale University from 2000 to 2002, and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.[5]
Research
Existential risk
Bostrom's research concerns the future of humanity and long-term outcomes.[4][11] He discusses existential risk,[1] which he defines as a risk in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". Bostrom is mostly concerned with anthropogenic risks, those arising from human activities, particularly from new technologies such as advanced artificial intelligence, molecular nanotechnology, or synthetic biology.[12]
In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[11]
In the 2008 essay collection Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relationship between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[13] and the Fermi paradox.[14]
Vulnerable world hypothesis
In a paper titled "The Vulnerable World Hypothesis",[15] Bostrom suggests that there may be some technologies that destroy human civilization by default[note 1] when discovered. Bostrom proposes a framework for classifying and dealing with these vulnerabilities. He also gives counterfactual thought experiments showing how such vulnerabilities could have arisen historically, e.g. if nuclear weapons had been easier to develop, or if they had been capable of igniting the atmosphere (as Robert Oppenheimer feared).[17]
Superintelligence
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom argues that superintelligence is possible and explores different types of superintelligences, their cognition, and the associated risks. He also presents technical and strategic considerations on how to make superintelligence safe.
Characteristics of a superintelligence
Bostrom explores multiple possible paths to superintelligence, including whole brain emulation and human intelligence enhancement, but focuses on artificial general intelligence, explaining that electronic devices have many advantages over biological brains.[18]
Bostrom draws a distinction between final goals and instrumental goals. A final goal is what an agent tries to achieve for its own intrinsic value. Instrumental goals are intermediary steps toward final goals. Bostrom contends that there are instrumental goals that will be shared by most sufficiently intelligent agents because they are generally useful for achieving any objective (e.g. preserving the agent's own existence or current goals, acquiring resources, improving its cognition); this is the concept of instrumental convergence. Conversely, he writes that virtually any level of intelligence can in theory be combined with virtually any final goal (even absurd final goals, such as making paperclips), a concept he calls the orthogonality thesis.[18]
He argues that an AI with the ability to improve itself might initiate an intelligence explosion, resulting (potentially rapidly) in a superintelligence.[19] Such a superintelligence could have vastly superior capabilities, notably in strategizing, social manipulation, hacking or economic productivity. With such capabilities, a superintelligence could outwit humans and take over the world, establishing a singleton (which is "a world order in which there is at the global level a single decision-making agency"[note 2]) and optimizing the world according to its final goals.[18]
Bostrom argues that giving simplistic final goals to a superintelligence could be catastrophic:
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.[20]
Mitigating the risk
Bostrom explores several pathways to reduce the existential risk from AI. He emphasizes the importance of international collaboration, notably to reduce race-to-the-bottom and AI arms race dynamics. He suggests potential techniques to help control AI, including containment, stunting AI capabilities or knowledge, narrowing the operating context (e.g. to question-answering), or "tripwires" (diagnostic mechanisms that can lead to a shutdown).[18] But Bostrom contends that "we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out". He thus suggests that in order to be safe for humanity, superintelligence must be aligned with morality or human values so that it is "fundamentally on our side".[20] Potential AI normativity frameworks include Yudkowsky's coherent extrapolated volition (human values improved via extrapolation), moral rightness (doing what is morally right), and moral permissibility (following humanity's coherent extrapolated volition except when it's morally impermissible).[18]
Bostrom warns that an existential catastrophe can also occur from AI being misused by humans for destructive purposes, or from humans failing to take into account the potential moral status of digital minds. Despite these risks, he says that machine superintelligence seems involved at some point in "all the plausible paths to a really great future".[7]
Public reception
Superintelligence: Paths, Dangers, Strategies became a New York Times Best Seller and received positive feedback from figures such as Stephen Hawking, Bill Gates, Elon Musk, Peter Singer, and Derek Parfit. It was praised for offering clear and compelling arguments on a neglected yet important topic. It was sometimes criticized for spreading pessimism about the potential of AI, or for focusing on long-term and speculative risks.[21] Some skeptics, such as Daniel Dennett and Oren Etzioni, contended that superintelligence is too far away for the risk to be significant.[22][23] Yann LeCun considers that there is no existential risk, asserting that superintelligent AI will have no desire for self-preservation[24] and that experts can be trusted to make it safe.[25]
Raffi Khatchadourian wrote that Bostrom's book on superintelligence "is not intended as a treatise of deep originality; Bostrom's contribution is to impose the rigors of analytic philosophy on a messy corpus of ideas that emerged at the margins of academic thought."[21]
Digital sentience
Bostrom supports the substrate independence principle, the idea that consciousness can emerge on various types of physical substrates, not only in "carbon-based biological neural networks" like the human brain.[26] He considers that "sentience is a matter of degree"[27] and that digital minds can in theory be engineered to have a much higher rate and intensity of subjective experience than humans, using fewer resources. Such highly sentient machines, which he calls "super-beneficiaries", would be extremely efficient at achieving happiness. He recommends finding "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".[28]
Anthropic reasoning
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[29]
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that an anthropic theory is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and identifies how each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
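A toy case often used in this literature (sketched here as an illustration, not quoted from the book) shows how the two assumptions diverge: a fair coin toss creates one observer if it lands heads and two observers if it lands tails, and each observer is then asked for the probability of heads.

```latex
% SSA: the mere fact that I exist does not favour either outcome,
% so the prior probability of heads is unchanged.
P_{\mathrm{SSA}}(\text{heads} \mid \text{I exist}) = \tfrac{1}{2}

% SIA: hypotheses are additionally weighted by how many observers they contain.
P_{\mathrm{SIA}}(\text{heads} \mid \text{I exist})
  = \frac{\tfrac{1}{2}\cdot 1}{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2}
  = \tfrac{1}{3}
```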
In later work, he has proposed the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[30] Bostrom claims events that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
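The direction of this bias can be illustrated with a small Monte Carlo sketch (an illustration of the general selection effect, not code from the 2010 paper; the catastrophe rate and extinction probability below are arbitrary assumptions):

```python
# Illustrative sketch of the anthropic shadow effect: histories in which a
# catastrophe caused extinction leave no observers, so surviving observers
# record fewer past catastrophes than the true rate would suggest.
import random

TRUE_RATE = 0.02        # assumed per-epoch probability of a catastrophe
EXTINCTION_PROB = 0.5   # assumed probability a catastrophe ends all observers
EPOCHS = 100
TRIALS = 100_000

observed_rates = []
for _ in range(TRIALS):
    catastrophes = 0
    survived = True
    for _ in range(EPOCHS):
        if random.random() < TRUE_RATE:
            catastrophes += 1
            if random.random() < EXTINCTION_PROB:
                survived = False
                break
    if survived:  # only surviving histories contain observers who can count
        observed_rates.append(catastrophes / EPOCHS)

print(f"true per-epoch catastrophe rate:  {TRUE_RATE:.4f}")
print(f"rate seen by surviving observers: {sum(observed_rates) / len(observed_rates):.4f}")
```

Because histories with many catastrophes are more likely to leave no observers at all, the frequency recorded by surviving observers systematically understates the true rate, which is the kind of statistical correction Bostrom and his co-authors argue is needed.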
Simulation argument
Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[31]
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
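The trilemma rests on a simple expected-fraction calculation; roughly in the notation of the 2003 paper (reproduced here as a sketch), the fraction of observers with human-type experiences who live in simulations is

```latex
% f_P : fraction of human-level civilizations that reach a posthuman stage
% f_I : fraction of posthuman civilizations interested in running ancestor-simulations
% N_I : average number of ancestor-simulations run by such interested civilizations
f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I + 1}
```

Because a posthuman civilization could run an astronomically large number of such simulations, this fraction is close to one unless one of the first two factors is close to zero, which gives the three alternatives above.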
Ethics of human enhancement
Bostrom is favorably disposed toward "human enhancement", or "self-improvement and human perfectibility through the ethical application of science", and is a critic of bio-conservative views.[32]
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[32] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved with either of these organisations.
In 2005, Bostrom published the short story "The Fable of the Dragon-Tyrant" in the Journal of Medical Ethics. A shorter version was published in 2012 in Philosophy Now.[33] The fable personifies death as a dragon that demands a tribute of thousands of people every day. The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal. YouTuber CGP Grey created an animated version of the story.
With philosopher Toby Ord (currently a researcher at the Future of Humanity Institute), he proposed the reversal test in 2006. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[34]
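The logic of the test is simple enough to render as a small sketch (an illustrative rendering, not an algorithm given by Bostrom and Ord; the judge function and its verdicts are hypothetical):

```python
# Illustrative sketch of the reversal test's decision logic (hypothetical rendering,
# not from Bostrom and Ord). `judge` stands in for an evaluator's verdict on
# changing some trait, e.g. average cognitive ability, in a given direction.

def reversal_test(judge) -> str:
    bad_if_increased = judge("increase") == "bad"
    bad_if_decreased = judge("decrease") == "bad"
    if bad_if_increased and bad_if_decreased:
        # Both directions are judged bad, so the current value is implicitly claimed
        # to be a local optimum; the burden of proof shifts to the status quo's defenders.
        return "suspect status quo bias unless the local optimum can be justified"
    return "no status quo bias indicated by this test"

# Example: an evaluator who opposes any change to the trait, in either direction.
print(reversal_test(lambda direction: "bad"))
```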
Bostrom's work also considers potential dysgenic effects in human populations, but he thinks genetic engineering can provide a solution, and that "In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot".[35]
Technology strategy
Bostrom has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[36]
In 2011, Bostrom founded the Oxford Martin Program on the Impacts of Future Technology.[37]
Bostrom's theory of the Unilateralist's Curse has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[38]
Awards
Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[39] Prospect Magazine listed Bostrom in their 2014 list of the World's Top Thinkers.[40]
Public engagement
Bostrom has provided policy advice and consulted for many governments and organizations. He gave evidence to the House of Lords Select Committee on Digital Skills.[41] He is an advisory board member for the Machine Intelligence Research Institute[42] and the Future of Life Institute,[43] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[44]
1996 email controversy
In January 2023, Bostrom issued an apology for a 1996 email in which he had stated that he thought "Blacks are more stupid than whites", and in which he also used the word "niggers" in a description of how he thought this statement might be perceived by others.[45] The apology, posted on his website,[46] stated that "the invocation of a racial slur was repulsive" and that he "completely repudiate[d] this disgusting email". In his apology, he wrote "I think it is deeply unfair that unequal access to education, nutrients and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity."[47]
In January 2023, Oxford University told The Daily Beast, "The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications."[45] In August 2023, the investigation concluded that "we do not consider [Bostrom] to be a racist or that [he holds] racist views, and we consider that the apology [he] posted in January 2023 was sincere."[48]
Selected works
Books
- 2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
- 2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN 978-0-19-857050-9
- 2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN 0-19-929972-2
- 2014 – Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2
Journal articles
- Bostrom, Nick (1998). "How Long Before Superintelligence?". Journal of Future Studies. 2.
- — (January 2000). "Observer-relative chances in anthropic reasoning?". Erkenntnis. 52 (1): 93–108. doi:10.1023/A:1005551304409. JSTOR 20012969. S2CID 140474848.
- — (October 2001). "The Meta-Newcomb Problem". Analysis. 61 (4): 309–310. doi:10.1111/1467-8284.00310. JSTOR 3329010.
- — (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 9 (1).
- — (April 2003). "Are You Living in a Computer Simulation?" (PDF). Philosophical Quarterly. 53 (211): 243–255. doi:10.1111/1467-9213.00309. JSTOR 3542867.
- — (2003). "The Mysteries of Self-Locating Belief and Anthropic Reasoning" (PDF). Harvard Review of Philosophy. 11 (Spring): 59–74. doi:10.5840/harvardreview20031114.
- — (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308–314. CiteSeerX 10.1.1.429.2849. doi:10.1017/S0953820800004076. S2CID 15860897.
- — (June 2005). "In Defense of Posthuman Dignity". Bioethics. 19 (3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
- with Tegmark, Max (December 2005). "How Unlikely is a Doomsday Catastrophe?". Nature. 438 (7069): 754. arXiv:astro-ph/0512204. Bibcode:2005Natur.438..754T. doi:10.1038/438754a. PMID 16341005. S2CID 4390013.
- — (2006). "What is a Singleton?". Linguistic and Philosophical Investigations. 5 (2): 48–54.
- with Ord, Toby (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Ethics. 116 (4): 656–680. doi:10.1086/505233. PMID 17039628. S2CID 12861892.
- with Sandberg, Anders (December 2006). "Converging Cognitive Enhancements" (PDF). Annals of the New York Academy of Sciences. 1093 (1): 201–207. Bibcode:2006NYASA1093..201S. CiteSeerX 10.1.1.328.3853. doi:10.1196/annals.1382.015. PMID 17312260. S2CID 10135931.
- — (January 2008). "Drugs can be used to treat more than disease" (PDF). Nature. 452 (7178): 520. Bibcode:2008Natur.451..520B. doi:10.1038/451520b. PMID 18235476. S2CID 4426990.
- — (2008). "The doomsday argument". Think. 6 (17–18): 23–28. doi:10.1017/S1477175600002943. S2CID 171035249.
- — (2008). "Where Are They? Why I hope the search for extraterrestrial life finds nothing" (PDF). Technology Review (May/June): 72–77.
- with Sandberg, Anders (September 2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" (PDF). Science and Engineering Ethics. 15 (3): 311–341. CiteSeerX 10.1.1.143.4686. doi:10.1007/s11948-009-9142-5. PMID 19543814. S2CID 6846531.
- — (2009). "Pascal's Mugging" (PDF). Analysis. 69 (3): 443–445. doi:10.1093/analys/anp062. JSTOR 40607655.
- with Ćirković, Milan; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Risk Analysis. 30 (10): 1495–1506. doi:10.1111/j.1539-6924.2010.01460.x. PMID 20626690. S2CID 6485564.
- — (2011). "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Review of Contemporary Philosophy. 10: 44–79. ProQuest 920893069.
- Bostrom, Nick (2011). "THE ETHICS OF ARTIFICIAL INTELLIGENCE" (PDF). Cambridge Handbook of Artificial Intelligence. Archived from the original (PDF) on 4 March 2016. Retrieved 13 February 2017.
- Bostrom, Nick (2011). "Infinite Ethics" (PDF). Analysis and Metaphysics. 10: 9–59.
- — (May 2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" (PDF). Minds and Machines. 22 (2): 71–84. doi:10.1007/s11023-012-9281-3. S2CID 7445963.
- with Armstrong, Stuart; Sandberg, Anders (November 2012). "Thinking Inside the Box: Controlling and Using Oracle AI" (PDF). Minds and Machines. 22 (4): 299–324. CiteSeerX 10.1.1.396.799. doi:10.1007/s11023-012-9282-2. S2CID 9464769.
- — (February 2013). "Existential Risk Reduction as Global Priority". Global Policy. 4 (3): 15–31. doi:10.1111/1758-5899.12002.
- with Shulman, Carl (February 2014). "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?" (PDF). Global Policy. 5 (1): 85–92. CiteSeerX 10.1.1.428.8837. doi:10.1111/1758-5899.12123.
- with Muehlhauser, Luke (2014). "Why we need friendly AI" (PDF). Think. 13 (36): 41–47. doi:10.1017/S1477175613000316. S2CID 143657841.
- Bostrom, Nick (September 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.
Notes
1. Bostrom says that the risk can be reduced if society sufficiently exits what he calls a "semi-anarchic default condition", which roughly means limited capabilities for preventive policing and global governance, and having individuals with diverse motivations.[16]
2. Bostrom notes that "the concept of a singleton is an abstract one: a singleton could be democracy, a tyranny, a single dominant AI, a strong set of global norms that include effective provisions for their own enforcement, or even an alien overlord—its defining characteristic being simply that it is some form of agency that can solve all major global coordination problems"[18]
References
1. Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention". The New Yorker. Vol. XCI, no. 37. pp. 64–79. ISSN 0028-792X.
2. "Infinite Ethics" (PDF). nickbostrom.com. Retrieved 21 February 2019.
3. "nickbostrom.com". Nickbostrom.com. Archived from the original on 30 August 2018. Retrieved 16 October 2014.
4. Shead, Sam (25 May 2020). "How Britain's oldest universities are trying to protect humanity from risky A.I." CNBC. Retrieved 5 June 2023.
5. "Nick Bostrom on artificial intelligence". Oxford University Press. 8 September 2014. Retrieved 4 March 2015.
6. "Best Selling Science Books". The New York Times. 8 September 2014. Retrieved 19 February 2015.
7. "Nick Bostrom on the birth of superintelligence". Big Think. Retrieved 14 August 2023.
8. Thornhill, John (14 July 2016). "Artificial intelligence: can we control it?". Financial Times. Archived from the original on 10 December 2022. Retrieved 10 August 2016. (subscription required)
9. Bostrom, Nick. "CV" (PDF).
10. Bostrom, Nick (2000). Observational selection effects and probability (PhD). London School of Economics and Political Science. Retrieved 25 June 2021.
11. Andersen, Ross. "Omens". Aeon Media Ltd. Archived from the original on 18 October 2015. Retrieved 5 September 2015.
12. Andersen, Ross (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic. Retrieved 6 July 2023.
13. Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: is a doomsday catastrophe likely?" (PDF). Nature. 438 (7069): 754. Bibcode:2005Natur.438..754T. doi:10.1038/438754a. PMID 16341005. S2CID 4390013. Archived from the original (PDF) on 3 July 2011.
14. Overbye, Dennis (3 August 2015). "The Flip Side of Optimism About Life on Other Planets". The New York Times. Retrieved 29 October 2015.
15. Bostrom, Nick (2018). The Vulnerable World Hypothesis.
16. Abhijeet, Katte (25 December 2018). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine.
17. Piper, Kelsey (19 November 2018). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 5 July 2023.
18. Bostrom, Nick (2016). Superintelligence. Oxford University Press. pp. 98–111. ISBN 978-0-19-873983-8. OCLC 943145542.
19. "Clever cogs". The Economist. ISSN 0013-0613. Retrieved 14 August 2023.
20. Bostrom, Nick (27 April 2015), What happens when our computers get smarter than we are?, retrieved 12 August 2023.
21. Khatchadourian, Raffi (16 November 2015). "The Doomsday Invention". The New Yorker. Retrieved 13 August 2023.
22. "Is Superintelligence Impossible? | Edge.org". www.edge.org. Retrieved 13 August 2023.
23. Etzioni, Oren (2016). "No, the Experts Don't Think Superintelligent AI is a Threat to Humanity". MIT Review.
24. Arul, Akashdeep (27 January 2022). "Yann LeCun sparks a debate on AGI vs human-level AI". Analytics India Magazine. Retrieved 14 August 2023.
25. "Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now—but 'A.I. godfather' says an existential threat is 'preposterously ridiculous'". Fortune. Retrieved 14 August 2023.
26. "Are You Living in a Computer Simulation?". www.simulation-argument.com. Retrieved 5 July 2023.
27. Jackson, Lauren (12 April 2023). "What if A.I. Sentience Is a Question of Degree?". The New York Times. ISSN 0362-4331. Retrieved 5 July 2023.
28. Fisher, Richard. "The intelligent monster that you should let eat you". www.bbc.com. Retrieved 5 July 2023.
29. Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy (PDF). New York: Routledge. pp. 44–58. ISBN 978-0-415-93858-7. Retrieved 22 July 2014.
30. "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
31. Nesbit, Jeff. "Proof of the Simulation Argument". US News. Retrieved 17 March 2017.
32. Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian.
33. Bostrom, Nick (12 June 2012). "The Fable of the Dragon-Tyrant". Philosophy Now. 89: 6–9.
34. Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656–679. doi:10.1086/505233. PMID 17039628. S2CID 12861892.
35. "Existential Risks: Analyzing Human Extinction Scenarios". nickbostrom.com. Retrieved 6 July 2023.
36. Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology – via Oxford Research Archive.
37. "Professor Nick Bostrom : People". Oxford Martin School. Archived from the original on 15 September 2018. Retrieved 16 October 2014.
38. Lewis, Gregory (19 February 2018). "Horsepox synthesis: A case of the unilateralist's curse?". Bulletin of the Atomic Scientists. Archived from the original on 25 February 2018. Retrieved 26 February 2018.
39. "The FP Top 100 Global Thinkers – 73. Nick Bostrom". Foreign Policy. 30 November 2009. Archived from the original on 21 October 2014.
40. Kutchinsky, Serena (23 April 2014). "World thinkers 2014: The results". Prospect. Retrieved 19 June 2022.
41. "Digital Skills Committee – timeline". UK Parliament. Retrieved 17 March 2017.
42. "Team – Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved 17 March 2017.
43. "Team – Future of Life Institute". Future of Life Institute. Retrieved 17 March 2017.
44. McBain, Sophie (4 October 2014). "Apocalypse Soon: Meet The Scientists Preparing For the End Times". New Republic. Retrieved 17 March 2017.
45. Ladden-Hall, Dan (12 January 2023). "Top Oxford Philosopher Nick Bostrom Admits Writing 'Disgusting' N-Word Mass Email". The Daily Beast. Retrieved 12 January 2023.
46. Bostrom, Nick. "Apology for old email" (PDF). nickbostrom.com. Retrieved 30 June 2023.
47. Bilyard, Dylan (15 January 2023). "Investigation Launched into Oxford Don's Racist Email". The Oxford Blue.
48. Bostrom, Nick. "Apology for old email" (PDF). nickbostrom.com. Retrieved 17 January 2024.