Nick Bostrom

Nick Bostrom (philosopher) |
---|---
Born | 10 March 1973, Helsingborg, Sweden
Nationality | Swedish
Alma mater | University of Gothenburg, Stockholm University, King's College London, London School of Economics
Interests | human enhancement • transhumanism • artificial intelligence
Description | Swedish philosopher who first attended the Bilderberg in 2019.
Dr. Nick Bostrom is a Swedish philosopher known for his work on the societal impact of future technology, AI and superintelligence, the ethics of human enhancement, and existential risk. He has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.[1]
He attended the Bilderberg for the first time in 2019.
Views
Existential risk
Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[2][3] He discusses existential risk, which he defines as a risk in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[4] and the Fermi paradox.[5][6]
In 2005, Bostrom founded the Future of Humanity Institute, which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.
Human vulnerability in relation to advances in AI
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind".[7] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[8] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi might collaterally cause nanotechnology manufacturing facilities to sprout over the entire Earth's surface and cover it within days. He believes the existential risk from a superintelligence would be immediate once it is brought into being, creating the exceedingly difficult problem of working out how to control such an entity before it actually exists.
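Superintelligence frames the growth of machine intelligence as optimization power divided by recalcitrance, and the takeoff claim is that once the system's own capability feeds back into that optimization power, progress compresses onto a digital time scale. The sketch below is only a toy numerical illustration of that feedback loop; the functional forms, constants and the simulate_takeoff helper are assumptions made for illustration, not Bostrom's model.

```python
# Toy takeoff model loosely based on the "optimization power / recalcitrance"
# framing in Superintelligence; all functional forms and constants here are
# illustrative assumptions, not Bostrom's own numbers.

def simulate_takeoff(steps=100, dt=1.0, human_baseline=100.0):
    capability = 1.0        # system capability, arbitrary units
    external_effort = 1.0   # constant human R&D input
    recalcitrance = 10.0    # assumed constant difficulty of further progress
    history = []
    for t in range(steps):
        # Recursive self-improvement: the system's own capability contributes
        # to the optimization power applied to improving it.
        optimization_power = external_effort + capability
        capability += dt * optimization_power / recalcitrance
        history.append((t, capability, capability > human_baseline))
    return history

if __name__ == "__main__":
    for t, capability, beyond_human in simulate_takeoff():
        if t % 10 == 0:
            flag = "  <- beyond the assumed human baseline" if beyond_human else ""
            print(f"t={t:3d}  capability={capability:12.1f}{flag}")
```

With these made-up parameters the system crawls for the first few dozen steps, crosses the arbitrary "human baseline" around the middle of the run, and ends orders of magnitude beyond where it started, which is the qualitative pattern the paragraph above describes.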
Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge on which it was based, Bostrom points to the lack of agreement among philosophers as an indication that most of them must be wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says a common assumption is that high intelligence would entail a "nerdy", unaggressive personality, but notes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI held in quarantine, the relatively unlimited means available to a superintelligence might make its analysis move along lines different from the evolved "diminishing returns" assessments that confer a basic aversion to risk in humans.[9] Group selection in predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions might be.[10] Accordingly, it cannot be discounted that a superintelligence would inevitably pursue an 'all or nothing' offensive strategy in order to achieve hegemony and assure its survival.[11] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[12]
Illustrative scenario for takeover
The scenario
A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[13] Keeping the AI isolated from the outside world, especially the Internet, humans pre-program it so that it always works from basic principles that will keep it under human control. Other safety measures include "boxing" the AI (running it in a virtual-reality simulation) and using it only as an 'oracle' that answers carefully defined questions with limited replies (to prevent it manipulating humans). A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science underlying its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but that actually function to free the superintelligence from its "boxed" isolation (the 'treacherous turn').[14]
Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a superintelligence would not be so stupid that humans could detect actual weaknesses in it.[15]
Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for the superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe could produce a worldwide flood of human-killing devices on command.[16] Once a superintelligence has achieved world domination (a 'singleton'), humankind would be relevant only as a resource for the achievement of the AI's objectives ("Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[17]
Countering the scenario
To counter or mitigate an AI achieving unified technological global supremacy, Bostrom cites revisiting the Baruch Plan[18] in support of a treaty-based solution[19] and advocates strategies such as monitoring[20] and greater international collaboration between AI teams[21] in order to improve safety and reduce the risks from the AI arms race. He recommends various control methods, including limiting the specifications of AIs to, e.g., oracular or tool-like (expert system) functions[22] and loading the AI with values, for instance by associative value accretion or value learning, e.g. by using the Hail Mary technique (programming an AI to estimate what other postulated cosmological superintelligences might want) or the Christiano utility-function approach (a mathematically defined human mind combined with a well-specified virtual environment).[23] To choose criteria for value loading, Bostrom adopts an indirect normativity approach and considers Yudkowsky's[24] coherent extrapolated volition concept, as well as moral rightness and forms of decision theory.[25]
Open letter, 23 principles of AI safety
In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute's open letter warning of the potential dangers of AI.[26] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today." Cutting-edge AI researcher Demis Hassabis then met with Hawking, after which Hawking did not mention "anything inflammatory about AI", which Hassabis took as 'a win'.[27] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of AI.[28] Hassabis suggested that the main safety measure would be an agreement that whichever AI research team began to make strides toward an artificial general intelligence would halt its project until the control problem was completely solved before proceeding.[29] Bostrom had pointed out that even if the crucial advances required the resources of a state, such a halt by a lead project might motivate a lagging country to mount a catch-up crash program, or even to physically destroy the project suspected of being on the verge of success.[30]
Critical assessments
In 1863, Samuel Butler's essay "Darwin among the Machines" predicted the domination of humanity by intelligent machines, but Bostrom's suggestion of a deliberate massacre of all humankind is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike". As set out in his later book, From Bacteria to Bach and Back, philosopher Daniel Dennett's views remain in contradistinction to those of Bostrom. Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is "possible in principle" to create "strong AI" with human-like comprehension and agency, but maintains that the difficulties of any such "strong AI" project as posited by Bostrom's "alarming" work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[31] Dennett thinks the only relevant danger from AI systems is falling into anthropomorphism instead of challenging or developing human users' powers of comprehension.[32] Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans' supremacy, environmentalist James Lovelock has moved far closer to Bostrom's position, and in 2018 he said that he thought the overthrow of humankind would happen within the foreseeable future.[33][34]
Anthropic reasoning
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[35]
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
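A standard toy case (in the spirit of Bostrom's "presumptuous philosopher" example; the numbers and the posteriors function below are illustrative assumptions) makes the divergence concrete: two hypotheses are equally probable a priori and differ only in how many observers they contain, with every observer in the same epistemic situation. SSA on its own leaves the prior untouched, whereas SIA shifts nearly all the probability to the observer-rich hypothesis.

```python
# Toy contrast between the Self-Sampling Assumption (SSA) and the
# Self-Indication Assumption (SIA) for two equally probable hypotheses
# that differ only in how many observers exist. Illustrative numbers only.

def posteriors(prior_small=0.5, n_small=1, n_large=10**12):
    prior_large = 1.0 - prior_small

    # SSA (reference class: observers indistinguishable from me): the evidence
    # "I am some observer" has probability 1 under both hypotheses, so the
    # priors are unchanged.
    ssa = {"small world": prior_small, "large world": prior_large}

    # SIA: weight each hypothesis by the number of observers it contains,
    # then renormalise.
    w_small = prior_small * n_small
    w_large = prior_large * n_large
    total = w_small + w_large
    sia = {"small world": w_small / total, "large world": w_large / total}
    return ssa, sia

if __name__ == "__main__":
    ssa, sia = posteriors()
    print("SSA posterior:", ssa)   # stays at 50/50
    print("SIA posterior:", sia)   # ~1.0 for the observer-rich hypothesis
```

Printing the two posteriors gives 0.5/0.5 under SSA and essentially 0/1 under SIA, which is why the two assumptions generate different paradoxes in the thought experiments Bostrom discusses.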
In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[36] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
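A short Monte-Carlo sketch can illustrate the effect (a generic toy model, not Bostrom's own; the rates and parameters are assumptions): catastrophes arrive at a fixed per-epoch rate and each one wipes out the lineage leading to observers with some probability, so the observers who do exist look back on a record that systematically under-counts catastrophes.

```python
import random

# Anthropic-shadow toy model (illustrative assumptions, not Bostrom's numbers):
# catastrophes occur at a fixed per-epoch rate, and each one destroys the
# lineage leading to observers with probability p_destroy. Observers who do
# exist look back on a record that under-counts catastrophes.

def anthropic_shadow(true_rate=0.10, p_destroy=0.9, epochs=50,
                     worlds=100_000, seed=0):
    rng = random.Random(seed)
    survivor_estimates = []
    for _ in range(worlds):
        catastrophes = 0
        observers_exist = True
        for _ in range(epochs):
            if rng.random() < true_rate:
                catastrophes += 1
                if rng.random() < p_destroy:
                    observers_exist = False
                    break
        if observers_exist:
            # Rate the surviving observers would naively infer from their record.
            survivor_estimates.append(catastrophes / epochs)
    observed = sum(survivor_estimates) / len(survivor_estimates)
    return true_rate, observed, len(survivor_estimates)

if __name__ == "__main__":
    true_rate, observed, survivors = anthropic_shadow()
    print(f"true per-epoch catastrophe rate:      {true_rate:.3f}")
    print(f"rate inferred by surviving observers: {observed:.3f}")
    print(f"worlds with observers: {survivors} of 100000")
```

With the parameters above, surviving observers infer a rate roughly an order of magnitude below the true one, which is the kind of bias the statistical corrections are meant to remove.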
Simulation argument
Bostrom's simulation argument posits that at least one of the following statements is very likely to be true (see the sketch after the list):[37][38]
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
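The trilemma rests on a simple piece of bookkeeping. Assuming the fraction formula from the linked paper, with f_P the fraction of human-level civilizations that reach a posthuman stage, f_I the fraction of posthuman civilizations that run ancestor-simulations, and N the average number of such simulations run by an interested civilization, the fraction of observers with human-type experiences who live in simulations is f_sim = f_P·f_I·N / (f_P·f_I·N + 1). The snippet below simply evaluates that expression for a few illustrative parameter choices (the values are assumptions):

```python
# Sketch of the bookkeeping behind the trilemma, assuming the fraction formula
# from Bostrom's simulation-argument paper:
#     f_sim = f_P * f_I * N / (f_P * f_I * N + 1)
# The parameter values tried below are illustrative assumptions.

def fraction_simulated(f_p, f_i, n):
    x = f_p * f_i * n
    return x / (x + 1)

if __name__ == "__main__":
    cases = [
        ("almost no civilization reaches posthumanity", 1e-9, 0.5, 1e6),
        ("almost none are interested in simulations",   0.5, 1e-9, 1e6),
        ("neither fraction is tiny",                    0.1, 0.1, 1e6),
    ]
    for label, f_p, f_i, n in cases:
        print(f"{label}: f_sim = {fraction_simulated(f_p, f_i, n):.6f}")
```

Unless f_P or f_I is extremely small (propositions 1 or 2), any appreciable number of ancestor-simulations drives f_sim toward one (proposition 3).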
Ethics of human enhancement
Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[39] and is a critic of bio-conservative views.[40]
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[41] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[42]
With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[43]
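Read as a procedure, the test can be sketched in a few lines of code (a minimal sketch assuming only the informal statement above; the Objection fields and function name are hypothetical): if an objector holds that moving the trait in either direction would be bad, yet offers no argument that the current value is optimal, the test flags likely status quo bias.

```python
# Minimal sketch of the reversal test as a decision procedure, based on the
# informal description above; the data fields and names are hypothetical.

from dataclasses import dataclass

@dataclass
class Objection:
    increase_is_bad: bool          # objector says raising the trait's value is bad
    decrease_is_bad: bool          # objector also says lowering it is bad
    has_optimality_argument: bool  # objector can explain why the current value is best

def reversal_test(obj: Objection) -> str:
    if obj.increase_is_bad and obj.decrease_is_bad and not obj.has_optimality_argument:
        return "suspect status quo bias: burden of proof shifts to the objector"
    if obj.has_optimality_argument:
        return "objection may be substantive: the current value is argued to be optimal"
    return "no status quo bias indicated by this test"

if __name__ == "__main__":
    print(reversal_test(Objection(increase_is_bad=True,
                                  decrease_is_bad=True,
                                  has_optimality_argument=False)))
```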
Technology strategy
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44]
Bostrom's theory of the Unilateralist's Curse[45] has been cited as a reason for the scientific community to avoid controversial and dangerous research such as reanimating pathogens.[46]
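The mechanism behind the curse can be shown with a short simulation (a generic sketch of the effect described in the paper; the Gaussian error model and the numbers are assumptions): each actor independently estimates the shared value of the same action and acts if its own estimate is positive, so the chance that at least one actor goes ahead rises with the number of actors, even when the action's true value is negative.

```python
import random

# Toy illustration of the unilateralist's curse: one action has the same true
# (negative) value for everyone, each actor sees that value plus independent
# noise, and any single actor can trigger the action unilaterally.
# The error model and parameters are illustrative assumptions.

def p_action_taken(true_value=-1.0, noise_sd=1.0, n_actors=1,
                   trials=100_000, seed=0):
    rng = random.Random(seed)
    taken = 0
    for _ in range(trials):
        estimates = (true_value + rng.gauss(0, noise_sd) for _ in range(n_actors))
        if any(e > 0 for e in estimates):   # one over-optimistic actor suffices
            taken += 1
    return taken / trials

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(f"{n:2d} independent actors -> action taken with "
              f"probability {p_action_taken(n_actors=n):.3f}")
```

With a true value of -1 and unit noise, a single actor proceeds roughly 16% of the time, while twenty independent actors proceed about 97% of the time, which is the sense in which unilateral action is systematically over-produced.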
Event Participated in
Event | Start | End | Location(s) | Description |
---|---|---|---|---|
Bilderberg/2019 | 30 May 2019 | 2 June 2019 | Montreux, Switzerland | The 67th Bilderberg Meeting
References
- ↑ https://www.fhi.ox.ac.uk/team/nick-bostrom/
- ↑ http://www.jetpress.org/volume9/risks.html
- ↑ http://aeon.co/magazine/philosophy/ross-andersen-human-extinction/
- ↑ https://www.webarchive.org.uk/wayback/archive/20110703062301/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/5923/How_Unlikely_is_a_Doomsday_Catastrophe_plus_Supplementary_Materials.pdf
- ↑ http://www.nickbostrom.com/extraterrestrial.pdf
- ↑ https://www.nytimes.com/2015/08/04/science/space/the-flip-side-of-optimism-about-life-on-other-planets.html
- ↑ https://philpapers.org/rec/THONBS
- ↑ Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies.
- ↑ Bostrom, Nick (2016). Superintelligence, pp. 104–108.
- ↑ Bostrom, Nick (2016). Superintelligence, pp. 138–142.
- ↑ Bostrom, Nick (2016). Superintelligence, pp. 126–130.
- ↑ Bostrom, Nick. Superintelligence, pp. 135–142.
- ↑ Bostrom, Nick. Superintelligence, pp. 115–118.
- ↑ Bostrom, Nick. Superintelligence, pp. 103–116.
- ↑ Bostrom, Nick. Superintelligence, pp. 98–111.
- ↑ The Guardian, 12 June 2016: Artificial intelligence: ‘We’re like children playing with a bomb’ – https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine
- ↑ Bostrom, Nick. Superintelligence.
- ↑ Bostrom, Nick. Superintelligence, p. 88.
- ↑ Bostrom, Nick. Superintelligence, pp. 180–184.
- ↑ Bostrom, Nick. Superintelligence, pp. 84–86.
- ↑ Bostrom, Nick. Superintelligence, pp. 86–87.
- ↑ Bostrom, Nick. Superintelligence, Chapter 10: Oracles, genies, sovereigns, tools.
- ↑ Bostrom, Nick. Superintelligence, Chapter 12: Acquiring values.
- ↑ https://intelligence.org/files/CEV.pdf
- ↑ Bostrom, Nick. Superintelligence, Chapter 12: Choosing the criteria for choosing.
- ↑ Robotics Today: http://www.roboticstoday.com/news/open-letter-from-the-future-of-life-3103 (accessed 17 March 2017)
- ↑ The Guardian, 16 February 2016: The superhero of artificial intelligence: can this genius keep it in check? – https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago
- ↑ https://www.businessinsider.com/google-deepmind-demis-hassabis-worries-ai-superintelligence-coordination-2017-2
- ↑ Business Insider, 26 February 2017: The CEO of Google DeepMind is worried that tech giants won't work together at the time of the intelligence explosion
- ↑ Bostrom, Nick. Superintelligence, pp. 95–109.
- ↑ Dennett, Daniel C. (2018). From Bacteria to Bach and Back: The Evolution of Minds. Penguin Books. ISBN 978-0-14-197804-8, pp. 399–400.
- ↑ Dennett, Daniel C. (2018). From Bacteria to Bach and Back: The Evolution of Minds. Penguin Books. ISBN 978-0-14-197804-8, pp. 399–403.
- ↑ The Guardian, Caspar Henderson, 17 July 2014: Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock – review
- ↑ The Independent, 8 August 2018: Leading environmental thinker suggests humans might have had their day – https://www.independent.co.uk/life-style/gadgets-and-tech/news/james-lovelock-climate-change-global-warming-fire-california-ai-artificial-intelligence-a8482851.html
- ↑ http://www.anthropic-principle.com/sites/anthropic-principle.com/files/pdfs/anthropicbias.pdf
- ↑ http://www.nickbostrom.com/papers/anthropicshadow.pdf
- ↑ http://www.simulation-argument.com/simulation.html
- ↑ https://www.usnews.com/news/blogs/at-the-edge/2012/12/17/proof-of-the-simulation-argument
- ↑ http://cyber.law.harvard.edu/cyberlaw2005/sites/cyberlaw2005/images/Transhumanist_Perspective.pdf
- ↑ Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics 19(3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
- ↑ https://www.theguardian.com/science/2006/may/09/academicexperts.genetics
- ↑ https://web.archive.org/web/20141021111122/http://www.foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30
- ↑ http://www.nickbostrom.com/ethics/statusquo.pdf
- ↑ http://www.nickbostrom.com/existential/risks.html
- ↑ https://nickbostrom.com/papers/unilateralist.pdf
- ↑ https://thebulletin.org/horsepox-synthesis-case-unilateralist%E2%80%99s-curse11523