
Philosophical Debates About AI Risks Are a Distraction

Recent internal turmoil at OpenAI, one of the most closely watched organizations in AI development, has brought an ideological battle into the mainstream. Clear details about why CEO Sam Altman was fired, then rehired, haven’t emerged yet, but one debate at the company is thought to be a contributor: the battle between effective altruism and effective accelerationism.

On one side, adherents of effective altruism—or EA—believe that AI potentially represents an existential threat to humanity and must be among the highest priorities for regulators. On the other side, believers in effective accelerationism—often shorthanded as e/acc—believe an AI revolution is inevitable and must be embraced. Each side believes the other is dangerous, and even cultish.

Neither position is useful for crafting public policy.

AI has potentially significant implications for the workforce, public health, and the creation and spread of disinformation, among other areas. Careful, objective study can help policymakers and the public understand how to harness the benefits of AI while also mitigating its risks. But the EA versus e/acc debate is not a useful framework for such analyses, because while the two positions may be opposed, both start from assumptions rather than questions.

The e/acc position, for example, treats emerging technologies like AI as forces that will, on average, inevitably bring humanity more benefits than harm. This view downplays the long history of unintended risks that stem from new technologies. Social media, for example, made it easier for us to connect with our friends and family, but it also made it easier for foreign adversaries to amplify falsehoods and for extremists to promote violence online.

On the flip side, effective altruists claim that artificial general intelligence may cause “significant harm” to humanity. But that claim, at least so far, remains unsubstantiated, which raises the question: What is the evidence that AI could indeed significantly harm humanity? Thought experiments such as the “paperclip maximizer,” which highlights the potential danger of a superintelligent AI system, suggest that even innocent applications of AI could result in mass devastation. But such hypotheticals do not reflect the many ways AI is actually used in the real world right now. They are nothing more than sensationalized marketing—a way to make AI seem more important and more destructive than it already is. The time the public and regulators spend on fanciful scenarios distracts from what is important and destructive about AI in the present.

Decisionmakers should be gathering hard evidence before drawing conclusions. Without it, they run the risk of solving purely theoretical problems that may not even need solving. There is clear evidence of problems from AI in the here and now: deaths involving automated driving systems, as well as harms tied to labor relations, warfighting, child pornography, racial bias, and the spread of falsehoods online. How should society prioritize these problems against longer-term threats from AI?

Smart, successful, evidence-based policymaking would start with a question such as: Without further government intervention, what are the long-term risks of this specific form or use of AI? As with all scientific research, answering such a question requires a dispassionate approach, drawing lessons from history as well as from new inquiry. Those inquiries give rise to frameworks and theories that evolve over time as new evidence arises. No answer should be assumed ex ante to be correct, and no answer should be taken ex post as eternal truth.

Nuclear weapons offer an illustrative historical comparison. Much like AI, nukes have generated rhetoric regarding their potential threats and use by bad actors. To be sure, the destructive potential of atomic bombs is horrifically clear, whereas artificial general intelligence is not yet even a reality, much less a proven risk to humanity. But the policy debate that followed the detonation of the first nuclear bomb centered on how the technology could be used, and how its worst use cases could be avoided.

To answer these questions, researchers—at RAND, where we both now work—developed new frameworks for understanding human psychology in high-stakes strategic settings. The logic was first to understand how nuclear-armed nations might act in different settings, and then to devise policies and regulatory incentives that would result in the best of all possible worlds, specifically a world in which countries did not use nuclear weapons.

One of the frameworks researchers developed to think about how best to craft policy and regulate nuclear arms and power was game theory. The prisoner’s dilemma is a famous scenario within that framework; it helps explain how individuals will act when they can’t be sure whether others are trustworthy. Today, it is often assumed that the prisoner’s dilemma was developed to model the Cold War, and that the scenario predicted the principle of mutually assured destruction (MAD). In this retelling of history, game theorists took a stance analogous to that of the EA and e/acc crowds: they assumed MAD was inevitable and treated it as such in their models.

In fact, that telling of history is false. The prisoner’s dilemma arose as an experiment at RAND to test whether a particular outcome (the Nash equilibrium, in which no player can singlehandedly improve his or her own payoff) predicted real-world human behavior. Only later did researchers start using game theory to model nuclear relations, and only then did the Nash equilibrium come to be interpreted as a scenario of mutually assured destruction. In other words, game theorists started by trying to understand the world in a way that was centered on human behavior and psychology. Only after they had grounded their work in how the world—and the people in it—actually behaved did researchers realize that what they were learning about human behavior and psychology could be used to address high-stakes, real-world policy problems around nuclear arms.
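To make the concept concrete, here is a minimal sketch in Python of a one-shot prisoner’s dilemma, using standard textbook payoff values chosen purely for illustration (they are not drawn from the original RAND experiments). It enumerates the four possible outcomes and checks which are Nash equilibria, that is, outcomes in which neither player can raise their own payoff by unilaterally changing their move.

```python
from itertools import product

ACTIONS = ("cooperate", "defect")

# PAYOFFS[(row_move, col_move)] = (row player's payoff, column player's payoff)
# Illustrative textbook values, not data from the original experiments.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # row player is exploited
    ("defect",    "cooperate"): (5, 0),  # row player exploits
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def is_nash_equilibrium(row_move, col_move):
    """True if neither player can improve their own payoff by switching alone."""
    row_payoff, col_payoff = PAYOFFS[(row_move, col_move)]
    row_can_improve = any(PAYOFFS[(alt, col_move)][0] > row_payoff for alt in ACTIONS)
    col_can_improve = any(PAYOFFS[(row_move, alt)][1] > col_payoff for alt in ACTIONS)
    return not (row_can_improve or col_can_improve)

equilibria = [moves for moves in product(ACTIONS, repeat=2) if is_nash_equilibrium(*moves)]
print(equilibria)  # [('defect', 'defect')] -- mutual defection is the only equilibrium
```

Under these payoffs, mutual defection is the only equilibrium, which is why the scenario later lent itself to being read as a model of Cold War standoffs, even though that is not what it was originally built to show.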

Back then, game theorists adopted the logic of evidence-based policy, starting not from assumptions, but from questions. AI researchers today must do the same. Otherwise, the esoteric philosophical debate about the supposed true nature of AI will sideline the search for evidence to solve the very real problems afflicting society today. The result might be self-fulfilling prophecies that seemingly prioritize far-fetched AI threats over extant threats like child exploitation, car accidents, and online disinformation.


James Marrone is an economist and Marek N. Posard is a sociologist at RAND; both are affiliate faculty members at the Pardee RAND Graduate School.

This commentary originally appeared on Barron’s on December 15, 2023. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.

Source: rand.org
