AI doomers are a ‘cult’
Andreessen Horowitz partner Marc Andreessen. (Justin Sullivan | Getty Images)
Venture capitalist Marc Andreessen is known for saying that "software is eating the world." When it comes to artificial intelligence, he argues people should stop worrying and build, build, build.

On Tuesday, Andreessen published a nearly 7,000-word missive laying out his views on AI, the risks it poses and the regulation he believes it requires. In trying to counteract all the recent talk of "AI doomerism," he presents what could be seen as an overly idealistic perspective on the technology's implications.
'Doesn't want to kill you'
Andreessen starts off with an accurate take on AI, or machine learning, calling it "the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it."

AI isn't sentient, he says, despite the fact that its ability to mimic human language can understandably fool some into believing otherwise. It's trained on human language and finds high-level patterns in that data.

"AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote. "And AI is a machine – is not going to come alive any more than your toaster will."

Andreessen writes that there's a "wall of fear-mongering and doomerism" in the AI world right now. Without naming names, he's likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity. Last week, Microsoft founder Bill Gates, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis and others signed a letter from the Center for AI Safety about "the risk of extinction from AI."
Tech CEOs are motivated to promote such doomsday views because they "stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition," Andreessen wrote.

Many AI researchers and ethicists have also criticized the doomsday narrative. One argument is that too much focus on AI's growing power and its future threats distracts from the real-life harms that some algorithms cause to marginalized communities right now, rather than in an unspecified future. But that's where most of the similarities between Andreessen and the researchers end.

Andreessen writes that people in roles like AI safety expert, AI ethicist and AI risk researcher "are paid to be doomers, and their statements should be processed appropriately." In actuality, many leaders in the AI research, ethics, and trust and safety community have voiced clear opposition to the doomer agenda and instead focus on mitigating the technology's documented present-day risks.

Rather than acknowledging any of those documented real-life risks – AI's biases can infect facial recognition systems, bail decisions, criminal justice proceedings, mortgage approval algorithms and more – Andreessen claims AI could be "a way to make everything we care about better." He argues that AI has huge potential for productivity, scientific breakthroughs, the creative arts and reducing wartime death rates.

"Anything that people do with their natural intelligence today can be done much better with AI," he wrote. "And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel."
From doomerism to idealism
Though AI has made significant strides in many areas, such as vaccine development and chatbot services, the technology's documented harms have led many experts to conclude that, for certain applications, it should never be used. Andreessen describes these fears as irrational "moral panic."

He also promotes reverting to the tech industry's "move fast and break things" approach of yesteryear, writing that both big AI companies and startups "should be allowed to build AI as fast and aggressively as they can" and that the tech "will accelerate very quickly from here – if we let it."

Andreessen, who gained prominence in the 1990s for developing the first popular internet browser, started his venture firm with Ben Horowitz in 2009. Two years later, he wrote an oft-cited blog post titled "Why software is eating the world," which argued that health care and education were due for "fundamental software-based transformation," just as so many industries before them.

Eating the world is exactly what many people fear when it comes to AI.

Beyond just trying to tamp down those concerns, Andreessen says there's work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms. "Governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society's defensive capabilities," he said.

In Andreessen's own idealist future, "every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." He expresses similar visions for AI's role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander.
Is China the real threat?
Source: CNBC