For the Love of God, Stop Making Inscrutable Doomsday Clocks

A Saudi-backed business school in Switzerland has launched a Doomsday Clock to warn the world about the harms of “uncontrolled artificial general intelligence,” what it calls a “god-like” AI. Imagine if the people selling offices on Excel spreadsheets in the 1980s had tried to tell workers that the software was a pathway to birthing a god, using a ticking Rolex to do it, and you’ll have an idea of what we’re dealing with here.

Michael Wade—the clock’s creator and a TONOMUS Professor of Strategy and Digital at IMD Business School in Lausanne, Switzerland, and Director of the TONOMUS Global Center for Digital and AI Transformation (good lord)—unveiled the clock in a recent op-ed for TIME.

A clock ticking down to midnight is a once-powerful and now stale metaphor from the atomic age. It’s an image so old and staid that it just celebrated its 75th anniversary. After America dropped nukes on Japan, some of the researchers and scientists who’d worked on developing the weapon formed the Bulletin of the Atomic Scientists.

Their project has been to warn the world of its impending destruction. The Doomsday Clock is one of the ways they do it. Every year experts in various fields—from nuclear weapons to climate change to, yes, artificial intelligence—gather and discuss just how screwed the world is. Then they set the clock. The closer to midnight, the closer humanity is to its doom. Right now it’s at 90 seconds to midnight, the closest the clock has ever been set.

Wade and IMD have no relation to the Bulletin of the Atomic Scientists, and the Doomsday Clock is its own thing. Wade’s creation is the AI Safety Clock. “The Clock’s current reading—29 minutes to midnight—is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks,” he said in his TIME article. “While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must stay alert and engaged.”

Silicon Valley’s loudest AI proponents love to lean into the nuclear metaphor. OpenAI CEO Sam Altman compared the work of his company to the Manhattan Project. Senator Edward J. Markey (D-MA) wrote that America’s rush to embrace AI is similar to Oppenheimer’s pursuit of the atomic bomb. Some of this fear and concern might be genuine but it’s all marketing at the end of the day.

We’re in the middle of a hype cycle around AI. Companies are promising it can deliver unprecedented returns and slash labor costs. Machines, they say, will soon do everything for us. The reality is that AI is useful but mostly moves labor and production costs to other parts of the chain, where the end user doesn’t see them.

The fear of AI becoming so advanced that it wipes out humanity is just another kind of hype. Doomerism about word calculators and predictive modeling systems is one more way to get people excited about the possibilities of this technology and mask the real harm it creates.

At a recent Tesla event, robot bartenders poured drinks for attendees. By all appearances, they were controlled remotely by humans. LLMs consume enormous amounts of water and electricity when coming up with answers and often rely on the subtle and constant attention of human “trainers” who labor in poor countries for a pittance. Humans use the tech to flood the internet with non-consensually created nude images of other humans. These are just a few of the real-world harms already caused by Silicon Valley’s rapid embrace of AI.

And as long as you’re afraid of Skynet coming to life and wiping out humanity in the future, you aren’t paying attention to the problems right in front of you. The Bulletin’s Doomsday Clock may be inscrutable on the surface, but there’s an army of impressive minds behind the metaphor churning out work every day about the real risks of nuclear weapons and new technologies.

In September, the Bulletin put a picture of Altman in an article debunking hyperbolic claims about how AI might be used to engineer new bioweapons. “For all the doomsaying, there are actually many uncertainties in how AI will affect bioweapons and the wider biosecurity arena,” the article said.

It also stressed that talk of extreme scenarios around AI helps people avoid having more difficult conversations. “The challenge, as it has been for more than two decades, is to avoid apathy and hyperbole about scientific and technological developments that impact biological disarmament and efforts to keep biological weapons out of the war plans and arsenals of violent actors,” the Bulletin said. “Debates about AI absorb high-level and community attention and … they risk an overly narrow threat focus that loses sight of other risks and opportunities.”

There are dozens of articles like this published every year by the people who run the Doomsday Clock. The Swiss AI clock has no such scientific backing, though its FAQ claims it monitors articles like these.

What it has, instead, is money from Saudi Arabia. Wade’s position at the school is possible thanks to funding from TONOMUS, a subsidiary of NEOM. NEOM is Saudi Arabia’s much-hyped city of the future that it’s attempting to build in the desert. Among the other promises of NEOM are robot dinosaurs, flying cars, and a giant artificial moon.

You’ll forgive me if I don’t take Wade or the AI Safety Clock seriously.

Source: Gizmodo.com