OpenAI’s warnings about risky AI are mostly just marketing

A powerful new AI called o1 is the most dangerous that OpenAI has ever released, the firm claims – but who are these warnings for, asks Chris Stokel-Walker.

OpenAI has announced a new AI model, enigmatically dubbed o1, that it describes as even more capable than anything that has come before – and even more dangerous. But before you start worrying about the machine apocalypse, it is worth thinking about what purpose such warnings serve.

OpenAI CEO Sam Altman has warned about the dangers of AI (Image: Chona Kasinger/Bloomberg via Getty Images)


While previous models such as GPT-4 were considered “low” risk for public release under OpenAI’s in-house framework, its new o1 model is the first to be rated “medium” risk on half of the criteria that framework assesses. OpenAI bases these ratings on tests in which people deliberately try to make an AI break its rules, in an effort to foresee what “bad actors” may attempt once the model is publicly released. Its policy is to release only models rated medium risk or lower – suggesting that o1 is worryingly close to the limit.

Bear in mind, we have been here before. In February 2019, OpenAI announced it had developed GPT-2, then a cutting-edge large language model (LLM), but said it wouldn’t release the full version “due to our concerns about malicious applications of the technology”. Nine months later, those concerns had apparently evaporated, as OpenAI released GPT-2 and the world didn’t end.


Certainly, some AI researchers fear artificial superintelligence – the point at which AI chatbots become smarter than humans, leaving humanity in thrall to them rather than the other way around. But much of this “doomer” attitude to AI is driven by the belief that the catastrophic impacts of a rogue AI would be so dire that even a small chance of them happening justifies extreme action – nations should “be willing to destroy a rogue datacenter by airstrike”, according to leading AI safety proponent Eliezer Yudkowsky.


On the other end of the scale, researchers such as Emily Bender at the University of Washington, co-author of a seminal 2021 paper on LLMs, argue that the technology is nothing more than a “stochastic parrot”, repeating its training data back to us with no original thought. Bender and like-minded researchers worry more about what goes into making these models, and the biases they can amplify when used at a large scale.


The release of ChatGPT in 2022 thrust this AI safety discourse into the limelight. The UK and US set up AI safety institutes in response to the step change in technology, while the then UK prime minister Rishi Sunak rushed to host a global summit on AI safety.

OpenAI CEO Sam Altman has savvily exploited this attention, spending the past few years walking a fine line between promising utopia and warning of apocalypse, as if those were the only two options. Simply put, OpenAI’s warnings are a form of marketing. Saying your latest flagship model is right on the cusp of being too intelligent to release publicly is good for business – and as OpenAI is reportedly seeking to raise money for further development at a valuation of $150 billion, the firm could use some good business. It is also worth noting that o1 is categorically not GPT-5, the much-awaited next iteration of the firm’s LLM, whose development is progressing more slowly than some had anticipated. In fact, OpenAI has said o1 often performs worse than GPT-4o, its latest mainstream model, at certain tasks.


Of course, OpenAI could choose to be less transparent and share no details of its models or the accompanying safety concerns – but lifting the curtain on its self-marked homework is itself self-serving. Explaining that your models are risky, but that you have addressed and contained those risks, helps you get ahead of regulation and steers politicians’ attention away from other issues where companies are less keen on transparency.


For example, AI models consume huge amounts of energy and natural resources. Quite how much, we don’t know: the companies refuse to say. The data these models are trained on is often biased and used without permission. None of these issues has received as much scrutiny as the existential risk posed by AI development – perhaps by design.


So, when you read AI companies’ talk of “medium” risks and models butting up against the line of acceptable danger, consider that warnings about consumer products fall broadly into two categories. There are those required by regulation, such as cautions about misusing medicines or cleaning products, and then there are those that serve to thrill or titillate, like warnings that a rollercoaster might be too intense. With a malevolent superintelligent AI still a sci-fi dream/nightmare, OpenAI’s announcement falls into the latter category – it wants us scared, but not so scared that we won’t use its products.


Chris Stokel-Walker is a freelance technology journalist
