AI safety criticism from OpenAI researchers aimed at a rival has opened a window into the industry's struggle: a battle against itself.
It started with a warning from Boaz Barak, a Harvard professor currently on leave to work on safety at OpenAI. He called the launch of xAI's Grok model "completely irresponsible." Not for its headline-grabbing antics, but for what was missing: a public system card, detailed safety evaluations, the basic artifacts of transparency that have become fragile norms.
It was a clear and necessary call. But a candid reflection posted by former OpenAI engineer Calvin French-Owen, just three weeks after he left the company, tells the other half of the story.
French-Owen's account suggests that many people at OpenAI really are working on safety, focused on very real threats such as hate speech, bioweapons, and self-harm. But he offers a telling insight: "Most of the work done is not published," he writes, adding that OpenAI "should really do more."
This is where the simple story of a good actor calling out a bad one falls apart. Instead, we see an industry-wide dilemma laid bare. The entire AI industry is caught in a "safety-velocity paradox": a deep structural conflict between the need to move at breakneck speed to stay competitive and the moral need to move with caution to keep us safe.
French-Owen suggests that OpenAI is in a state of controlled chaos, having tripled its headcount to more than 3,000 in a single year. That chaotic energy is channelled by the immense pressure of a "three-horse race" to AGI against Google and Anthropic. The result is an incredible culture of speed, but also one of secrecy.
Consider the creation of Codex, OpenAI's coding agent. French-Owen called the project a "mad-dash sprint" in which a small team built a groundbreaking product from scratch in just seven weeks.
It is a textbook example of velocity. He describes working most nights until midnight to make it happen; that is the human cost of that speed. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?
This paradox was not born of malice; it grew out of a set of powerful, interlocking forces.
There is the obvious competitive pressure to be first. There is the cultural DNA of these labs, which began as loose groups of "scientists and tinkerers" and still prize world-changing breakthroughs over methodical process. And there is a simple problem of measurement: it is easy to quantify speed and capability, but exceptionally difficult to quantify a disaster that was prevented.
In today's boardrooms, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. Moving forward, however, cannot be about pointing fingers. It must be about changing the fundamental rules of the game.
We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence.
But most of all, we need to cultivate a culture inside AI labs where every engineer, not just the safety department, feels a sense of responsibility.
The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not simply be the fastest company, but the one that proves to the world that ambition and responsibility can, and must, move forward together.
(Photo: Olamigok Jr.)
Reference: Military AI contracts awarded to Anthropic, OpenAI, Google, and xAI