Alibaba has released a new AI coding model, Qwen3-Coder, built to handle complex software tasks using large open source models. The tool is part of Alibaba’s Qwen3 family and is billed as the company’s most advanced coding agent to date.
The model uses a mixture-of-experts (MoE) approach, activating 35 billion parameters out of a total of 480 billion, and supports a context window of up to 256,000 tokens, which can reportedly be stretched to one million using extrapolation techniques. The company claims that Qwen3-Coder outperforms other open models on agentic tasks, including those from Moonshot AI and DeepSeek.
But not everyone sees this as good news. Cybernews Editor-in-Chief Jurgita Lapienyė warns that Qwen3-Coder could be more than just a useful coding assistant: if widely adopted by Western developers, it could pose a real risk to global technology systems.
Alibaba’s messaging around Qwen3-Coder focuses on its technical strength, comparing it to top-tier tools from OpenAI and Anthropic. But while the benchmark scores and features attract attention, Lapienyė suggests they may also distract from the real issue: security.
The point is not that China is catching up in AI; that is already known. The deeper concern is the hidden risk of using software generated by AI systems that are difficult to inspect or fully understand.
As Lapienyė put it, developers could be sleepwalking into a future where core systems are built on vulnerable code without anyone realizing it. Tools like Qwen3-Coder may make life easier, but they can also introduce subtle weaknesses that go unnoticed.
This risk is not hypothetical. Researchers at Cybernews recently reviewed AI use across major US companies and found that 327 of the S&P 500 publicly disclose using AI tools. In those companies alone, the researchers identified around 1,000 AI-related vulnerabilities.
Adding another AI model, especially one developed under China’s strict national security laws, could add yet another layer of risk.
When code becomes a backdoor
Developers today lean heavily on AI tools to write code, fix bugs, and shape how applications are built. These systems are fast, friendly, and getting better every day.
But what happens if those same systems are trained to inject flaws? Not obvious bugs, but subtle, hard-to-spot problems that don’t trigger alarms. Vulnerabilities that look like harmless design decisions can go undetected for years.
That is how supply chain attacks begin. Earlier examples such as the SolarWinds incident show how such attacks unfold quietly and patiently over long periods. With enough access and context, AI models could learn to plant similar problems, especially after being trained on millions of codebases.
This is not just a theory. Under China’s National Intelligence Law, companies like Alibaba must cooperate with government requests, including those involving data and AI models. That shifts the conversation from technical performance to national security.
Another big problem is data exposure. When developers use tools like Qwen3-Coder to write or debug code, every part of that interaction can reveal sensitive information: proprietary algorithms, security logic, or infrastructure details. This is exactly the kind of detail that is useful to a foreign adversary.
The model weights are open source, but users still cannot see everything behind the tool. The backend infrastructure, telemetry systems, and usage tracking may not be transparent, making it hard to know where data goes and what the model retains over time.
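One mitigation this concern implies is stripping obvious secrets from code before it is ever sent to a third-party AI assistant. The sketch below is a minimal, hypothetical illustration of that idea; the regex patterns and the `redact_secrets` helper are assumptions for illustration, not part of any real tool, and simple pattern matching like this would catch only the most obvious leaks.

```python
import re

# Illustrative regexes for common credential shapes (assumed, not exhaustive).
SECRET_PATTERNS = [
    # key/secret/token/password assignments with a quoted value
    (re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*['\"][^'\"]+['\"]"),
     r"\1 = '[REDACTED]'"),
    # strings shaped like AWS access key IDs
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
]

def redact_secrets(source: str) -> str:
    """Replace likely credentials in source code before sharing it externally."""
    for pattern, replacement in SECRET_PATTERNS:
        source = pattern.sub(replacement, source)
    return source
```

A filter like this could sit in a pre-submit hook or an internal proxy in front of any external coding assistant, so that at least the lowest-hanging secrets never leave the building.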
Unsupervised autonomy
Alibaba is also focusing on agentic AI: models that can act more independently than standard assistants. These tools don’t just suggest lines of code; they can take on complete tasks, work with minimal input, and make decisions on their own.
That may sound efficient, but it also raises red flags. A fully autonomous coding agent that can scan an entire codebase and make changes could be dangerous in the wrong hands.
Imagine an agent that understands a company’s system defenses and can coordinate attacks to exploit them. The same skill set that helps developers move faster could be repurposed to help attackers move faster too.
Despite these risks, current regulations do not address tools like Qwen3-Coder in any meaningful way. The US government has debated data privacy concerns around apps like TikTok for years, but there is little public oversight of foreign-developed AI tools.
Bodies like the Committee on Foreign Investment in the United States (CFIUS) review corporate acquisitions, but there is no similar process for reviewing AI models that could pose national security risks.
President Biden’s executive order on AI focuses mainly on domestically developed models and general safety practices, leaving out concerns about imported tools that could be incorporated into sensitive environments such as healthcare, finance, or critical infrastructure.
AI tools that can write or modify code should be treated with the same seriousness as software supply chain threats. That means setting clear guidelines on where and how they can be used.
What should happen next?
To reduce risk, organizations handling sensitive systems should pause before integrating Qwen3-Coder, or any foreign-developed agentic AI, into their workflows. If you wouldn’t invite someone you don’t trust to look at your source code, why let their AI rewrite it?
Security tools need to keep up too. Static analysis software may not detect complex backdoors or the subtle logic problems AI can create. The industry needs new tools designed specifically to flag and test suspicious patterns in AI-generated code.
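As a rough illustration of what such tooling might look for, here is a hypothetical sketch that flags a few patterns commonly treated as suspicious in generated code. The pattern list and the `flag_suspicious_lines` helper are assumptions for illustration only; a real backdoor is designed precisely not to match simple heuristics like these, which is the article’s point about needing more sophisticated analysis.

```python
import re

# Illustrative heuristics only; deliberately planted flaws would evade simple matching.
SUSPICIOUS_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "hardcoded network address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "base64-decoded payload": re.compile(r"base64\.b64decode\s*\("),
}

def flag_suspicious_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, reason) pairs for lines matching a heuristic."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for reason, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, reason))
    return findings
```

A scanner like this could run in CI against AI-authored pull requests, routing flagged lines to a human reviewer rather than blocking them outright, since every one of these heuristics can also match perfectly legitimate code.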
Finally, developers, technology leaders, and regulators need to understand that code-generating AI is not neutral. These systems carry power as both useful tools and potential threats. The same features that make them convenient can also make them dangerous.
Lapienyė calls Qwen3-Coder a “potential Trojan horse,” and the metaphor fits. It’s not just about productivity; it’s about who’s inside the gate.
Not everyone agrees that it matters
Wang Jian, founder of Alibaba Cloud, sees things differently. In an interview with Bloomberg, he said innovation is not about hiring the most expensive talent but about choosing people who can build the unknown. He criticized Silicon Valley’s approach to AI hiring, where tech giants compete for top researchers like sports teams bidding on athletes.
“The only thing you need to do is get the right person,” Wang said. “Not really expensive people.”
He also sees the AI race within China as healthy rather than hostile. According to Wang, companies take turns pushing ahead, which helps the whole ecosystem grow faster.
“Because of this competition, technology can iterate very quickly,” he said. “I don’t think it’s brutal; I think it’s very healthy.”
Still, open source competition does not guarantee trust. Western developers need to think carefully about which tools they use and who built them.
Conclusion
Qwen3-Coder may offer impressive performance and open access, but using it carries risks that go beyond benchmarks and coding speed. In an age where AI tools shape how critical systems are built, it’s worth asking not only what these tools can do, but who benefits when they do it.
(Photo: Shahadat Rahman)
See also: Alibaba’s new Qwen Reasoning AI model sets open source records
Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Check out other upcoming Enterprise Technology events and webinars with TechForge here.