Decart uses AWS Trainium3 for real-time video generation

Amazon Web Services has scored another big win for its custom AWS Trainium accelerator after signing a deal with AI video startup Decart. As part of the partnership, Decart will optimize its flagship model Lucy on AWS Trainium3 to support real-time video generation, highlighting the growing popularity of custom AI accelerators as alternatives to Nvidia’s graphics processing units.

Decart will essentially be fully powered by AWS, and as part of the deal, its models will be available through the Amazon Bedrock platform. Developers can integrate Decart’s real-time video generation capabilities into almost any cloud application without worrying about the underlying infrastructure.

Distribution through Bedrock strengthens AWS’s plug-and-play offering and demonstrates Amazon’s confidence in the growing demand for real-time AI video. It should also expand Decart’s reach and increase its adoption among developers. AWS Trainium gives Lucy the additional processing power it needs to produce high-fidelity video without sacrificing quality or latency.

Custom AI accelerators like Trainium provide an alternative to Nvidia’s GPUs for AI workloads. Nvidia still dominates the AI market, with its GPUs processing the majority of those workloads, but it faces a growing threat from custom processors.

Why all the fuss about AI accelerators?

AWS Trainium isn’t the only choice for developers. Google’s Tensor Processing Unit (TPU) line and Meta’s Training and Inference Accelerator (MTIA) chips are other examples of custom silicon, and each offers similar advantages over Nvidia’s GPUs. These chips are ASICs (application-specific integrated circuits): as the name suggests, ASIC hardware is designed to handle one specific type of application and to do so more efficiently than a general-purpose processor.

While central processing units are often described as the Swiss Army knife of computing because they can handle many kinds of applications, GPUs are more like powerful electric drills. They deliver far more raw throughput than CPUs and are designed to handle large amounts of repetitive, parallel computation, making them well suited to AI workloads and graphics rendering.

If a GPU is a power drill, an ASIC is more like a scalpel designed for extremely precise procedures. When building an ASIC, chip designers strip out every functional unit unrelated to the target task, dedicating the entire chip to it and increasing efficiency.

This yields significant performance and energy-efficiency gains over GPUs, which helps explain their growing popularity. A good example is Anthropic, which is partnering with AWS on Project Rainier, a massive cluster of hundreds of thousands of Trainium2 processors.

Anthropic says Project Rainier provides hundreds of exaflops of computing power to run cutting-edge AI models, including Claude Opus 4.5.

AI coding startup Poolside also uses AWS Trainium2 for model training and plans to use the infrastructure for inference in the future. Meanwhile, Anthropic is hedging its bets and is also considering training future Claude models on clusters of up to 1 million Google TPUs. Meta Platforms is reportedly working with Broadcom to develop a custom AI processor to train and run Llama models, and OpenAI has similar plans.

Advantages of Trainium

Decart chose AWS Trainium2 for the performance needed to hit the low latency that real-time video models require. Lucy’s time to first frame is 40 ms, meaning it starts generating video almost immediately after receiving a prompt. By streamlining video processing on Trainium, Lucy can match the quality of slower, more established video models like OpenAI’s Sora 2 and Google’s Veo 3, with Decart producing output at up to 30 fps.
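
As a rough sketch of what those figures imply, the Python snippet below works out the per-frame time budget a real-time model must meet at 30 fps. The 40 ms and 30 fps numbers come from the article; the variable names and helper function are purely illustrative, not part of any Decart or AWS API.

    # Minimal sketch: per-frame time budget for real-time video generation.
    # The 40 ms time-to-first-frame and 30 fps figures come from the article;
    # everything else here is illustrative.

    def frame_budget_ms(fps: float) -> float:
        """Time available to generate each frame, in milliseconds."""
        return 1000.0 / fps

    TIME_TO_FIRST_FRAME_MS = 40.0   # Lucy's reported startup latency
    TARGET_FPS = 30.0               # Lucy's reported sustained output rate

    budget = frame_budget_ms(TARGET_FPS)
    print(f"Per-frame budget at {TARGET_FPS:.0f} fps: {budget:.1f} ms")
    print(f"Time to first frame: {TIME_TO_FIRST_FRAME_MS:.0f} ms")
    # At 30 fps the model must deliver a frame roughly every 33 ms to keep the
    # stream real-time; only the very first frame gets the 40 ms allowance.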

Decart believes Lucy will keep improving. As part of the deal with AWS, the company received early access to the newly announced Trainium3 processor, which enables output of up to 100 fps at lower latency. “Trainium3’s next-generation architecture delivers higher throughput, lower latency, and better memory efficiency, enabling up to four times faster frame generation at half the cost of GPUs,” Decart co-founder and CEO Dean Leitersdorf said in a statement.
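
Taken at face value, those multipliers translate into a large gap in cost per frame. The short Python sketch below plugs the quoted “four times faster” and “half the cost” figures into a simple cost-per-frame calculation; the GPU baseline values are arbitrary placeholders for illustration, not published benchmarks.

    # Sketch of the cost-per-frame math implied by the quote. The baseline
    # values are arbitrary placeholders; only the 4x and 0.5x multipliers matter.

    gpu_fps = 25.0                  # hypothetical GPU frame rate
    gpu_cost_per_hour = 1.0         # normalized GPU cost (arbitrary units)

    trainium3_fps = gpu_fps * 4.0                        # "four times faster frame generation"
    trainium3_cost_per_hour = gpu_cost_per_hour * 0.5    # "half the cost of GPUs"

    def cost_per_frame(cost_per_hour: float, fps: float) -> float:
        """Cost to generate a single frame at a given hourly cost and frame rate."""
        return cost_per_hour / (fps * 3600.0)

    ratio = cost_per_frame(trainium3_cost_per_hour, trainium3_fps) / \
            cost_per_frame(gpu_cost_per_hour, gpu_fps)
    print(f"Relative cost per frame (Trainium3 / GPU): {ratio:.3f}")  # 0.125, i.e. ~8x cheaper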

Nvidia may not be too worried about custom AI processors. The AI chip giant is reportedly designing its own ASICs to rival those of its cloud competitors. Moreover, each type of chip has its own strengths, so ASICs won’t completely replace GPUs. The flexibility of GPUs means they remain the go-to option for general-purpose models like GPT-5 and Gemini 3 and the mainstream choice for AI training, while many AI applications have fixed, well-defined processing requirements that make them particularly well suited to running on ASICs.

The rise of custom AI processors is expected to have a major impact on the industry. By pushing chip design toward deeper customization and better performance for specialized applications, it sets the stage for a new wave of AI innovation, with real-time video at the forefront.

Photograph courtesy of AWS re:Invent
