News & Updates
Amazon to use Nvidia technology in new AI chips as it prepares major server rollout
Amazon Web Services announced on Tuesday that it will integrate Nvidia technology into its next generation of artificial intelligence chips, deepening the long-standing partnership between the two companies as demand for AI computing continues to accelerate worldwide.
The cloud provider said the new chips will support Nvidia’s latest graphics processing units and networking systems, enabling faster model training and more energy-efficient inference for businesses using Amazon’s cloud-based platforms. The move marks a significant shift in Amazon’s chip strategy, which has traditionally relied on its in-house Trainium and Inferentia processors for AI workloads.
According to AWS executives, Nvidia’s technology will be incorporated into a new line of servers scheduled to roll out globally in the second half of 2025. The servers will be equipped with advanced accelerators designed to handle increasingly complex AI models used in areas such as autonomous systems, generative AI, and scientific research. Amazon said the servers will initially launch in selected data centres in the United States before expanding to Europe and Asia.
The decision comes as major technology companies race to secure enough high-performance chips to support the rapid growth of AI-based products and services. Nvidia has emerged as the dominant supplier of AI processors, and its hardware remains in short supply due to unprecedented global demand. By expanding its partnership with Nvidia, AWS aims to ensure that large enterprise customers can continue to scale their AI ambitions without facing hardware shortages.
Swami Sivasubramanian, vice president for data and AI at AWS, said the collaboration would give customers more flexibility in choosing the right computing options for their workloads. He added that AWS would continue developing its in-house chips but acknowledged that Nvidia’s ecosystem offers unmatched performance for certain tasks.
Industry analysts said the announcement underscores how cloud providers are increasingly blending in-house technology with specialised hardware from third-party chipmakers to meet customer expectations. As AI models grow larger and more computationally intensive, companies are prioritising reliability and speed over attempts to maintain fully independent chip architectures.
Nvidia chief executive Jensen Huang welcomed the expanded partnership, saying the new AWS servers will allow developers to deploy generative AI applications at greater scale. He also noted that the collaboration strengthens Nvidia’s position in the cloud computing market, where competition among Amazon, Microsoft and Google remains intense.
Amazon did not disclose the expected volume of Nvidia-based chips it plans to deploy but said it is investing billions of dollars to upgrade data centre infrastructure. The company is also expanding its renewable energy portfolio to support the increased power demands of AI computing.
With the integration of Nvidia technology and the introduction of new servers, AWS is positioning itself to remain a leading provider of cloud-based AI tools at a time when businesses worldwide are rapidly accelerating their adoption of machine learning and generative AI.
