AMD Pensando Pollara 400 AI NIC Launched: The First UEC 1.0 Network Card for Data Centers

Last October, AMD launched the industry’s first network card built to the UEC 1.0 specification, the AMD Pensando Pollara 400 AI NIC, for HPC and AI data centers. The card reduces the complexity of performance tuning, helps shorten time to results, and is expected to deliver up to a six-fold performance improvement for AI workloads. Beyond performance, it is also expected to improve the scalability and reliability of AI infrastructure, making it better suited to large-scale deployment.

AMD has now announced that the Pensando Pollara 400 AI NIC is available and shipping to customers. AMD says it is committed to preserving customer choice by offering easily scalable solutions within an open ecosystem, reducing total cost of ownership without sacrificing performance.

Although the Ultra Ethernet Consortium (UEC) postponed the release of its 1.0 specification from the third quarter of 2024 to the first quarter of 2025, the AMD Pensando Pollara 400 AI NIC already supports it and is the industry’s first network card to meet the UEC 1.0 specification.

The Ultra Ethernet Consortium currently has 97 members, up from 55 in March 2024. The UEC 1.0 specification extends ubiquitous Ethernet technology to meet the performance demands and characteristics of AI and HPC workloads: on the one hand, it reuses as much existing technology as possible to maintain cost efficiency and interoperability; on the other, it maximizes efficiency through dedicated, purpose-built protocols.

The AMD Pensando Pollara 400 AI NIC is a 400 GbE UEC product built around a network processor designed by AMD’s Pensando division (acquired for $1.9 billion in April 2022). It is designed to optimize HPC and AI data center networks, with a programmable hardware pipeline, programmable RDMA transport, programmable congestion control, and communication library acceleration to maximize AI cluster utilization, reduce latency, and keep communication between CPUs and GPUs uninterrupted.
