News
Thanks to hardware-offloaded TCP/IP, iWARP RDMA NICs offer high-performance, low-latency RDMA functionality and native integration with today's large Ethernet-based networks and clouds.
Aimed at high-data-rate AI networks, the AMD Pensando Pollara 400 network interface card is the first NIC compliant with the ...
RDMA first became widely adopted in the supercomputing space with InfiniBand, then expanded into enterprise markets, and is now spreading across Ethernet networks. RDMA over Converged ...
AMD ships Pollara 400 AI NIC with Ultra Ethernet support and plans Vulcano 800G AI NIC for PCIe Gen6 clusters in 2026.
Designed for space-constrained smart factories, Lucid Vision Labs Inc. has launched an industrial vision camera family aimed ...
One backer estimated an RDMA block would require only 1 mm² of silicon using a 130-nm process. Adaptec and other companies have already launched Ethernet TOE cards without RDMA costing as much as ...
UEC has published the Ultra Ethernet 1.0 specification. The new standard is explicitly intended for AI clusters and the HPC ...
SAN MATEO, Calif. — A consortium of companies with interests in data center systems has completed initial work on a standard for remote direct memory access (RDMA) over Ethernet networks. The RDMA ...
This will be the first demonstration of Chelsio's T5 40G storage technology — a converged interconnect solution that simultaneously supports all of the networking, cluster and storage protocols.
AMD deploys its first Ultra Ethernet-ready network card — Pensando Pollara provides up to 400 Gbps performance.
Oracle Cloud Infrastructure will be the first major cloud provider to deploy AMD's Instinct MI350X GPUs and Pensando Pollara 400GbE Ultra Ethernet NICs as part of a massive zettascale AI cluster.
AMD has unveiled the Pensando Pollara 400, a fully programmable 400 Gigabit per second (Gbps) RDMA Ethernet-ready network interface card (NIC) designed to support AI cluster networking.
According to Dell’Oro Group, the majority of switch ports deployed in AI back-end networks will be 800G Ethernet by 2025 and 1.6-terabit Ethernet by 2027. AI Driving Evolution ...