GIGABYTE Demonstrates the Future of Computing at Supercomputing 2023 with Advanced Cooling and Scaled Data Centers
Server platforms feature next-gen AI processors from NVIDIA
November 14, 2023 - Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, continues to lead in cooling IT hardware efficiently and in developing diverse server platforms for Arm and x86 processors, as well as AI accelerators. At SC23, GIGABYTE (booth #355) will showcase standout platforms, including systems for the NVIDIA GH200 Grace Hopper Superchip and the next-gen AMD Instinct™ APU. To introduce its extensive server lineup, GIGABYTE will address the most pressing needs in supercomputing data centers: cooling high-performance IT hardware efficiently and powering AI capable of real-time analysis and fast time to results.
Advanced Cooling
For many data centers, it is becoming apparent that their cooling infrastructure must radically shift to keep pace with new IT hardware that continues to generate more heat and requires rapid heat transfer. Because of this, GIGABYTE has launched advanced cooling solutions that allow IT hardware to maintain ideal performance while improving energy efficiency within the same data center footprint. At SC23, its booth will have a single-phase immersion tank, the A1P0-EA0, which offers a one-stop immersion cooling solution. GIGABYTE is experienced in implementing immersion cooling, with immersion-ready servers, immersion tanks, oil, tools, and services spanning the globe. Another cooling solution showcased at SC23 will be direct liquid cooling (DLC), and in particular, the new GIGABYTE cold plates and cooling modules for the NVIDIA Grace CPU Superchip, NVIDIA Grace Hopper Superchip, AMD EPYC™ 9004 processor, and 4th Gen Intel® Xeon® processor.
Modularized AI & HPC Systems
GIGABYTE has been deploying GIGA PODs at leading cloud service providers and has the know-how to assist data centers with scaled infrastructure. These turnkey solutions (or pods) are composed of eight racks holding 32 GIGABYTE G593 nodes, for a total of 256 NVIDIA H100 Tensor Core GPUs that can achieve 1 exaflop (one quintillion floating-point operations per second) of FP8 AI performance. At the GIGABYTE booth is a G593-SD0 server built for Intel Xeon processors and NVIDIA H100 GPUs, the same platform GIGABYTE used for its most recent MLPerf benchmark submission testing AI workloads. Continuing the modular theme are high-density nodes that use Arm-based processors and support NVMe drives and NVIDIA BlueField-3 DPUs: the 2U H263-V11 has two nodes for the NVIDIA Grace CPU Superchip, and the H223-V10 is built for the NVIDIA Grace Hopper Superchip. The last standout system, publicly revealed for the first time, is the GIGABYTE G383-R80 GPU server, purpose-built for the next-gen AMD Instinct™ APU, the new generation of AMD Instinct accelerators that shows great promise for AI workloads.
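The exaflop figure above can be sanity-checked with back-of-envelope arithmetic. This sketch assumes each H100 SXM GPU delivers roughly 3,958 TFLOPS of peak FP8 compute (the figure NVIDIA publishes with sparsity) and that the 32 G593 nodes each carry eight GPUs; those assumptions are not stated in the release itself.

```python
# Back-of-envelope check of the GIGA POD FP8 performance claim.
# Assumption: ~3,958 TFLOPS peak FP8 per H100 SXM GPU (with sparsity),
# and 8 GPUs per G593 node -- neither figure is given in the release.
H100_FP8_TFLOPS = 3958          # assumed per-GPU peak, in TFLOPS
NODES = 32                      # 32 G593 nodes across eight racks
GPUS_PER_NODE = 8               # assumed GPUs per G593 node

total_gpus = NODES * GPUS_PER_NODE
total_exaflops = total_gpus * H100_FP8_TFLOPS / 1e6  # TFLOPS -> EFLOPS

print(total_gpus)                 # 256
print(round(total_exaflops, 2))   # ~1.01, i.e. about 1 exaflop of FP8
```

Under these assumptions the pod lands at just over 1 exaflop, matching the stated figure.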
Scalable Data Center Infrastructure
GIGABYTE’s G493-SB0 is an NVIDIA-Certified system for NVIDIA L4 Tensor Core and L40 GPUs, with room for eight PCIe Gen5 GPUs and expansion slots for NVIDIA BlueField and ConnectX networking technologies. In the future, it will be officially known as an NVIDIA OVX system. Additionally, following the NVIDIA MGX modular design is the new XH23-VG0. It features a single NVIDIA Grace Hopper Superchip with full-height, full-length (FHFL) expansion slots for accelerated, giant-scale AI and HPC applications.
Enterprise Computing
GIGABYTE customers have come to expect bold designs that cater to specific workloads and markets. The first new enterprise server, the S183-SH0, is a slim 1U form factor with dual Intel Xeon processors that supports 32 E1.S form-factor solid-state drives for a fast, dense storage configuration. Another server supporting E1.S drives is the H253-Z10, a multi-node server with front access. Two G293 GPU servers are tailored to AI training and AI inference workloads. The G293-Z43 is an inference specialist that can support sixteen Alveo™ V70 accelerators in four well-cooled GPU cages. For an optimally priced GPU server, GIGABYTE offers the G293-Z23, which supports higher-TDP CPUs and PCIe Gen4 and Gen5 GPUs such as the NVIDIA L40S.
NVIDIA H200 Tensor Core GPU
At SC23, NVIDIA announced the NVIDIA H200 Tensor Core GPU with enhanced memory performance, which GIGABYTE will support with upcoming server models.
The NVIDIA H200 GPU supercharges generative AI and HPC with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200 GPU’s faster, larger memory fuels the acceleration of generative AI and LLMs while advancing scientific computing for HPC workloads. The NVIDIA HGX H200, the world’s leading AI computing platform, features the H200 GPU for the fastest performance. An eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications.
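The 1.1TB aggregate-memory figure follows directly from the per-GPU capacity. This quick check assumes each H200 GPU carries 141 GB of HBM3e, per NVIDIA's launch announcement; the per-GPU figure is not stated in the release itself.

```python
# Quick check of the eight-way HGX H200 aggregate memory figure.
# Assumption: 141 GB of HBM3e per H200 GPU, per NVIDIA's announcement.
HBM3E_PER_GPU_GB = 141
GPUS = 8  # eight-way HGX H200 baseboard

total_gb = GPUS * HBM3E_PER_GPU_GB
total_tb = total_gb / 1000  # decimal TB, as typically quoted in marketing

print(total_gb)             # 1128
print(round(total_tb, 1))   # ~1.1 TB of aggregate high-bandwidth memory
```

Eight GPUs at 141 GB each gives 1,128 GB, which rounds to the quoted 1.1TB.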
For more information, see a complete list of GIGABYTE DLC servers, immersion servers, and immersion tanks.