GPUltima-CI

Call for price

GPUltima-CI is a cutting-edge, flexible high-performance computing solution for mixed-workload data centers. This flexibility makes it an invaluable solution for HPC applications such as AI, deep learning, image processing and scientific modeling. GPUltima-CI makes the data center workload-centric: the hardware adapts to the needs of the applications, rather than applications adapting to limited hardware.

SKU: GPUltima-CI

Description

GPUltima-CI features the flexibility of disaggregated composable infrastructure, which increases GPU accelerator utilization in mixed-workload data centers. With composable infrastructure, unused GPU, storage and network resources from one application are automatically released to resource-hungry applications on other server nodes, resulting in higher overall resource utilization.
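
To illustrate the idea (this is a toy model, not the actual Liqid fabric management API), the sketch below shows a hypothetical rack-level resource pool in Python: releasing GPUs from one node returns them to the shared pool, where another node can immediately claim them.

    # Illustrative sketch only: a toy model of composable resource pooling.
    # Class and method names are hypothetical and do not represent the
    # Liqid Grid management API.
    from dataclasses import dataclass, field

    @dataclass
    class RackPool:
        gpus_free: int = 48                          # shared GPU pool in one GPUltima-CI rack
        nodes: dict = field(default_factory=dict)    # node name -> GPUs currently attached

        def compose(self, node: str, gpus: int) -> None:
            # Attach GPUs from the shared pool to a compute node.
            if gpus > self.gpus_free:
                raise ValueError("not enough free GPUs in the pool")
            self.gpus_free -= gpus
            self.nodes[node] = self.nodes.get(node, 0) + gpus

        def release(self, node: str) -> None:
            # Return a node's GPUs to the pool for other workloads to claim.
            self.gpus_free += self.nodes.pop(node, 0)

    pool = RackPool()
    pool.compose("training-node-01", 16)   # deep learning job grabs 16 GPUs
    pool.release("training-node-01")       # job finishes; GPUs return to the pool
    pool.compose("inference-node-07", 4)   # another workload claims 4 of them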

The GPUltima-CI is a power-optimized rack that can be configured with up to 32 dual Intel Xeon Scalable processor compute nodes, 64 network adapters, 48 NVIDIA® Volta™ GPUs and 32 NVMe drives on a 128Gb PCIe switched fabric, and can support tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution provides the resources needed to compose any combination of GPU, NIC and storage required in today's mixed-workload data center.
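
As a rough illustration of where "tens of thousands of configurations" comes from, consider only how many GPUs, NICs and NVMe drives a single node is granted. The per-node caps below are assumptions chosen for the example, not published constraints:

    # Back-of-the-envelope count of per-node compositions. The per-node caps
    # (16 GPUs, 2 NICs, 4 NVMe drives) are illustrative assumptions only.
    gpu_options = 16 + 1     # 0..16 GPUs
    nic_options = 2 + 1      # 0..2 NICs
    nvme_options = 4 + 1     # 0..4 NVMe drives

    per_node = gpu_options * nic_options * nvme_options
    print(per_node)          # 255 distinct resource mixes for a single node

    # With 32 nodes drawing jointly from the shared rack pools, the number of
    # distinct rack-wide allocations grows combinatorially, well beyond tens
    # of thousands.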

Features

  • Single or multiple 19” racks
  • Up to 128 PCIe or SXM2 NVIDIA Volta GPUs
  • Up to 96 PCIe or U.2 NVMe drives
  • Up to 32 dual Intel Xeon Scalable Processor nodes
  • Up to 32 100Gb Infiniband or Ethernet NICs
  • Liqid Grid PCIe fabric interconnect with up to 48 ports
  • Power distribution and cooling up to 52kW per rack

Specifications:

Rack
  • 42U tall 1200mm traditional rack or Scale Matrix DDC
  • Also available in 24U, 44U and 48U tall versions
  • Supports OSS GPU Accelerators, NVMe Flash Storage Arrays, PCIe fabric switches and quad-node servers
Compute Accelerators
    • 3U SCA8000 8-way SXM2 V100 expansion with up to four 128Gb PCIe fabric connections
    • 4U EB3600 8-way PCIe V100 expansion with up to four 128Gb PCIe fabric connections
    • Half 4U EB3450 4-way PCIe V100 expansion with up to two 128Gb PCIe fabric connections
GPUs

SXM2 V100 with NVLink

  • 5,120 CUDA cores, 640 Tensor cores
  • 7.8 Tflops Double-Precision
  • 15.7 Tflops Single-Precision
  • 125 Tflops Tensor Performance
  • 300GB/s bi-directional interconnect bandwidth
  • 16GB HBM2 memory
  • 300 watts

PCIe V100

  • 5,120 CUDA cores, 640 Tensor cores
  • 7 Tflops Double-Precision
  • 14 Tflops Single-Precision
  • 112 Tflops Tensor Performance
  • 32GB/s bi-directional interconnect bandwidth
  • 16GB HBM2 memory
  • 250 watts
Flash Storage Arrays
  • 2U FSAe-2 24-way U.2 NVMe JBOF with up to two 128Gb PCIe fabric connections
  • 4U 4UV 16-way PCIe NVMe JBOF with up to two 128Gb PCIe fabric connections
NVMe Drives

PCIe SN260

  • 6.4TB, 3 DW/day
  • PCIe 3.0 x8, 64Gb/s
  • Max Read (128KB): 6.17GB/s
  • Max Write (128KB): 2.2GB/s
  • Random Read IOPS (4KB): 1,200,000
  • Random Write IOPS (4KB): 200,000
  • Write Latency (512B): 20µs

U.2 SN200

  • 6.4TB, 3 DW/day
  • PCIe 3.0 x4, 32Gb/s
  • Max Read (128KB): 3.35GB/s
  • Max Write (128KB): 2.1GB/s
  • Random Read IOPS (4KB): 835,000
  • Random Write IOPS (4KB): 200,000
  • Write Latency (512B): 20µs
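
A quick arithmetic check relates the drive and fabric figures above. Treating each 128Gb fabric connection as roughly a PCIe 3.0 x16-class link (~16GB/s) is an approximation, but it shows that a fully populated 24-drive FSAe-2 can source far more read bandwidth than its two fabric links carry, so host-visible throughput is bounded by the fabric attachment rather than the drives:

    # Rough throughput arithmetic for the 24-drive FSAe-2 JBOF, using the
    # figures above. Treating a 128Gb fabric connection as ~16 GB/s is an
    # approximation (PCIe 3.0 x16-class link).
    drives = 24
    per_drive_read_gb_s = 3.35        # GB/s, U.2 SN200 max read (128KB)
    fabric_links = 2
    per_link_gb_s = 16.0              # GB/s per 128Gb PCIe fabric connection

    drive_side = drives * per_drive_read_gb_s      # ~80.4 GB/s
    fabric_side = fabric_links * per_link_gb_s     # ~32 GB/s

    print(f"aggregate drive read bandwidth: {drive_side:.1f} GB/s")
    print(f"fabric attachment bandwidth:    {fabric_side:.1f} GB/s")
    # The fabric links, not the drives, set the ceiling on host-visible throughput.
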
Servers
  • 2U, 4-node, dual Intel Xeon Scalable Processor server. Each node contains:
  • Dual Socket P (LGA 3647) “Skylake” CPUs with up to 28 cores and 3.2GHz
  • Up to 2TB ECC DDR4-2666MHz
  • Two Gen 3 x16 PCIe expansion slots
  • Six 2.5” SATA3 SSDs
  • IPMI, dual USB 3.0 and Disk-on-module support
Infiniband Switch
  • Mellanox 36 port Infiniband switch
  • EDR 100Gb/s, QSFP connectors
  • 1U form factor
Composable Infrastructure Management
  • Liqid Grid managed switch array
  • Up to 8U, 96 ports
  • 128Gbps PCIe fabric per port
  • Fail-over and multi-topology support
  • 1Gb Management port with Xeon D-1548 management CPU
Infiniband Interface Card
  • Mellanox ConnectX-5
  • EDR 100Gb/s, QSFP connectors
  • Single or dual port available
  • One card per server
Power Distribution Unit
  • Tripp-Lite Monitored PDU
  • 27.6kW power
  • Input: 380/400V 3 phase, 63A
  • Power monitoring via display and Ethernet
  • 110kW total power ~ 97% over-provisioned
Cables
  • Copper network and fabric cables inside each rack
  • Fiber Infiniband and PCIe fabric cables up to 100m available for multi-rack installations
Software: OS, Frameworks and Libraries
  • Operating Systems: CentOS, Ubuntu, SUSE, Windows
  • NVIDIA CUDA drivers
  • Optional pre-installed deep learning frameworks:
    • Torch
    • Caffe2
    • Theano
    • TensorFlow
  • Optional pre-installed deep learning libraries:
    • MLPython
    • cuDNN
    • DIGITS
    • Caffe on Spark
    • NCCL
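
As a quick sanity check after GPUs have been composed to a node, a short script like the one below confirms that the attached devices are visible. It is shown with PyTorch as one example of the optional frameworks and assumes the NVIDIA CUDA driver stack above is installed:

    # Minimal sketch: verify that GPUs composed to this node are visible.
    # Assumes the NVIDIA CUDA drivers listed above and PyTorch are installed.
    import torch

    if torch.cuda.is_available():
        count = torch.cuda.device_count()
        print(f"{count} GPU(s) visible")
        for i in range(count):
            props = torch.cuda.get_device_properties(i)
            print(f"  GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
    else:
        print("No CUDA-capable GPUs visible; check fabric composition and drivers")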

Downloads:

Datasheet


Specifications

Dimensions

7” H x 17.2” W (19” with rack ears) x 18.5” D
CPUs

Dual Intel® Xeon® Scalable Processors up to 205W TDP and 28 cores (Skylake, Cascade Lake, Cascade Lake-X)

LGA 3647 Socket P with 3 UPI chip-to-chip links at up to 10.7GT/s

System Memory

16x 288-pin DDR4 DIMM sockets

Up to 4TB DDR4-2933MHz 3DS ECC RDIMM or LRDIMM, 1.2V low profile

2933/2666/2400/2133MHz frequencies; 64GB, 128GB and 256GB capacities per module

Up to 2TB Intel® Optane™ DC Persistent Memory in memory mode (Cascade Lake only)

Expansion Slots

8 Drive Version:

  • 4x PCIe 3.0 x16 full height, 10.5” length, double-width slots suitable for GPUs
  • 2x PCIe 3.0 x16 full height, half length, single-width slots
  • 1x PCIe 3.0 x4 full height, half length, single-width slot with x8 physical connector
  • 1x PCIe 3.0 x4 M.2 slot for 2280 and 22110 M-Key modules

16 Drive Version:

  • 2x PCIe 3.0 x16 full height, 10.5” length, double-width slots suitable for GPUs
  • 4x PCIe 3.0 x16 full height, half length, single-width slots
  • 1x PCIe 3.0 x4 full height, half length, single-width slot with x8 physical connector
  • 1x PCIe 3.0 x4 M.2 slot for 2280 and 22110 M-Key modules
Storage Subsystem
  • 8 or 16 hot-swap configurable SATA-3, SAS-3 or NVMe x4 2.5” x 15mm drive carriers
    • 12Gb SAS-3 or 6Gb SATA-3 SFF-8680 slots -or-
    • NVMe x4 32Gb slots
  • Up to 10 SATA-3 slots use no PCIe slots
  • 8x and 16x SAS-3 slots require 1 and 2 PCIe x16 HHHL slots respectively
  • 8x and 16x NVMe x2 slots require 1 and 2 x16 PCIe HHHL slots respectively
  • Further expansion up to 4PB possible using OSS JBOF expansion systems
  • 1x M.2 x4 and 2x SATA-DOM internal drive connections
On-board Devices

Intel® C621 Express chipset

ASPEED AST2500 BMC with IPMI 2.0 support, virtual media over LAN and KVM-over-LAN

Network Controllers

2x Intel X550 10 Gigabit Ethernet ports, each with an RJ-45 connector

Additional 25, 40 and 100Gb Ethernet, 100Gb Infiniband or 32Gb Fibre Channel interfaces available

USB

5 USB 3.0 ports: 2 on the rear panel, 2 on the front panel and 1 internal Type A

4 USB 2.0 ports: 2 on the rear panel and 2 internal headers

Input/Output

7.1 HD audio header, 1 VGA port, 2 COM ports (1 rear and 1 internal header)

2 Disk-on-Module ports

1 Trusted Platform Module (TPM) 1.2 20-pin header

BIOS

128 Mb SPI flash EEPROM with AMI BIOS

Supports PnP, PCI 3.0, ACPI 1.0-4.0, USB keyboard, UEFI 2.3.1, 1TB max BAR1 size and 256 PCI bus enumeration

Cooling Fans

4x 16CFM 40x28mm and 2x 60CFM 80x25mm high-powered fans mounted behind the front bezel cool add-in cards up to 300W; includes a manual fan speed control dial

Chassis

Rugged aluminum enclosure

3U honeycomb front bezel with replaceable air filter opens to provide access to up to 16 2.5” SATA, SAS or NVMe SSDs

Weight

45 lbs
Power Supply
  • Dual N+1 1200 watt AC 115-240V Power Supplies
  • Dual N+1 48V DC Input Power Supplies
Environment

Operating:

  • 5°C to 35°C (41°F to 95°F) at 0 to 915m (3,000ft) altitude
  • 5% to 90% non-condensing relative humidity, max dew point 21°C, max rate of change 5°C/hr

Non-Operating:

  • -20°C to 60°C (-40°F to 140°F)
  • 5% to 90% non-condensing relative humidity, max dew point 27°C, max rate of change 5°C/hr
Agency

Tested to conform to the following standards:

  • FCC – Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 4, Class A
  • CE Mark (EN55022 Class A, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A

Designed to conform to the following extended standards:

  • NOM-019
  • Argentina IEC60950-1
  • Japan VCCI, Class A
  • Australia/New Zealand AS/NZS CISPR 22, Class A
  • China CCC (GB4943), GB9254 Class A, GB17625.1
  • Taiwan BSMI CNS13438, Class A; CNS14336-1
  • Korea KN22, Class A; KN24
  • Russia/GOST ME01, IEC-60950-1, GOST R 51318.22, GOST R 51318.24, GOST R 51317.3.2, GOST R 51317.3.3
  • TUV-GS (EN60950-1 /IEC60950-1,EK1-ITB2000)
Compliance

RoHS 6 of 6, WEEE