Athena

Specifications

The Athena deep learning GPU server meets the following performance specifications:

  • GPU aggregate throughput of the following (see the arithmetic sketch after this list):
    • Floating Point (FP) 16: at least 170 Teraflops (TFLOPs)
    • Floating Point (FP) 32: at least 85 Teraflops (TFLOPs)
    • Floating Point (FP) 64: at least 42.5 Teraflops (TFLOPs)
  • Minimum of 8 GPUs, interconnected with a high-speed protocol that enables fast GPU-to-GPU memory access
  • Unified memory between the GPU and Central Processing Unit (CPU)
  • Tuned and preloaded with freely-available and commonly-used Deep Learning frameworks
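
For reference, the per-GPU throughput implied by these aggregate figures is simply the aggregate divided by the minimum GPU count. The short Python sketch below is illustrative only and uses no values beyond those listed above:

    # Per-GPU throughput implied by the aggregate requirements,
    # assuming the minimum of 8 GPUs (all figures taken from the list above).
    NUM_GPUS = 8
    AGGREGATE_TFLOPS = {"FP16": 170.0, "FP32": 85.0, "FP64": 42.5}

    for precision, total in AGGREGATE_TFLOPS.items():
        print(f"{precision}: at least {total / NUM_GPUS:.2f} TFLOPs per GPU")
    # FP16: at least 21.25, FP32: at least 10.62, FP64: at least 5.31 TFLOPs per GPU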

In addition, the Athena server has the following physical specifications:

  • Contains 8x Tesla P100 GPU cards (a minimal visibility check is sketched after this list)
  • GPU memory of at least 128GB HBM2 (8x 16GB)
  • System Memory of at least 512 GB 2133 MHz Double Data Rate 4th-generation (DDR4) Synchronous Dynamic Random-Access Memory (SDRAM)
  • Minimum networking requirement of 4x Enhanced Data Rate (EDR) InfiniBand and 2x 10GigE
  • 2x CPU microprocessors, each with at least 16 cores and 32 threads, a minimum frequency of 2.3 GHz, and 40MB of shared cache
  • GPUs have at least 28672 cores (8x 3584 cores)
  • Storage minimum of 4x 1.92 TB Solid State Drive (SSD) Redundant Array of Independent Disks (RAID) 0
  • Physical weight is at most 65 kg
  • Physical dimensions are 86.6 D x 44.4 W x 13.1 H (cm) in a 3 Rack Unit (RU) rackmount
  • Maximum power requirement of 3200W
  • Operating temperature range of 10 – 30°C
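
The spec does not name the preloaded frameworks, so the following is only a minimal sketch, assuming PyTorch happens to be among them, of how a user might confirm that all eight Tesla P100 cards are visible:

    # Minimal GPU-visibility check; PyTorch is an assumption, not part of the spec.
    import torch

    print("CUDA available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())  # expected: 8 on Athena
    for i in range(torch.cuda.device_count()):
        # each device name should report a Tesla P100 card, per the list above
        print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")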

The Athena server is connected to a Network-Attached Storage (NAS) unit for data-intensive tasks:

  • Dual Controllers
  • Networking ports 4x 10GbE
  • Hard Disk Drives (HDDs) 8 x 6TB 7200 rpm
  • Dimensions of 2 rack units (2RU)

Two Development Servers

Testing and development will be supported by two reliable, high-performance Graphics Processing Unit (GPU) servers. Each unit shall meet the following key performance requirements:

  • At least 7 Teraflops (TFLOPs) of Floating Point (FP) 32 throughput per GPU (per-unit totals are sketched after this list)
  • At least 336.5 GB per second (GB/s) of memory bandwidth per GPU
  • Tuned and preloaded with freely-available and commonly-used Deep Learning frameworks
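
As a rough arithmetic check, a single development unit's aggregate figures follow from the per-GPU values above multiplied by the minimum of 4 GPUs per unit (specified in the physical requirements below); the sketch uses only numbers from this page:

    # Aggregate per-unit figures for a development server.
    GPUS_PER_UNIT = 4                   # minimum GPUs per unit (from the physical requirements below)
    FP32_TFLOPS_PER_GPU = 7.0           # "at least" figure from the list above
    MEM_BANDWIDTH_GBS_PER_GPU = 336.5   # "at least" figure from the list above

    print("FP32 per unit: at least", GPUS_PER_UNIT * FP32_TFLOPS_PER_GPU, "TFLOPs")                   # 28.0
    print("Memory bandwidth per unit: at least", GPUS_PER_UNIT * MEM_BANDWIDTH_GBS_PER_GPU, "GB/s")   # 1346.0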

Each GPU server unit in the system shall meet the following physical requirements:

  • Contains at least 4 GPUs
  • At least 12 Gigabytes (GB) of graphics double data rate type five (GDDR5) memory per GPU
  • At least 64GB of double data rate 4th-generation (DDR4) system memory.
  • A minimum of 4-way peripheral component interconnect express (PCIe) x16 slots.
  • Network port for 1 Gigabit Ethernet (GbE)
  • At least 1x CPU microprocessor with 6 cores, 3.5 Gigahertz (GHz) frequency and 15MB cache.
  • At least 3072 cores per GPU
  • Storage (data) of 8x 4 Terabytes (TB) 7200 revolutions per minute (RPM) 3.5-inch serial advanced technology attachment (SATA) hard disk drives (HDDs) in redundant array of independent disks (RAID) 6 (usable capacity is sketched after this list).
  • Storage (operating system) minimum of 250GB SATA internal SSD
  • Physical weight to be at most 23kg
  • Physical dimensions to not exceed 87 D x 45 W x 8.9 H (cm) – 2 Units (U) rackmount
  • Maximum power requirements 2000 watts (W)
  • Operating temperature range 10 – 30°C.
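
For context on the data-storage requirement above: RAID 6 reserves the equivalent of two disks for parity, so the usable capacity of the 8x 4TB array can be estimated as below (standard RAID 6 arithmetic, not a figure stated in the spec):

    # Usable capacity of the development servers' data array.
    NUM_DISKS = 8
    DISK_TB = 4

    raw_tb = NUM_DISKS * DISK_TB            # 32 TB raw
    usable_tb = (NUM_DISKS - 2) * DISK_TB   # RAID 6 reserves two disks' worth of parity: 24 TB usable
    print(f"raw: {raw_tb} TB, usable under RAID 6: {usable_tb} TB")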