Intelligent Computing

AI Server Systems for the Next Generation of Intelligence

Flagship AI training & inference servers with up to 8 GPUs, supporting NVIDIA and AMD accelerators for large-scale model development.

Up to 8 GPUs

Supports NVIDIA H100/H200 and AMD Instinct MI300X OAM modules for maximum AI compute density.

PCIe Gen5 / NVLink

High-bandwidth GPU interconnect for efficient distributed training and inference.
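To see why interconnect bandwidth matters for distributed training, consider a rough back-of-envelope estimate of gradient all-reduce time. This is a hedged sketch only: the ring all-reduce traffic formula is standard, but the model size (7B parameters in FP16) and the bandwidth figures (~900 GB/s for NVLink-class links, ~64 GB/s for a PCIe Gen5 x16 slot) are illustrative assumptions, not measured numbers for any specific system.

```python
def allreduce_time_s(buffer_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Estimate ring all-reduce time for one gradient buffer.

    A ring all-reduce moves roughly 2*(N-1)/N of the buffer
    through each GPU's interconnect link.
    """
    traffic = 2 * (n_gpus - 1) / n_gpus * buffer_bytes
    return traffic / bw_bytes_per_s

# Illustrative: 7B parameters in FP16 -> ~14 GB of gradients, 8 GPUs.
grad_bytes = 7e9 * 2

print(round(allreduce_time_s(grad_bytes, 8, 900e9), 3))  # NVLink-class link -> 0.027
print(round(allreduce_time_s(grad_bytes, 8, 64e9), 3))   # PCIe Gen5 x16 link -> 0.383
```

Under these assumptions the per-step synchronization cost differs by more than an order of magnitude, which is why high-bandwidth GPU-to-GPU links dominate scaling efficiency for large-model training.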

Advanced Cooling

Liquid-cooling ready, with an optimized airflow design for sustained peak performance.

Modular Design

Flexible configuration with hot-swappable components for rapid deployment and maintenance.

NOVA SERIES

Intelligent Computing Products

Nova-8GPU-OAM-G1

8U | 8x OAM GPU | AMD EPYC Turin | 32x DDR5 | 2x 10GbE + 4x 400G InfiniBand

Flagship OAM AI server designed for large-scale model training. Supports next-generation GPU modules with industry-leading compute density.

Nova-4GPU-Rack-G1

4U | 8x PCIe Gen5 GPU | Intel Eagle Stream | 24x DDR5 | 2x 25GbE

High-performance 4U GPU server balancing compute density with thermal efficiency. Ideal for mixed training and inference workloads.

Nova-2GPU-Rack-G1

2U | 2x PCIe Gen5 GPU | AMD EPYC Genoa | 24x DDR5 | 2x 25GbE

Cost-effective 2U AI inference server for edge data centers and enterprise AI deployment.