Servers and storage systems from trusted brands with warranties and support.
Over 100 server and storage models from Dell, HPE, Lenovo, and Huawei. Browse by brand, category, or generation. All items are available to order directly from our warehouse in China with delivery to Kazakhstan.
The catalog includes 1U and 2U rack servers for virtualization, VDI, and cloud workloads, high-density GPU servers for AI and ML (NVIDIA H100, H200, L40S, A100), enterprise all-flash storage systems, and hybrid arrays. Each model's specifications include processor type, memory capacity, drive configuration, and form factor.
To order a server in Almaty, use the request form: specify the model or describe your requirements, and we'll prepare a quote. For custom configurations, use the online GPU server configurator.
Flagship with integrated 8-GPU modules for training neural networks.
4U server with up to 10 double-wide (DW) GPUs for AI inference.
2U GPU server with DDR5 and PCIe 5.0.
4U GPU server for AI inference.
Flagship for supercomputer clusters.
Supercomputing platform with DLC.
4U AI platform V4 with the latest GPUs.
2U AI-optimized version of SR650 V4.
HPC node with AMD CPU + NVIDIA GPU and Neptune liquid cooling.
HPC node on AMD EPYC with Neptune.
Node with NVIDIA GPU and Neptune DWC.
Node with Xeon Max Series (HBM) and Neptune.
Neptune direct liquid-cooled unit.
4U AI server with DLC for LLM training.
4U AI platform on AMD EPYC + 8× NVIDIA H100/H200.
4U AI platform with 8× SXM5 GPU.
5U liquid-cooled server with 8× NVIDIA H200 / B200 (Blackwell).
Composable HPC module for AI clusters.
2U server built on NVIDIA GH200 NVL2 (Grace CPU + Hopper GPU).
4U AI version with NVIDIA GH200 NVL2 support.
HPC node for Apollo 2000 Gen10.
Higher-density GPU variant of the 9712 server.
Platform with DLC for NVIDIA GB200 rackmount solutions.
Flagship platform for supercomputer clusters.
XE9785 liquid-cooled data center chassis.
AI platform on AMD MI300X with DLC.
Extreme AI platform for LLM training, with DLC.
Advanced AI platform for LLM with 8 GPUs.
Next-generation server platform for LLM training.
2U R770 for AI inference, up to 6 GPUs.
6U liquid-cooled AMD MI300X platform.
6U liquid-cooled XE9680 for DLC data centers.
6U flagship for LLM training: 8× NVIDIA H100/H200.
2U liquid-cooled server with 4× NVIDIA H100.
4U AI platform with 4× NVIDIA H100.
4U AI server with AMD EPYC + 4× NVIDIA A100.
4U AI platform for distributed training.
2U dedicated server for AI inference and analytics.