(Taipei, Taiwan – July 9, 2024) AAEON (Stock Code: 6579), a leading provider of advanced AI solutions, has released the inaugural offering of its AI Inference Server product line, the MAXER-2100. The MAXER-2100 is a 2U rackmount AI inference server powered by the Intel® Core™ i9-13900 Processor, designed to meet high-performance computing needs.
The MAXER-2100 also supports both 12th and 13th Generation Intel® Core™ LGA 1700 socket-type CPUs up to 125W, and features an integrated NVIDIA® GeForce RTX™ 4080 SUPER GPU. While the product ships with the NVIDIA® GeForce RTX™ 4080 SUPER by default, it is also compatible with, and an NVIDIA-Certified Edge System for, both the NVIDIA L4 Tensor Core and NVIDIA RTX™ 6000 Ada GPUs.
Since the MAXER-2100 is equipped with both a high-performance CPU and an industry-leading GPU, a key feature highlighted by AAEON at the product's launch is its capacity to execute complex AI algorithms on large datasets, process multiple high-definition video streams simultaneously, and utilize machine learning to refine large language models (LLMs) and inferencing models.
To support the low-latency operation such workloads demand, the MAXER-2100 offers up to 128GB of DDR5 system memory through dual-channel SODIMM slots. For storage, it includes an M.2 2280 M-Key slot for NVMe drives and two hot-swappable 2.5" SATA SSD bays with RAID support. The system also provides extensive expansion options, including one PCIe x16 slot, an M.2 2230 E-Key slot for Wi-Fi, and an M.2 3042/3052 B-Key slot with a micro SIM slot.
For peripheral connectivity, the server boasts a total of four RJ-45 ports, two running at 2.5GbE and two at 1GbE, along with four USB 3.2 Gen 2 ports running at 10Gbps. For industrial communication, the MAXER-2100 grants users RS-232/422/485 via a DB-9 port. Multiple display interfaces are available through HDMI 2.0, DP 1.4, and VGA ports, which leverage the exceptional graphics capabilities of the server's NVIDIA® GeForce RTX™ 4080 SUPER GPU.