


Multi-node GPU Cluster uses Bare Metal Servers built on the high-performance NVIDIA SuperPOD architecture. Using these GPUs, it handles multiple user jobs or high-performance distributed workloads such as large-scale AI model training.
Through integration with the network resources of Samsung Cloud Platform, Multi-node GPU Cluster can handle high-performance AI jobs. By configuring GPUDirect RDMA (Remote Direct Memory Access) over InfiniBand switches, data I/O is exchanged directly between GPU memories, enabling high-speed AI/machine learning computation.
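In practice, GPUDirect RDMA over InfiniBand is typically enabled at the job level through NCCL settings when launching a distributed training job. The sketch below is illustrative only: the adapter name, node and GPU counts, rendezvous endpoint, and `train.py` script are assumptions, not values from this document.

```shell
# Enable the InfiniBand transport and GPUDirect RDMA for NCCL-based
# distributed training (adapter name and counts are illustrative).
export NCCL_IB_DISABLE=0        # use InfiniBand instead of TCP sockets
export NCCL_IB_HCA=mlx5         # select the InfiniBand host channel adapters
export NCCL_NET_GDR_LEVEL=SYS   # permit direct GPU-memory-to-NIC transfers

# Launch one training process per GPU across two bare-metal nodes
# (train.py is a hypothetical PyTorch DDP training script).
torchrun --nnodes=2 --nproc_per_node=8 \
         --rdzv_backend=c10d --rdzv_endpoint=node0:29500 \
         train.py
```

With these settings, NCCL collectives (all-reduce, all-gather) move gradients directly between GPU memories over InfiniBand, bypassing host memory copies.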
Multi-node GPU Cluster supports integration with various storage resources on the Samsung Cloud Platform.
High-performance SSD-based NAS File Storage directly connected to the high-speed network, or NVMe parallel file system storage, is available for use, and integration with Block Storage and Object Storage is also supported.
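For NAS File Storage, integration on each cluster node typically amounts to an NFS mount. A minimal sketch follows; the endpoint hostname, export path, and mount options are hypothetical placeholders, not values taken from this document.

```shell
# Mount an SSD NAS File Storage volume over NFS on a cluster node.
# The endpoint and export path below are hypothetical placeholders.
sudo mkdir -p /mnt/training-data
sudo mount -t nfs -o hard,nconnect=8 \
     nas.example.internal:/export/training-data /mnt/training-data

# Verify the mount before pointing training jobs at it.
df -h /mnt/training-data
```

The `nconnect` option (Linux kernel 5.3+) opens multiple TCP connections to the NFS server, which can improve throughput for parallel training workloads reading from shared storage.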

