Multi-node GPU Cluster is a Bare Metal Server-based service built on the high-performance NVIDIA SuperPOD architecture. Using its GPUs, it runs multiple user jobs or large-scale distributed workloads such as AI model training.
Through integration with the network resources of Samsung Cloud Platform, Multi-node GPU Cluster can handle high-performance AI jobs. By configuring GPUDirect RDMA (Remote Direct Memory Access) over InfiniBand switches, data I/O is exchanged directly between GPU memories across nodes, enabling high-speed AI/machine learning computation.
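As a rough sketch of how a distributed training job might pick up GPUDirect RDMA over InfiniBand, the NCCL environment variables below are commonly used; the adapter name (`mlx5_0`), node counts, endpoint, and `train.py` script are placeholders for your environment, not values defined by this service.

```shell
# Illustrative only: typical NCCL settings for GPUDirect RDMA over InfiniBand.
# Adapter name, endpoint, and script are assumptions for this example.
export NCCL_IB_HCA=mlx5_0        # InfiniBand adapter NCCL should use
export NCCL_NET_GDR_LEVEL=SYS    # permit GPUDirect RDMA between GPU and NIC
export NCCL_DEBUG=INFO           # log transport selection to verify GDRDMA is active

# Launch one training process per GPU on each of two nodes (PyTorch torchrun).
torchrun --nnodes=2 --nproc_per_node=8 \
         --rdzv_backend=c10d --rdzv_endpoint=head-node:29500 \
         train.py
```

With `NCCL_DEBUG=INFO`, the startup log shows which transport NCCL selected, so you can confirm that GPU-to-GPU traffic is going over InfiniBand rather than TCP.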
Multi-node GPU Cluster supports integration with various storage resources on Samsung Cloud Platform. High-performance SSD-based NAS File Storage is available with a direct high-speed network connection, and integration with Block Storage and Object Storage is also supported.
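A shared NAS File Storage volume is typically attached to each cluster node over NFS. The following is a minimal sketch under assumed values: the server address (`192.0.2.10`), export path, and mount point are placeholders, and the `nconnect` option (multiple TCP connections per mount, Linux kernel 5.3+) is an optional tuning choice, not a service requirement.

```shell
# Illustrative only: mounting a NAS File Storage export on a cluster node.
# Server address, export path, and mount point are placeholder values.
sudo mkdir -p /mnt/shared
sudo mount -t nfs -o vers=3,hard,nconnect=8 192.0.2.10:/export/ai-data /mnt/shared

# Verify the mount and its options.
mount | grep /mnt/shared
```

Mounting the same export on every node gives all training processes a common path for datasets and checkpoints.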