Samsung SDS Unveils B300 GPU Service for the First Time in Korea, Maximizing Enterprise AI’s Inference Performance
□ Samsung Cloud Platform (SCP) releases NVIDIA’s latest B300 GPU-powered GPUaaS, a first in Korea
- Leading the GPU ecosystem by providing A100 in 2021, H100 in 2023, and B300 in 2026
□ B300 has more than three times the memory capacity of H100, significantly reducing inference bottlenecks for large language models (LLMs)
- Minimizing latency for high-performance AI services, including AI agents and image, video, and code generation
Samsung SDS has launched Korea’s first GPU-as-a-Service (GPUaaS) based on the B300 (Blackwell Ultra), NVIDIA’s latest GPU, on its Samsung Cloud Platform (SCP).
The new service strategically addresses the surging demand for high-performance computing as companies move beyond AI model development into the AI inference stage, where AI models are deployed in real-world services.
Equipped with 12-high HBM3E (High Bandwidth Memory), the B300 GPU delivers 288 GB of memory capacity and 8 TB/s of memory bandwidth per GPU. Compared to the H100, the B300 offers 3.6 times the memory capacity and 2.4 times the memory bandwidth, significantly improving memory performance for the complex computations required by AI inference.
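As a quick sanity check of the quoted ratios, the figures can be compared directly. Note that the H100 baseline numbers below (80 GB of HBM3 and roughly 3.35 TB/s of bandwidth for the SXM variant) are assumptions drawn from NVIDIA's published H100 specifications, not stated in this article:

```python
# Back-of-the-envelope check of the spec ratios quoted above.
# H100 figures (80 GB, 3.35 TB/s) are assumed from NVIDIA's published
# H100 SXM specifications; B300 figures are from the article.
b300_mem_gb, b300_bw_tbs = 288, 8.0
h100_mem_gb, h100_bw_tbs = 80, 3.35

mem_ratio = b300_mem_gb / h100_mem_gb  # capacity ratio
bw_ratio = b300_bw_tbs / h100_bw_tbs   # bandwidth ratio

print(f"memory capacity: {mem_ratio:.1f}x, bandwidth: {bw_ratio:.1f}x")
```

Both ratios round to the article's figures of 3.6x and 2.4x under these assumed H100 specs.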
This significantly reduces the data bottlenecks that occur when running large language models (LLMs), in which overall performance is held back by memory transfer speeds that lag behind the GPU’s high compute throughput.
Samsung SDS has led the development of the GPUaaS ecosystem by proactively providing GPUaaS powered by the A100 in 2021 and the H100 in 2023, building expertise in implementing, operating, and supporting GPUs as dedicated AI infrastructure in the cloud.
Customers adopting SCP B300 GPUaaS can use its high-capacity memory to process large-scale AI models efficiently, minimizing latency in high-performance AI services such as AI agents and image, video, and code generation and analysis.
Additionally, the subscription-based, pay-as-you-go model reduces initial investment risk and enhances cost efficiency. Even amid GPU supply constraints, companies can immediately deploy NVIDIA’s latest architecture through SCP and process sensitive enterprise data in a secure cloud environment backed by Samsung SDS’s security capabilities.
Samsung SDS plans to launch a serverless inference service and an AI training service in the third quarter of this year. The former charges clients only for the tokens they use, with no separate infrastructure fees; the latter automatically initiates distributed AI training as soon as developers submit their code and data.
Ho-joon Lee, Executive Vice President and Leader of the Cloud Service Business Division at Samsung SDS, said, “By leveraging SCP’s GPU efficiency capabilities, including resource optimization and energy saving, we will provide Korea’s first B300 GPU service to clients seeking to apply AI, from large and mid-market companies to small and medium enterprises (SMEs) and the public sector, and actively support their AI transformation (AX).”
Meanwhile, as a Public-Private Partnership (PPP) cloud service provider, Samsung SDS was the first to deploy an H100-based GPUaaS at the Daegu Center of the National Information Resources Service (NIRS). Samsung SDS has conducted various AI projects for the government, including the implementation of a government-wide shared hyperscale AI infrastructure and an intelligent work management platform. Additionally, as a cloud service provider (CSP) supporting high-performance computing projects led by the Ministry of Science and ICT, Samsung SDS provides GPUaaS to a wide range of clients, including approximately 60 AI-related SMEs, startups, research institutes, and universities.