How Upstage Beat the GPU Supply Shortage
"Samsung Cloud Platform GPUaaS"

Customer Success Stories
Customer success stories delve into the stories of businesses that have achieved remarkable growth through Samsung SDS services.

Key Summary
- Upstage adopts Samsung Cloud Platform GPUaaS to overcome the GPU supply shortage - Amid the global GPU supply shortage, Upstage adopted Samsung Cloud Platform GPUaaS instead of building its own data center, achieving three goals: stability, cost efficiency, and rapid development.
- High-performance GPUs accelerate AI model development - Samsung Cloud Platform GPUaaS offers the high-performance GPUs that are essential for large-scale LLM development, enabling Upstage to run a rapid cycle of AI model testing, validation, and application.
- Samsung Cloud Platform GPUaaS, a core growth engine for AI startups - Samsung SDS' solid technical support has reliably bolstered Upstage's AI model development.
Company Name: Upstage
Upstage has become the first Korean company to rank first on the "Open LLM Leaderboard," often referred to as the "Billboard of Machine Learning." This achievement places Upstage ahead of global tech giants like Meta and OpenAI. Recently, Upstage secured 100 billion won (approximately $72 million) in Series B funding, solidifying its position as a rising powerhouse in the AI industry. What is the driving force behind this growth? The answer lies in AI development focused on solving customers' real problems, supported by reliable GPU infrastructure.
"Our fundamental value is to effectively help enterprise customers solve their challenges. To achieve this value, we leverage AI technology as a means," says Minsung Kim, Lead of Cloud Business and LLM Business Development at Upstage. As he mentioned, Upstage focuses not merely on providing AI models but on creating tangible business impact that the customers can truly experience. In this process, Minsung describes Samsung Cloud Platform GPUaaS as Upstage's reliable engine room
Upstage Delivers a Real Solution, Not a "Wrapper".
Thank you for your time, Minsung! Could you please introduce yourself?
Nice to meet you. I’m Minsung Kim and I am leading the cloud and LLM business at Upstage. Upstage is a B2B AI solution provider that leverages two core AI technologies to create products tailored for enterprise customers.
Recently, many AI models have been emerging. How does Upstage differentiate itself from other AI companies?
We've been focusing on developing technology that addresses customers' real problems. Recently, there has been a surge of "wrapper-type" services that simply wrap around GPT models, but this trend is quickly becoming a "bad meme." Simple chatbot solutions alone struggle to provide tangible value to businesses.

Upstage's approach is distinct from that of traditional AI companies. Amid rapidly changing AI trends, we focus on solving the problems that enterprise customers are actually experiencing rather than merely chasing trends, thereby creating genuine value through AI technology. We fine-tune features and performance for specific industries or detailed work scenarios, then keep advancing that refinement. Through this process, we are creating solutions that truly satisfy our customers.

Upstage's "Eye and Brain" Solve Businesses' Challenges
What are the two core technologies of Upstage?
Upstage's two core technologies are as follows: The first is computer vision technology, which serves as the "eyes" for enterprises, and the second is generative AI technology, which serves as the "brain." Notably, Upstage's Optical Character Recognition (OCR) technology has been globally recognized for its performance. For generative AI, we have our proprietary "Solar" model, which we developed independently. Through these two technologies, we provide various solutions tailored to customers' needs. One representative example is AI-powered document processing automation.
"Eyes and Brain"—what a powerful combination! Could you share some real-world application examples?
Upstage's AI-powered document processing automation, driven by its OCR engine, is widely used by domestic insurance companies. Since it was first offered in 2022, this technology has handled over 60% of the total document volume processed by insurance companies in South Korea. The documents processed by insurance companies do not follow standardized templates. For instance, medical expense receipts are issued by many different hospitals, such as Hospital A and Hospital B, and each hospital uses a completely different format. While they contain common information like hospital names, diagnosis details, and claim amounts, the layout and structure vary significantly from one hospital to another.
Previously, insurance companies had to manually input all of this information one by one. However, our OCR engine automatically recognizes receipts issued by any hospital and converts them into the database format the insurance companies want. Through our AI document processing automation, insurance companies are saving time and effort in their claims handling process.
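To make the idea concrete, here is a minimal Python sketch of the kind of normalization step that follows OCR in such a pipeline: differently labelled fields from each hospital's receipt are mapped onto the single schema an insurer's database expects. The field labels, the `ClaimRecord` schema, and the `normalize` helper are illustrative assumptions for this article, not Upstage's actual engine or API.

```python
from dataclasses import dataclass

# Hypothetical OCR output: (field_label, value) pairs whose labels and order
# differ from hospital to hospital. Real OCR output is richer than this.
RawFields = list[tuple[str, str]]

@dataclass
class ClaimRecord:
    """Common schema the insurer's database expects, regardless of receipt layout."""
    hospital_name: str
    diagnosis: str
    claim_amount: int  # in KRW

# Per-hospital label variations mapped onto the common schema (illustrative only).
LABEL_ALIASES = {
    "hospital_name": {"hospital", "medical institution", "issuer"},
    "diagnosis": {"diagnosis", "diagnosis details", "condition"},
    "claim_amount": {"amount", "total charge", "claim amount"},
}

def normalize(raw: RawFields) -> ClaimRecord:
    """Map differently labelled OCR fields into one insurer-friendly record."""
    values: dict[str, str] = {}
    for label, value in raw:
        for field, aliases in LABEL_ALIASES.items():
            if label.strip().lower() in aliases:
                values[field] = value
    return ClaimRecord(
        hospital_name=values.get("hospital_name", ""),
        diagnosis=values.get("diagnosis", ""),
        claim_amount=int(values.get("claim_amount", "0").replace(",", "")),
    )

# Receipts from two hospitals with different layouts resolve to the same record shape.
print(normalize([("Issuer", "Hospital A"), ("Condition", "Sprain"), ("Total charge", "120,000")]))
print(normalize([("Hospital", "Hospital B"), ("Diagnosis details", "Flu"), ("Amount", "45,000")]))
```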
GPU Performance Is The Decisive Factor in AI Model Development.
Developing AI models seems to require a lot of GPUs.
Specifically, how do GPUs impact AI models?
When training LLMs, multiple GPUs need to be used simultaneously. For example, when training a single LLM with 50 or 100 GPUs, these GPUs need to communicate with each other extensively. However, if even a single issue occurs during this GPU-to-GPU communication, the entire training process comes to a halt. Since LLM training requires significant time and cost, a single hardware failure that interrupts training can result in substantial losses. This is why a swift recovery process is critical.
One might ask, "Why not use twice as many GPUs with 50% of the performance instead of high-performance H100 GPUs?" However, in LLM training environments, this approach doesn't hold up. Increasing the number of hardware units proportionally raises the probability of failure, and since even a single malfunction forces the entire training process to halt, the risk of downtime escalates significantly. Ultimately, a task that can be completed in one month using 50 H100 GPUs could take more than double the time with lower-performance GPUs. This is why we use high-performance GPU clusters like the NVIDIA H100 for LLM training.
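The scaling argument can be sanity-checked with simple probability: if failures are independent, the chance that at least one GPU fails during a run is 1 - (1 - p)^N, so doubling N to compensate for weaker GPUs roughly doubles the exposure to a run-halting fault while p is small. Below is a minimal Python sketch; the per-GPU failure rate is a purely hypothetical assumption, not a figure from Upstage or NVIDIA.

```python
# Back-of-the-envelope check of why doubling the GPU count raises the risk
# of a training-halting failure, assuming independent failures.
def prob_any_failure(num_gpus: int, p_single: float) -> float:
    """Probability that at least one GPU fails during the run: 1 - P(no GPU fails)."""
    return 1.0 - (1.0 - p_single) ** num_gpus

P_SINGLE = 0.02  # hypothetical 2% chance a given GPU fails during a one-month run

for n in (50, 100):
    print(f"{n:>3} GPUs -> P(at least one failure) = {prob_any_failure(n, P_SINGLE):.1%}")

# With the assumed rate, 50 GPUs give roughly a 64% chance and 100 GPUs roughly an
# 87% chance of at least one interruption over the run, before even counting the
# longer wall-clock time that slower GPUs would need.
```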

We've Beaten the GPU Supply Shortage with a Cloud Strategy.
There has been a global GPU supply shortage. How is Upstage addressing this challenge?
The GPU supply shortage has been a significant issue for companies trying to purchase and manage GPUs directly, and 2023 was an especially challenging year for AI developers. As of last year, the best NVIDIA GPU was the H100. As global demand for the H100 surged, NVIDIA's stock price soared, but securing GPUs became increasingly difficult. In this challenging environment, we made a strategic decision.

Instead of building our own data center to manage GPUs directly, we adopted a strategy of using cloud-based GPUs, such as the GPUaaS offered by Samsung Cloud Platform. This approach involves renting GPUs that the cloud provider has already secured and using them as needed. As a result, we have been able to seamlessly leverage high-performance NVIDIA GPUs despite the global GPU shortage.

Thanks to this strategic decision, Upstage was able to maintain its AI development speed while reducing the initial investment costs and management burden associated with securing GPUs. Given that efficient use of capital is crucial in a startup environment, leveraging cloud-based GPUs has provided significant advantages.
Why We Chose GPUaaS: Stability, Cost, and Speed.
Among many cloud services, what made you choose Samsung Cloud Platform GPUaaS?
We started using Samsung Cloud Platform GPUaaS in December last year. There were three main reasons for our choice: a detailed Service Level Agreement (SLA) with systematic technical support, a reasonable cost structure, and excellent speed.
Reliable infrastructure is essential for large-scale models like LLMs. Samsung Cloud Platform GPUaaS
offers a dependable environment for AI model training, enabling developers to focus solely on development.
Also, startups like Upstage are naturally sensitive to costs. With GPUaaS, high-performance GPUs can be used on demand, only as needed, making it highly cost-efficient. GPUaaS is therefore an excellent alternative, especially for startups with fluctuating usage patterns or spot-specific processing needs.
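As a rough way to see the cost argument, the sketch below compares owning GPUs around the clock with renting them only for the hours actually used. Every figure in it is a placeholder to be replaced with real quotes; none of the numbers come from Upstage or Samsung SDS.

```python
# Illustrative break-even sketch: buying GPUs vs. renting them on demand.
def owning_cost(purchase_per_gpu: float, monthly_opex_per_gpu: float,
                num_gpus: int, months: int) -> float:
    """Total cost of buying GPUs and operating them for the whole period."""
    return num_gpus * (purchase_per_gpu + monthly_opex_per_gpu * months)

def renting_cost(hourly_rate_per_gpu: float, num_gpus: int,
                 busy_hours_per_month: float, months: int) -> float:
    """Total cost of renting the same GPUs only for the hours actually used."""
    return num_gpus * hourly_rate_per_gpu * busy_hours_per_month * months

# Placeholder scenario: 8 GPUs, a 6-month project, cluster busy ~30% of the time.
print(f"own:  ${owning_cost(30_000, 500, num_gpus=8, months=6):,.0f}")
print(f"rent: ${renting_cost(3.0, num_gpus=8, busy_hours_per_month=0.30 * 730, months=6):,.0f}")

# With spiky utilization, renting only the hours used can undercut ownership;
# at sustained 24/7 load the comparison tips the other way.
```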
In addition, Samsung Cloud Platform GPUaaS aligns well with Upstage's development philosophy. Our
strategy emphasizes trial and error. We experiment extensively, learn quickly from failures, and rapidly
design new experiments. This cycle is repeated multiple times. Our fast-paced development approach
synergizes perfectly with the robust GPU infrastructure of the Samsung Cloud Platform, which provides
swift support.
How have Upstage's developers reacted to Samsung SDS's GPUaaS service?
Our engineers have been very satisfied. First, thanks to the comprehensive technical support and Service Level Agreement (SLA) mentioned earlier, we've minimized training downtime. Additionally, we've been highly satisfied with the ability to utilize high-performance GPUs at a reasonable cost whenever needed. While it’s difficult to quantify, the exceptional responsiveness of Samsung SDS's experts clearly distinguishes them from other GPU providers. In particular, a representative provided real-time support, ensuring a seamless experience in both utilizing and managing the service.

The AI Ecosystem Is the Driving Force of National Competitiveness.
Samsung Cloud Platform GPUaaS is helping Upstage enhance its competitiveness!
Moving forward, what efforts do you think Korea needs to make to boost its AI competitiveness? Can you
share your thoughts?
In the future, LLMs are likely to lead to cartel-like formations at the national level. Therefore, to enhance its AI competitiveness, Korea needs an LLM that truly understands and reflects the unique characteristics of the country. For domestic companies to stay competitive in the global AI race in particular, securing independent technological capabilities is crucial.
I believe the AI computing infrastructure expansion plan announced by the Ministry of Science and ICT
this March is highly positive in this regard. Such government policies supporting AI development will
significantly help startups in their R&D endeavors. Upstage is also leveraging this infrastructure to
develop high-quality solutions.
GPUaaS Serves as the "Engine Room" of Upstage.
Lastly, what does Samsung Cloud Platform GPUaaS mean to Upstage?
Samsung Cloud Platform GPUaaS serves as a "reliable engine room" for Upstage. When Upstage develops AI
models and provides products based on these models, it is crucial to meet the timelines desired by
customers. Therefore, in AI development, time is the most critical resource. The key lies in how well,
how quickly, and how efficiently the models can be trained to produce excellent results. GPUaaS plays a
central role in this process.
Samsung Cloud Platform GPUaaS acts as an engine that directly accelerates Upstage's speed toward
achieving its mission. With a reliable infrastructure, Upstage is creating a virtuous cycle of rapid
experimentation, validation, and customer application.

Efficiently Leverage High-Performance GPUs with GPUaaS.
Reliable and robust GPU infrastructure is essential for the development of innovative AI models.
Samsung Cloud Platform GPUaaS provides the latest high-performance GPU infrastructure for AI model
training and inference, offering scalability and flexibility. Need to accelerate AI research and
development without upfront costs? Discover Samsung Cloud Platform GPUaaS. Samsung SDS partners with leading innovators like Upstage to drive the growth of Korea's AI ecosystem.
Begin your AI innovation journey with Samsung Cloud Platform GPUaaS

B2B marketer with a big passion for data modeling and AI
Delivering accurate insights and wholehearted effort to help customers achieve success