While earlier applications of HPC were primarily limited to academic research and government projects, HPC now stretches across several verticals and markets, including commercial, industrial and consumer use cases. Also known as the democratization of HPC, this widespread use of advanced computing is largely due to the rising importance of data analytics, cloud-based access to HPC clusters, increased interest in artificial intelligence solutions and the convenience of HPC managed services.
The reality is that HPC now underpins much of today's product development, which is why its use cases span such a diverse landscape. Here's a look at how companies in varying verticals leverage HPC systems and services to enhance performance and drive business growth.
Higher education and research
HPC solutions provide a level of computational power that supports high-end, large-scale research projects across a wide range of subjects, including, but certainly not limited to, structural analysis, mathematical modeling, biology, chemistry, genetics, physics, climate change and psychology.
In fact, advanced computing power is so essential to research projects that universities actively work to optimize their HPC resources to support more academic and scientific analysis opportunities. This includes strengthening the networks and creating central inventories for projects so the research holdings aren’t limited to data islands. Harvard University is among the institutions responding to this need to optimize, largely due to the institution’s rapidly growing research computing department needing stronger infrastructure to support data-intensive workloads and collaboration.
“With three data centers, networking becomes a big focus,” explained Scott Yockel, Harvard’s director of research computing. “Networking can be a limiting factor to creating good collaboration because you have to move the data. Otherwise, you end up with small islands of storage and compute without the economy and advantages of aggregating data.”
An advanced HPC network also serves research teams at Montana State University. In one case, an MSU electrical and computer engineering researcher was able to share hundreds of gigabytes of data with colleagues at the University of California, San Diego to conduct collaborative research on marmoset vocalizations. Their combined studies could yield automatic signal-detection algorithms that improve human hearing aids. To further their reach, MSU connected the Montana-based researchers with 500 students at a Family Science Night. This combination of research, instruction and outreach created opportunities beyond the lab, all of which were made possible by overcoming data islands.
Yockel also noted that upgrading equipment can produce a tenfold increase in data: instruments and workstations have an average 10-year lifecycle, and outdated systems slow the processes they support. While this is certainly positive, it also means the workload on system networks grows tenfold, which can limit the impact of the new HPC investment. HPC managed services teams conduct a holistic analysis of an organization's HPC infrastructure and needs before costly investment decisions are made. Academic research teams can then feel confident that they are taking the appropriate steps in refreshing instruments, workstations and operating systems while increasing computational efficiency. Ongoing system monitoring and security support can also help universities protect the intellectual property of their researchers.
Science and biotech
Advanced computing is a necessary tool in scientific research environments and often plays a prominent role in accelerating the rate at which researchers can discover groundbreaking solutions for society’s most pressing challenges.
The medical applications of HPC drive research to better understand the diseases and conditions that plague patients, as well as to develop the technology, devices, equipment and therapies to address those challenges. For instance, the Centers for Disease Control and Prevention used a supercomputer to better understand the hepatitis C virus, one of the major causes of liver disease. By developing a detailed model of the virus, the researchers paved the way for new therapies. Researchers at the Mary Bird Perkins Cancer Center in Baton Rouge, Louisiana used supercomputer simulations to conduct clinical trials. Not only did the HPC simulation save the team more than $12 million in research costs, but the results from the trial also helped improve success rates for long-term, advanced cancer care. The researchers made headway in lowering the incidence of second cancers in children who received radiation therapy.
Along with conducting original research, data scientists use existing computing solutions to develop more advanced systems that can facilitate more effective applications of the technology. For example, the National Nuclear Security Administration and the Office of Science, both part of the U.S. Department of Energy, launched a collaborative project dedicated to elevating HPC. The goal is to develop an exascale ecosystem, capable of a quintillion (a billion billion) calculations per second and featuring advanced simulation and modeling solutions. Led by senior scientists, project management experts and engineers from six of the largest DOE national laboratories, the Exascale Computing Project aims to develop breakthrough modeling and simulation systems that can address the most critical challenges in scientific discovery, energy assurance, economic competitiveness and national security by 2021.
The Grand View Research report noted that programs like the Exascale Computing Project will make significant contributions to the growth and advancement of the HPC market in the coming years. Companies that leverage HPC managed services to augment their internal HPC and IT teams can become more competitive, agile and effective.
Engineering
The acceleration and enhancement of analysis capabilities are among the most effective applications of HPC solutions in engineering. HPC can run computer simulations, stress and strain analysis, thermal analysis and computational fluid dynamics with much greater speed than average workstations. Multiphysics simulation, for instance, requires time-consuming and complex calculations; rather than engineers tackling the equations by hand, HPC systems can complete the calculations to speed up the process. HPC can also support engineers with research, complex computing architectures, data processing, code optimization, application execution and large data transfers.
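As a toy illustration of why these workloads suit HPC (this sketch is not from the article), consider a one-dimensional heat-diffusion simulation. Each grid point updates only from its immediate neighbors, so the grid can be split across many processors, which is exactly the structure HPC clusters exploit in thermal and multiphysics analysis.

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of 1D heat diffusion.
    Every interior point updates independently from its neighbors,
    so the grid can be partitioned across many compute nodes."""
    un = u.copy()
    un[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

# A hot spot in the middle of an otherwise cold rod.
u = np.zeros(101)
u[50] = 100.0

for _ in range(500):
    u = heat_step(u)

# Heat spreads outward; the peak temperature steadily decreases.
print(u.max())
```

A real engineering run extends the same stencil idea to millions of 3D cells and couples it with structural and fluid solvers, which is where cluster-scale parallelism pays off.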
A subset of electronics engineering, semiconductor design is largely reliant on HPC-enabled solutions, especially due to the increasing demand for smart electronic devices. Advanced simulation and modeling tools allow engineers to test more design iterations in less time, which improves product designs while also decreasing time to market in a highly competitive environment.
Engineers also utilize HPC solutions for more advanced generative design projects. For instance, Airbus used generative design to reimagine commercial plane designs. In one instance, the company determined that it was possible to engineer compartment partitions that were both lighter and much stronger than the current ones. Reaching that conclusion required creating a multitude of product iterations, which is where HPC expedited the process.
Arup, an international engineering and design firm that provides services for building and infrastructure projects, uses cloud-computing services to run applications for seismic engineering and building-structure evaluation. HPC reduced the time required to run the necessary structural analysis, allowing Arup to conduct several large-scale projects at the same time. High-speed storage also supports the complex structural calculations.
HPC managed services allow engineering firms to complete scalable, automated and cost-effective projects. HPC solution providers also support teams with system maintenance and operational security, ensuring firms can avoid project delays and costly downtime.
Manufacturing
In the manufacturing industry, HPC applications span product research and development, supply chain management and operations. Modeling, simulation and data analysis for industrial processes can especially reduce product costs and accelerate time to market. Research from MarketsandMarkets noted that manufacturers need HPC solutions to speed access to data and improve computing speed, efficiency and overall performance, which will drive significant growth in the HPC market in the next few years.
Accelerated big data analysis is also of high value to manufacturers. HPC-enabled systems constantly collect and analyze data, which informs real-time adjustments to tools and processes within the manufacturing flow. This can improve product quality, shorten time to market and boost the company’s competitive edge.
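A minimal sketch of the kind of streaming check such a system performs (the window size, threshold and sensor readings below are illustrative assumptions, not details from the article): each new sensor reading is compared against the recent rolling mean, and a reading that drifts several standard deviations away is flagged so the process can be adjusted in real time.

```python
from collections import deque

def make_monitor(window=50, limit=3.0):
    """Streaming quality check: flag a reading that drifts more than
    `limit` standard deviations from the rolling mean of recent data.
    Window and threshold are illustrative, hypothetical parameters."""
    buf = deque(maxlen=window)

    def check(reading):
        if len(buf) >= 10:  # need a baseline before judging
            mean = sum(buf) / len(buf)
            var = sum((x - mean) ** 2 for x in buf) / len(buf)
            std = var ** 0.5 or 1e-9
            anomaly = abs(reading - mean) > limit * std
        else:
            anomaly = False
        buf.append(reading)
        return anomaly

    return check

check = make_monitor()
readings = [10.0, 10.1, 9.9, 10.05, 9.95] * 4 + [14.0]  # sudden spike
flags = [check(r) for r in readings]
# Only the final spike deviates enough from the baseline to be flagged.
```

At manufacturing scale, the same pattern runs across thousands of sensors simultaneously, which is where HPC-class throughput becomes necessary.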
The DOE Advanced Manufacturing Office launched a dedicated High Performance Computing for Manufacturing program in 2015, which connects U.S. manufacturers with national laboratories. The DOE’s National Labs house highly advanced HPC resources - including some of the fastest supercomputers in the world - that can enable impactful industry research. The goal of HPC4Mfg, therefore, is to advance the technology through optimizing designs, predicting performance and reducing the number of testing cycles during the development stage.
A recent HPC4Mfg project focuses on integrated HPC modeling, simulation and visualization capabilities for steel manufacturing. Complex reactive-flow and 3D simulations generally take 30 days or more to complete, which limits efforts to increase the energy efficiency of blast furnaces. However, the DOE partnered Purdue University Calumet and the Lawrence Livermore National Laboratory with an integrated steel mill so the researchers could transfer the existing codes to HPC clusters. Using those clusters, they're also running simulations under varying conditions to analyze blast furnace operations. If successful, their work with HPC could improve simulation resolution and times by a factor of 100, optimizing the blast furnace process enough to save the iron and steel industry $80 million each year.
With increasingly affordable and accessible HPC systems and services, manufacturers of all sizes and specialties can benefit from advanced computing power. For instance, small manufacturers and component designers can use HPC modeling and simulations to execute more efficient product-design testing in research and development facilities, eliminating the cost and time associated with creating several physical prototypes. No matter the size of the manufacturing company, HPC systems and services can make a significant impact through resolving operational challenges and increasing efficiencies.
Oil and gas
Oil and gas companies must manage and store massive amounts of seismic data, then analyze that information to infer optimal drilling locations. HPC solutions can deliver those analytics in real time, processing data fast enough for companies to drill new wells before their competitors arrive. Beyond identifying locations, the additional computing power can aid in reservoir and basin modeling, optimizing production, minimizing environmental risks and increasing operational safety.
This need to process so much data led Devon Energy Corporation to use an HPC application for fast and accurate seismic analysis. It's also why BP, one of the major companies in the sector, has an entire team of seismic researchers and computer scientists at a dedicated Center for High-Performance Computing. BP's computational abilities led to the discovery of an estimated 200 million barrels of oil reserves in a drill field in the Gulf of Mexico. The Italian oil and gas company Eni is said to own the most powerful supercomputer in commercial use, with the ability to simulate 15 years of oil reservoir production in about 28 minutes.
George Milner is a Solutions Architect for Samsung SDS' HPC Managed Services.