Inspur HF18000G5 high-end all-flash storage is designed for medium- and large-sized enterprises and is oriented to structured and unstructured data.
Learn what's at the heart of new hybrid cloud platforms. Extend the cloud experience across your business in a way that's open, flexible, and hybrid by design.
Read the INSPUR paper

November 18-21 | Colorado Convention Center, Denver, USA
Thinking about attending SC19? Register to get a free pass and join Inspur at the conference for all things HPC.

Inspur AIStation Empowers Efficient GPU Resource Sharing
AIStation is an Inspur-developed AI development platform specifically designed to address these issues by offering an easy-to-set-up, refined GPU resource scheduling system.
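AIStation's scheduling internals aren't detailed here, but the core idea of GPU resource sharing can be illustrated with a minimal sketch: jobs request fractional GPU shares and a scheduler packs them onto physical devices. Everything below (the names, the first-fit policy) is a hypothetical illustration of the general technique, not AIStation's actual API or algorithm.

    from dataclasses import dataclass, field

    @dataclass
    class GPU:
        name: str
        free_share: float = 1.0          # fraction of the GPU still unallocated
        jobs: list = field(default_factory=list)

    @dataclass
    class Job:
        name: str
        gpu_share: float                 # e.g. 0.5 = half a GPU

    def schedule(jobs, gpus):
        # First-fit placement of fractional-GPU jobs (hypothetical policy).
        pending = []
        for job in sorted(jobs, key=lambda j: j.gpu_share, reverse=True):
            target = next((g for g in gpus if g.free_share >= job.gpu_share), None)
            if target is None:
                pending.append(job)      # queue until a share frees up
                continue
            target.free_share -= job.gpu_share
            target.jobs.append(job.name)
        return pending

    gpus = [GPU("gpu0"), GPU("gpu1")]
    waiting = schedule([Job("train-a", 0.5), Job("infer-b", 0.25), Job("train-c", 1.0)], gpus)
    for g in gpus:
        print(g.name, g.jobs, f"free={g.free_share:.2f}")

Packing larger requests first reduces fragmentation; real platforms layer quotas, preemption, and isolation on top of this basic allocation step.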
Inspur joined with Xishuangbanna National Nature Reserve to develop an extensive technology system for the conservation of some 300 Asian elephants in Yunnan, China.
Uncovering the ancient past with Inspur AI and biomolecular archaeology
Inspur teams up with DNA lab to trace the origin of human civilization
Archaeology and AI Unlock the Secrets of Our Ancient History
With the help of today's intelligent computing, researchers can more easily learn about our world by critically examining the artifacts of the past.
By Arthur Kang, SPEC OSSC Member / OSG ML Chair, Performance Architect, Inspur Information
Training Yuan 1.0 – a Massive Chinese Language Model with 245.7 Billion Parameters
The advanced language capabilities of Yuan 1.0 necessitate a large number of parameters, which brings many challenges in model training and deployment. This article focuses on the computing challenges of Yuan 1.0 and the training methods used.
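To see why the parameter count alone creates a computing challenge, a back-of-the-envelope memory estimate helps. The sketch below assumes FP16 weights and gradients with FP32 Adam-style optimizer states, a common mixed-precision setup; Yuan 1.0's exact training configuration may differ.

    PARAMS = 245.7e9                      # Yuan 1.0 parameter count

    weights_fp16 = PARAMS * 2             # bytes: 2 bytes per FP16 weight
    grads_fp16 = PARAMS * 2               # gradients, same precision
    optimizer_fp32 = PARAMS * 4 * 3       # FP32 master weights + two Adam moment buffers

    total_bytes = weights_fp16 + grads_fp16 + optimizer_fp32
    print(f"Model/optimizer state: {total_bytes / 2**40:.1f} TiB")       # ~3.6 TiB
    print(f"Minimum 80 GB GPUs just for state: {total_bytes / 80e9:.0f}")  # ~49

Even before counting activations and communication buffers, this state cannot fit on any single accelerator, which is why training a model at this scale requires partitioning parameters, gradients, and optimizer states across a large GPU cluster.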
Performance Evaluation of Weather and Climate Prediction Applications on Intel's Ice Lake Processor
The amazing enhancements of Intel's new third-generation Xeon Scalable processors (Ice Lake) …
A Deep Analysis on Optimal Single Server Performance of ResNet50 in MLPerf Training Benchmarks
Many factors can influence training performance. In this article, we use the ResNet50 model from the MLPerf Training v1.0 benchmarks as an example to describe how to improve training speed through hardware and software optimization.
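The article's full tuning recipe isn't reproduced here, but two of the most common software-side levers for single-server ResNet50 training in PyTorch are automatic mixed precision and the channels-last memory format. The snippet below uses standard PyTorch APIs, though it is a generic sketch rather than the benchmark submission's actual configuration.

    import torch
    import torchvision

    # channels_last lets cuDNN select faster NHWC convolution kernels on recent GPUs
    model = torchvision.models.resnet50().cuda().to(memory_format=torch.channels_last)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients don't underflow
    criterion = torch.nn.CrossEntropyLoss()

    def train_step(images, labels):
        images = images.cuda(non_blocking=True).to(memory_format=torch.channels_last)
        labels = labels.cuda(non_blocking=True)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():   # run the forward pass in mixed precision
            loss = criterion(model(images), labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        return loss.item()

On top of these, hardware-side choices such as data-loading throughput and per-GPU batch size typically matter as much as the model code itself.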