Huawei launches world’s most powerful AI processor


Huawei officially launched the world’s most powerful AI processor, the Ascend 910, along with MindSpore, an all-scenario AI computing framework. The Ascend 910 is a new AI processor that belongs to Huawei’s Ascend-Max series of chipsets.

“We have been making steady progress since we announced our AI strategy in October last year,” said Eric Xu, Huawei’s Rotating Chairman. “Everything is moving forward according to plan, from R&D to product launch. We promised a full-stack, all-scenario AI portfolio. And today we delivered, with the release of Ascend 910 and MindSpore. This also marks a new stage in Huawei’s AI strategy.”

For half-precision floating point (FP16) operations, Ascend 910 delivers 256 TeraFLOPS. For integer precision calculations (INT8), it delivers 512 TeraOPS. Despite this unrivaled performance, Ascend 910’s maximum power consumption is only 310W, well below its planned spec of 350W.
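As a back-of-envelope check on what these figures imply (the arithmetic below is ours, not Huawei’s, and uses peak datasheet numbers; real workloads will differ):

```python
# Rough efficiency figures implied by the quoted Ascend 910 specs.
fp16_tflops = 256      # half-precision (FP16) throughput, TeraFLOPS
int8_tops = 512        # integer (INT8) throughput, TeraOPS
max_power_w = 310      # measured maximum power consumption, watts
planned_power_w = 350  # originally planned power spec, watts

fp16_per_watt = fp16_tflops / max_power_w           # ~0.83 TFLOPS per watt
int8_per_watt = int8_tops / max_power_w             # ~1.65 TOPS per watt
power_savings = 1 - max_power_w / planned_power_w   # ~11% under the planned spec

print(f"{fp16_per_watt:.2f} TFLOPS/W, {int8_per_watt:.2f} TOPS/W, "
      f"{power_savings:.1%} below planned power")
```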

“Ascend 910 performs much better than we expected,” said Xu. “Without a doubt, it has more computing power than any other AI processor in the world.”

Ascend 910 is used for AI model training. In a typical training session based on ResNet-50, the combination of Ascend 910 and MindSpore is about two times faster at training AI models than other mainstream training cards using TensorFlow.

Huawei also launched MindSpore today, an AI computing framework that supports development for AI applications in all scenarios. AI computing frameworks are critical to making AI application development easier, making AI applications more pervasive and accessible, and ensuring privacy protection.

MindSpore marks significant progress towards these goals. With privacy protection more important than ever, support for all scenarios is essential for enabling secure, pervasive AI, and it is a key capability of the MindSpore framework, which readily adapts to different deployment needs. Whether an environment’s resource budget is large or tightly constrained, MindSpore supports it.

MindSpore helps ensure user privacy because it only deals with gradient and model information that has already been processed. It doesn’t process the data itself, so private user data can be effectively protected even in cross-scenario environments. In addition, MindSpore has built-in model protection technology to ensure that models are secure and trustworthy.

The MindSpore AI framework is adaptable to all scenarios – across all devices, edge, and cloud environments – and provides on-demand cooperation between them. Its “AI Algorithm As Code” design concept allows developers to develop advanced AI applications with ease and train their models more quickly.

In a typical neural network for natural language processing (NLP), MindSpore has 20% fewer lines of core code than leading frameworks on the market, and it helps developers raise their efficiency by at least 50%.

Through framework innovation, as well as co-optimization of MindSpore and Ascend processors, Huawei’s solution can help developers more effectively address complex AI computing challenges and the need for a diverse range of computing power for different applications. This results in stronger performance and more efficient execution. In addition to Ascend processors, MindSpore also supports GPUs, CPUs, and other types of processors.

When introducing MindSpore, Xu emphasized Huawei’s commitment to helping build a more robust and vibrant AI ecosystem. “MindSpore will go open source in the first quarter of 2020. We want to drive broader AI adoption and help developers do what they do best.”

Huawei’s AI portfolio covers all deployment scenarios, including public cloud, private cloud, edge computing, IoT industry devices, and consumer devices. The portfolio is also full-stack: It includes the Ascend IP and chip series, the CANN chip enablement layer, the MindSpore training and inference framework, and the ModelArts application enablement platform.

Huawei defines AI as a new general-purpose technology, like railroads and electricity in the 19th century, and cars, computers, and the Internet in the 20th century. The company believes that AI will be used in almost every sector of the economy.

According to Xu, AI is still in its early stages of development, and there are a number of gaps to close before AI can become a true general-purpose technology. Huawei’s AI strategy is designed to bridge these gaps and speed up adoption on a global scale.

At Huawei Connect 2018, Huawei announced its AI strategy and full-stack, all-scenario AI portfolio, including the Ascend 310 AI processor and ModelArts, a platform that provides full-pipeline model production services.

Ascend 310 is Huawei’s first commercial AI System on a Chip (SoC) in the Ascend-Mini series. With a maximum power consumption of 8W, Ascend 310 delivers 16 TeraOPS in integer precision (INT8) and 8 TeraFLOPS in half precision (FP16), making it the most powerful AI SoC for edge computing. It also comes with a 16-channel FHD video decoder.

Since its launch, Ascend 310 has already seen wide adoption in a broad range of products and cloud services. For example, Huawei’s Mobile Data Center (MDC), which employs Ascend 310, has been used by many leading automakers in shuttle buses, new-energy vehicles, and autonomous driving.

The Ascend 310-powered Atlas series acceleration card and server are now part of dozens of industry solutions (e.g., smart transportation and smart grid) developed by dozens of partners.

Ascend 310 also enables Huawei Cloud services like image analysis, optical character recognition (OCR), and intelligent video analysis. There are more than 50 APIs for these services. At present, the number of API calls per day has exceeded 100 million, and this figure is estimated to hit 300 million by the end of 2019. More than 100 companies are using Ascend 310 to develop their own AI algorithms.

Huawei’s ModelArts provides model development services spanning the full pipeline, from data collection and model development to model training and deployment. At present, more than 30,000 developers are using ModelArts to handle 4,000+ training tasks per day (for a total of 32,000 training hours). Among these tasks, 85% are related to visual processing, 10% are for processing audio data, and 5% are related to machine learning.
