QNAP Systems introduced two computing accelerator cards designed for AI deep learning inference: the Mustang-V100 (VPU-based) and the Mustang-F100 (FPGA-based). Users can install these PCIe-based accelerator cards into an Intel-based server/PC or a QNAP NAS to tackle the demanding workloads of modern computer vision and AI applications in manufacturing, healthcare, smart retail, video surveillance, and more.
“Computing speed is a major aspect of the efficiency of AI application deployment,” said Dan Lin, Product Manager of QNAP. “While the QNAP Mustang-V100 and Mustang-F100 accelerator cards are optimized for the OpenVINO architecture and can extend workloads across Intel hardware with maximized performance, they can also be utilized with QNAP’s OpenVINO Workflow Consolidation Tool to fulfill computational acceleration for deep learning inference in the shortest time.”
Both the Mustang-V100 and Mustang-F100 provide economical acceleration solutions for AI inference, and they can also work with the OpenVINO toolkit to optimize inference workloads for image classification and computer vision. The OpenVINO toolkit, developed by Intel, helps fast-track the development of high-performance computer vision and deep learning inference into vision applications. It includes the Model Optimizer and the Inference Engine: the Model Optimizer converts pre-trained deep learning models (from frameworks such as Caffe and TensorFlow) into an intermediate representation (IR), which the Inference Engine then executes across heterogeneous Intel hardware (such as CPU, GPU, FPGA, and VPU).
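The two-stage workflow described above, converting a pre-trained model to IR with the Model Optimizer and then executing it with the Inference Engine on a chosen device, can be sketched with OpenVINO's classic Inference Engine Python API. The function, file names, and input-blob name below are illustrative assumptions, and running it requires the `openvino` package plus a supported device:

```python
# Sketch: loading an OpenVINO IR model and running one inference on a chosen
# Intel device, using the classic Inference Engine Python API.
# File names and the input-blob name are placeholders, not QNAP-specific.

def run_inference(model_xml, model_bin, input_name, input_blob, device="CPU"):
    """Run a single synchronous inference request on one Intel device.

    `device` follows OpenVINO plugin naming, e.g. "CPU", "GPU",
    "MYRIAD"/"HDDL" (VPU cards) or "FPGA" (FPGA cards). Which plugin name
    maps to the Mustang-V100 or Mustang-F100 is an assumption here.
    """
    # Deferred import so this sketch loads even without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    # Read the IR pair (.xml topology, .bin weights) produced by the
    # Model Optimizer.
    net = ie.read_network(model=model_xml, weights=model_bin)
    # Compile the network for the target device's plugin.
    exec_net = ie.load_network(network=net, device_name=device)
    # Synchronous inference; returns a dict of output blobs.
    return exec_net.infer(inputs={input_name: input_blob})
```

A model trained in Caffe or TensorFlow would first be converted by the Model Optimizer (e.g. `mo.py --input_model frozen_model.pb`), producing the `.xml`/`.bin` IR pair that this function consumes.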
As QNAP NAS evolves to support a wider range of applications (including surveillance, virtualization, and AI), the combination of large storage capacity and PCIe expandability is advantageous for its usage potential in AI. QNAP has developed the OpenVINO Workflow Consolidation Tool (OWCT), which leverages Intel OpenVINO toolkit technology. When used with the OWCT, an Intel-based QNAP NAS presents an ideal Inference Server solution to assist organizations in quickly building inference systems. AI developers can deploy trained models on a QNAP NAS for inference, and install the Mustang-V100 or Mustang-F100 accelerator card to achieve optimal inference performance.
QNAP NAS now supports the Mustang-V100 and Mustang-F100 with the latest QTS 4.4.0 operating system.