

  • Hello everyone

  • My name is Malcolm

  • Today

  • I'm very happy to be here to introduce

  • IEI's new AI inference acceleration card

  • Mustang-V100-MX8

  • V stands for VPU

  • MX8 means there are

  • 8 Intel® Movidius™ Myriad™ X MA2485 VPUs

  • inside this PCIe card

  • With the Intel OpenVINO toolkit

  • you can accelerate deep learning inference

  • with the Mustang-V100-MX8

  • Before we introduce the new product

  • Let's talk about why IEI designed this new product

  • As we know

  • IEI is a leading company in hardware design

  • and QNAP, one of the IEI group companies,

  • has strong software capabilities

  • Because AI chips are getting more powerful

  • and AI algorithms are more accurate than ever

  • we expect demand for AI

  • to increase dramatically in the future

  • That's why IEI cooperates with Intel

  • to design AI acceleration cards

  • and in the meantime

  • QNAP is investing substantial resources in the AI industry

  • From factory inspection

  • and smart retail to medical assistance systems

  • there are more and more AI applications

  • in our everyday life

  • Actually, AI is not going to replace humans at all

  • we use AI to assist humans because it never gets fatigued

  • and its judgment is not affected

  • by other undesired factors

  • Let's have a quick recap of the AI process

  • In traditional machine learning

  • engineers and data scientists

  • have to define the features of the input image.

  • For example, in this case

  • we have to create features for ear shape

  • mouth shape, and tail appearance manually

  • so the model can then classify the input image

  • In the deep learning process

  • we have to prepare tagged training images

  • and use a suitable topology to train the model

  • The features will be created automatically by the algorithm

  • In this case

  • deep learning starts by extracting edges in the first layers

  • then detects larger regions such as the nose

  • ears

  • and legs in deeper layers

  • and finally predicts the input image

  • as, say, 90% dog in this case
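The "90% dog" score at the output is typically produced by a softmax over the final layer. A minimal sketch in plain Python (the logit values below are made up for illustration, not taken from any real network):

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical last-layer logits for classes [dog, cat, other]
logits = [4.0, 1.5, 0.3]
probs = softmax(logits)
print([round(p, 2) for p in probs])  # → [0.9, 0.07, 0.02]
```

The probabilities always sum to one, so a confident network concentrates nearly all of the mass on a single class, which is what "90% dog" reports.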

  • The main differences between

  • traditional machine learning and deep learning are

  • that machine learning needs features highlighted manually

  • by engineers and data scientists

  • while deep learning generates them through the topology itself

  • The second difference is that more correctly labeled data

  • helps deep learning reach higher accuracy

  • That's why in recent years deep learning methods

  • can almost reach human-level judgment

  • in the ImageNet contest

  • This page shows deep learning frameworks

  • Above the OS

  • there are frameworks for deep learning

  • such as TensorFlow

  • Caffe

  • and MXNet

  • Then use a suitable topology

  • to train the deep learning model

  • Let's say for an image classification task

  • you may select AlexNet

  • or GoogLeNet

  • For an object detection task

  • you may choose SSD or YOLO

  • Deep learning has two phases

  • one is training

  • the other is inference

  • Tagged images are used to generate a trained model

  • with a desired topology

  • Then the trained model is deployed onto an inference machine

  • to predict results
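The training/inference split can be illustrated with a toy example (plain Python; the one-feature "model" below is a deliberate simplification, not how a real topology trains):

```python
# Toy "training": learn a decision threshold from tagged examples.
tagged = [(0.2, "cat"), (0.3, "cat"), (0.7, "dog"), (0.9, "dog")]  # (feature, label)

def train(examples):
    """Training phase: produce a 'model' (here just a threshold)."""
    cats = [x for x, y in examples if y == "cat"]
    dogs = [x for x, y in examples if y == "dog"]
    # The trained model is the midpoint between the two class means.
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def infer(model, x):
    """Inference phase: apply the trained model to new input."""
    return "dog" if x > model else "cat"

model = train(tagged)      # done once, offline
print(infer(model, 0.8))   # deployed model predicts a new sample → "dog"
```

The point of the split is that `train` is run once on powerful hardware, while `infer` runs repeatedly, often on an edge device, which is where acceleration cards come in.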

  • Training technology is fairly well established

  • in today's industry

  • but how to perform the inference process efficiently

  • is the most important task today

  • What is OpenVINO toolkit

  • OpenVINO is short for Open Visual Inference

  • & Neural Network Optimization

  • It's an Intel open-source SDK

  • which can deploy models from popular frameworks onto

  • Intel acceleration hardware

  • Its heterogeneous support allows it

  • to work on different acceleration platforms

  • such as CPU, GPU, FPGA, and VPU

  • It also includes a model optimizer

  • to convert pre-trained models

  • from different frameworks into Intel's intermediate representation format.

  • The inference engine is a C++ API

  • that you can call from your inference applications

  • Besides, it also bundles an optimized OpenCV

  • and the Intel® Media SDK to handle the codec work.

  • Here is the OpenVINO toolkit workflow

  • In the first step, it converts the framework models

  • with the model optimizer

  • Then start from the video stream at the top left corner

  • Because every video stream is encoded

  • it first needs to be decoded by the Intel Media SDK

  • Then it needs some image preprocessing

  • to remove the background

  • or enhance the image

  • with morphology methods in OpenCV

  • then call the inference engine to run prediction on the input image

  • After the prediction

  • it will need image post-processing

  • which means using OpenCV to highlight the objects you detect

  • or add some text to the detection result

  • then the final step is to encode the video

  • and send it to another server.
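The decode → preprocess → infer → postprocess → encode loop described above can be sketched with plain Python stubs (the stage functions here are hypothetical placeholders, not the actual Media SDK, OpenCV, or inference-engine calls):

```python
def decode(raw_frame):
    """Stand-in for Intel Media SDK decoding of an encoded frame."""
    return {"pixels": raw_frame, "decoded": True}

def preprocess(frame):
    """Stand-in for OpenCV work: background removal, morphology, resize."""
    frame["preprocessed"] = True
    return frame

def infer(frame):
    """Stand-in for the inference-engine call; returns a fake detection."""
    return {"label": "vehicle", "confidence": 0.97, "box": (10, 20, 80, 60)}

def postprocess(frame, result):
    """Stand-in for drawing the box and label with OpenCV."""
    frame["overlay"] = f"{result['label']} {result['confidence']:.0%}"
    return frame

def encode(frame):
    """Stand-in for re-encoding the frame before sending it to another server."""
    return ("h264", frame["overlay"])

def pipeline(raw_frame):
    frame = preprocess(decode(raw_frame))
    result = infer(frame)
    return encode(postprocess(frame, result))

print(pipeline(b"\x00\x01"))  # → ('h264', 'vehicle 97%')
```

In a real deployment each stage maps to the component the workflow names: Media SDK for the codec steps, OpenCV for pre/post-processing, and the inference engine for prediction.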

  • The OpenVINO toolkit includes many examples

  • and pre-trained models

  • such as object recognition

  • image classification

  • age-gender recognition

  • vehicle recognition

  • which you can download from Intel's official website

  • to get familiar

  • with the OpenVINO toolkit interface

  • In hospitals there is a lot of data

  • that can be analyzed with AI methods

  • such as OCT and MRI images

  • and other physical data from patients

  • Let's take a look by one example.

  • Macular degeneration can happen to senior citizens

  • Once you find you have vision distortion

  • it might be too late for medical treatment

  • In this case, ResNet is used

  • to train on 22,000 OCT images tagged

  • by medical experts

  • and it can predict wet

  • dry, or normal macular conditions with AI methods.

  • We can see from the left-hand-side picture

  • the deep learning application

  • without the OpenVINO toolkit

  • runs at a frame rate of about 1.5 fps.

  • In the right-hand-side picture, using the OpenVINO toolkit

  • the performance is 28 fps

  • which is an almost 20-fold increase in this application.
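A quick sanity check on the quoted frame rates (plain arithmetic, using the numbers as narrated):

```python
# Frame rates quoted for the OCT macular-degeneration demo
fps_without = 1.5   # without the OpenVINO toolkit
fps_with = 28.0     # with the OpenVINO toolkit

speedup = fps_with / fps_without
print(round(speedup, 1))  # → 18.7, i.e. "almost 20 times"
```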

  • Here is the IEI Mustang acceleration card series

  • including CPU, FPGA, and VPU cards

  • The VPU card is the Mustang-V100-MX8

  • They are all based on the OpenVINO toolkit

  • for deep learning inference applications

  • The features of the Mustang-V100-MX8

  • It's a very compact PCIe card

  • which has half-length single slot dimension

  • The power consumption is 30 W

  • which is extremely low

  • 8 Intel® Movidius™ Myriad™ X VPUs inside

  • provide powerful computation capability

  • It's ideal for edge computation.

  • Other features are

  • a wide operating temperature range of

  • 0~55 degrees Celsius

  • and support for multiple cards

  • It can also support popular topologies

  • such as AlexNet, GoogLeNet, SSD, and YOLO

  • Another great feature:

  • the Mustang-V100-MX8 is a

  • decentralized computing platform

  • It can assign different VPUs to different video inputs

  • and even a different topology to each VPU

  • which gives very high flexibility for your applications
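The per-VPU assignment can be pictured as a simple round-robin scheduler (all names below are illustrative; the real dispatch across the eight Myriad X chips is handled by the OpenVINO runtime, not user code like this):

```python
from itertools import cycle

# The card exposes 8 Myriad X VPUs; these names are placeholders.
vpus = [f"VPU{i}" for i in range(8)]

# Hypothetical camera streams, each paired with the topology it needs.
streams = [
    ("cam0", "GoogLeNet"),   # classification
    ("cam1", "YOLO"),        # object detection
    ("cam2", "SSD"),         # object detection
]

# Round-robin: each stream gets its own VPU running its own topology.
assignment = {stream: (vpu, topo)
              for (stream, topo), vpu in zip(streams, cycle(vpus))}
print(assignment["cam1"])  # → ('VPU1', 'YOLO')
```

This is the flexibility the transcript describes: different inputs can run different networks simultaneously because each VPU is an independent inference unit.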

  • The Mustang-V100-MX8 supports topologies

  • such as AlexNet and GoogLeNet

  • which are ideal for image classification

  • while SSD and YOLO are suitable for object detection applications

  • Here is a comparison table of

  • FPGA and VPU acceleration cards

  • which can help users choose

  • the ideal card for their applications

  • The VPU is an ASIC

  • which has less flexibility compared to an FPGA

  • But with its extremely low power consumption

  • and high performance

  • it's very suitable for edge device inference systems

  • Let's introduce IEI's AI acceleration card roadmap

  • CPU and FPGA acceleration cards are already launched

  • and VPU acceleration card will launch in December

  • more and more SKUs

  • such as mini PCIe

  • and M.2 acceleration card interfaces will be ready soon.

  • Here we introduce an ideal

  • IEI system for AI deep learning

  • FLEX-BX200 is a 2U compact chassis

  • with rich I/O, and it can connect to a FLEX PPC

  • to become an IP66

  • highly water- and dust-proof system

  • which is ideal for traditional industrial environments

  • The TANK AIoT development kit

  • is an Intel-approved OpenVINO toolkit platform

  • with the OpenVINO toolkit pre-installed

  • It's an OpenVINO-ready kit

  • so you can develop your

  • deep learning inference applications

  • with IEI VPU and FPGA acceleration cards right away

  • Here is an example

  • In this demo

  • we are using TANK-AIOT Development Kit

  • and the Mustang-V100-MX8 with OpenVINO toolkit

  • to run an Intel pre-trained model of vehicle classification

  • Let's start the demonstration

  • Here we have TANK-AIOT Development Kit

  • combined with our Mustang-V100-MX8

  • to run the OpenVINO pre-trained model

  • for vehicle recognition

  • In this demonstration

  • it's using GoogLeNet and YOLO to do car detection

  • and vehicle model recognition

  • so you can see in the corner of the laptop screen

  • the frame rate is around 190 to 200

  • which means its computation capability is extremely high

  • The Mustang-V100-MX8 features

  • very low power consumption and a very compact size

  • which is an ideal acceleration card

  • for your edge computation device

  • That's today's demonstration

  • Mustang-V100-MX8 is an ideal acceleration card

  • for the AI deep learning applications

  • Hope you can understand more by today's introduction

  • and the demonstrations

  • If you have more questions

  • please contact us

  • or scan the QR code to get more details

  • Thank you. Bye.


IEI Deep Learning Inference Acceleration Card

  • Published by alex on April 27, 2019