Hi, I'm Priyanka from Intel.
In this video, we walk you through the various components
of the OpenVINO toolkit.
Deep learning inference at the edge
must be optimized for power and performance.
Developers can learn how to write optimized code
for each hardware accelerator,
but achieving proficiency takes a long time.
To accelerate deep learning application development,
Intel has introduced the OpenVINO toolkit,
or the Open Visual Inference and Neural Network Optimization
toolkit.
This convolutional neural network-based toolkit
is designed to increase performance, reduce power consumption,
maximize hardware utilization, and reduce time to market
for computer vision solutions.
Now, I will dive into the different components of the OpenVINO
toolkit.
It includes tools for deep learning,
as well as for traditional computer vision.
Let's start with the deep learning deployment toolkit.
It is a cross-platform tool to accelerate deep
learning inference performance.
It includes the Model Optimizer and the Inference Engine.
The Model Optimizer takes pre-trained models
from various frameworks, such as Caffe, TensorFlow, and MXNet,
and converts them into an intermediate representation,
that is, IR files.
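As an illustration, here is a minimal sketch of driving the Model Optimizer from Python by invoking its mo.py command-line script; the model file name and output directory are hypothetical placeholders, not something shown in the video.

```python
# Hypothetical sketch: convert a pre-trained Caffe model into IR files
# (an .xml topology plus a .bin weights file) via the Model Optimizer.
import subprocess

subprocess.run(
    [
        "python", "mo.py",                            # Model Optimizer entry script
        "--input_model", "squeezenet1.1.caffemodel",  # placeholder pre-trained model
        "--output_dir", "ir/",                        # IR files are written here
    ],
    check=True,  # raise if the conversion fails
)
```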
The Inference Engine takes these IR files as input,
and then, using a unified API, it deploys the computer vision
application on different platforms,
like CPU, integrated GPU, the Movidius Neural Compute
Stick, or FPGAs, without making any code changes.
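A minimal sketch of that unified API, assuming the classic openvino.inference_engine Python bindings; the IR paths and input shape are placeholders. Note that only the device_name string changes between hardware targets.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Read the IR files produced by the Model Optimizer (placeholder paths).
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
input_name = next(iter(net.input_info))

# Target a device by name; swapping "CPU" for "GPU" or "MYRIAD"
# requires no other code changes.
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference on a dummy input (the shape is a placeholder).
result = exec_net.infer({input_name: np.zeros((1, 3, 224, 224), np.float32)})
```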
The OpenVINO toolkit also includes OpenCV and OpenVX
for traditional computer vision.
It includes optimized OpenCV libraries
for Intel architecture.
Additionally, a new Intel Photography Vision Library
with face detection, face recognition, blink detection,
and smile detection is included.
The OpenVINO toolkit also includes
the Intel Media SDK to support hardware-optimized
encoding, decoding, and pre-processing of the input
image or video.
It supports OpenCL to run the application
on various platforms and add custom layers.
In addition to all these components,
this package also gives you the Intel FPGA Deep Learning
Acceleration suite, which includes
precompiled bitstreams and the Intel FPGA SDK for OpenCL.
Let's now look at how you can use OpenVINO toolkit
components to develop an optimized computer vision
application.
First, the OpenVINO toolkit takes pre-trained models
from frameworks such as Caffe, TensorFlow, and MXNet.
The Model Optimizer imports these models
and converts them into the intermediate representation
to be processed by the Inference Engine.
Then, based on your choice, the Inference Engine API
runs the application on various hardware types,
such as CPU, integrated GPU, Movidius Neural Compute Stick,
or FPGA.
OpenVINO also supports heterogeneous execution,
with fallback to a different device for custom
or unsupported layers.
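Continuing the sketch above, heterogeneous execution is selected with a compound device name: HETERO tries the first device and falls back to the next for layers it cannot run.

```python
# Layers unsupported on the FPGA fall back to the CPU automatically.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")
```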
It also features asynchronous execution,
which allows you to perform other tasks while the hardware
accelerator is crunching the current frame.
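In the same classic API, asynchronous execution looks roughly like this; `frame` is a placeholder for your preprocessed input.

```python
# Kick off inference without blocking the main thread.
exec_net.start_async(request_id=0, inputs={input_name: frame})

# ... grab or preprocess the next frame, render the previous result ...

# Block until the request finishes; a return code of 0 means OK.
if exec_net.requests[0].wait(-1) == 0:
    output = exec_net.requests[0].output_blobs
```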
The OpenVINO toolkit comes with samples that
showcase basic functionality.
These samples use Intel pre-trained detection
and recognition models for various tasks,
from age and gender recognition to multiple object detection,
and even head-pose estimation.
We also offer a model downloader to access
several public pre-trained inference models, such as SSD,
VGG, DenseNet, and SqueezeNet.
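A hedged sketch of fetching one of those public models with the Open Model Zoo downloader script; the script location and model name are assumptions.

```python
import subprocess

# Hypothetical invocation of the Open Model Zoo downloader script.
subprocess.run(
    ["python", "downloader.py", "--name", "squeezenet1.1"],
    check=True,
)
```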
To learn how to install the OpenVINO toolkit,
follow the link provided to the installation page.
The product website has a growing set of resources
to help you ramp up faster, including documentation,
training, and a support forum.
Thanks for watching.
In the next video, we cover the Model Optimizer in detail.