Get Started with the Intel Deep Learning SDK | Intel Software

  • Hi.

  • I'm Megan [? Arau. ?] Today I will be showing you the Deep

  • Learning SDK from Intel.

  • Data scientists and software developers

  • can use the Deep Learning SDK to simplify installation, easily

  • prepare models using popular deep learning

  • frameworks on Intel hardware, and optimize

  • performance for training and inference

  • on Intel architecture.

  • The Intel Deep Learning SDK is a free set of tools to develop, train, and deploy deep learning solutions.

  • In today's presentation, you will see a live demonstration of the DL SDK training tool and learn how to visually set up, queue, and train datasets using deep learning topologies like LeNet, AlexNet, and GoogLeNet on frameworks like Caffe, optimized for performance on Intel architecture.

  • Now let's take a look at the training tool.

  • You can download the DL SDK installer for Windows or Mac

  • from the link provided in the description below,

  • and install it using the instructions from the user

  • guide.

  • Launch the tool on your host machine using the IP address

  • and port of the server on which the DL SDK is installed.
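
If, as the IP-and-port launch step suggests, the tool is opened from a web browser, the URL would look something like the following (the address and port here are hypothetical, not defaults):

    http://<server-ip>:<port>
    # e.g. http://192.168.1.20:8080  (hypothetical address and port)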

  • Once launched, the home screen looks like this,

  • and it provides the ability to upload images

  • and create and train a model.

  • Let's take a look at an example.

  • I will use the MNIST dataset for the demonstration.

  • Before uploading the dataset, make sure that your folder structure has the labels at the highest level, with the data corresponding to each label within them.
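
As a minimal sketch of that layout (folder and file names here are illustrative only), the raw MNIST data would be organized with one top-level folder per label:

    mnist_raw/
        0/
            img_0001.png
            img_0002.png
            ...
        1/
            img_0001.png
            ...
        9/
            img_0001.png
            ...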

  • Now the first step is to upload the raw data.

  • Go to the upload tab, create a subfolder in the root path provided, select your data file, and upload.

  • Once uploaded, you can copy the source path for your dataset from the history tab.

  • Now, let's create a data set.

  • Provide a name for the dataset, a brief description, and the percentage of data from the raw dataset that you would like to use for training and validation.

  • The MNIST dataset has over 60,000 items.

  • You can experiment by varying the percentages.
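
As a hedged illustration of what the percentages translate to (a 75/25 split is only an example here, not the tool's default), the resulting item counts for a 60,000-item raw dataset would be:

    # Hypothetical train/validation split of a 60,000-item raw dataset.
    total_items = 60000
    train_pct, val_pct = 75, 25                    # example percentages chosen in this tab

    train_count = total_items * train_pct // 100   # 45,000 items for training
    val_count = total_items * val_pct // 100       # 15,000 items for validation
    print(train_count, val_count)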

  • The MNIST dataset images are all 28 by 28 grayscale.

  • So make sure you select the right option in the image

  • processing tab.

  • If using color images with topologies like AlexNet and GoogLeNet, the processing options must be set accordingly.
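
The tool applies this preprocessing itself; purely as a sketch of what "resize to 28 by 28 grayscale" means (using Pillow, which is not part of the SDK, and a hypothetical file name):

    from PIL import Image

    # Illustrative only: the DL SDK training tool does this conversion internally.
    img = Image.open("digit.png")    # hypothetical input image
    img = img.convert("L")           # "L" = single-channel (grayscale)
    img = img.resize((28, 28))       # LeNet-style 28x28 input size
    img.save("digit_28x28.png")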

  • Check out the user guide to learn more

  • about the other processing options.

  • Now go to the database options tab and choose the database backend and the image encoding.
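
Caffe-style training databases are commonly LMDB files holding encoded images. As a rough sketch only (the key scheme and map size are arbitrary, the tool builds its database for you, and real Caffe databases usually store serialized Datum records rather than raw image bytes):

    import lmdb

    # Rough sketch: write encoded (e.g. PNG) image bytes into an LMDB store.
    env = lmdb.open("train_db", map_size=1 << 30)    # 1 GB map size (arbitrary)
    with env.begin(write=True) as txn:
        with open("digit_28x28.png", "rb") as f:
            txn.put(b"00000000_label_7", f.read())   # hypothetical key naming
    env.close()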

  • Now you're ready to create the data set.

  • Once successful, you can visualize the number of data items in each label for the training and the validation datasets.

  • Let's now go to the model tab, select the right topology, and train the model.

  • LeNet is one of the topologies that works best for handwritten digit recognition, so we select it.

  • AlexNet and GoogLeNet work best for color image datasets like CIFAR-10 and [INAUDIBLE].

  • You can also edit the built-in topologies using the new custom

  • topology feature.

  • Edit the topology file in the text field and save it.

  • It's now ready to use to train the model.
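
The built-in topologies are Caffe network definitions. As a hedged sketch (the layer parameters below follow the classic Caffe LeNet example, not necessarily the SDK's bundled file), the LeNet stack can be written out with Caffe's Python NetSpec and printed as the prototxt text you would edit:

    import caffe
    from caffe import layers as L, params as P

    # LeNet-style topology sketch; data source and batch size are placeholders.
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=64, backend=P.Data.LMDB, source="train_db",
                             transform_param=dict(scale=1.0 / 255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,
                            weight_filler=dict(type="xavier"))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50,
                            weight_filler=dict(type="xavier"))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type="xavier"))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type="xavier"))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)

    print(n.to_proto())   # the prototxt text that could be pasted into the text field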

  • Additional data transformation capabilities exist within the DL SDK for creating datasets.

  • We do not cover them here, but you can learn more from the user guide.

  • Next, we move to the parameter selection tab.

  • Based on the topology chosen, the hyperparameters are all set to optimal values by default.

  • But you can experiment by changing some of these values to see how the accuracy and the loss function change.
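
As an illustration of the kind of values exposed on this tab (these mirror the classic Caffe LeNet solver settings and are not claimed to be the tool's defaults):

    # Hypothetical hyperparameter set, loosely based on Caffe's LeNet solver example.
    hyperparameters = {
        "base_learning_rate": 0.01,
        "momentum": 0.9,
        "weight_decay": 0.0005,
        "lr_policy": "inv",     # learning-rate decay schedule
        "batch_size": 64,
        "epochs": 50,           # example value only
    }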

  • Now, run the model.

  • After all 50 [INAUDIBLE], we see that 100% training accuracy and approximately 98% validation accuracy are achieved.

  • You can validate the model by going to the testing tab and inputting a random digit that is not part of the validation dataset to see what the model returns as the potential [INAUDIBLE].

  • For each label category, you can also see the number of hits

  • and misses.

  • After training is complete, the set of all Caffe files for the trained model can be downloaded and used on the deployment platform, for example, a smart camera or other mobile platform.

  • The real-time data fed to the model on the deployment platform can then be used to predict an outcome based on the trained model.
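
As a hedged sketch of deployment-side inference with the downloaded Caffe files (file names, blob names, and input scaling are assumptions based on a typical Caffe LeNet layout, not on the SDK's actual output):

    import numpy as np
    import caffe
    from PIL import Image

    # Illustrative inference with hypothetical file and blob names.
    net = caffe.Net("deploy.prototxt", "lenet_trained.caffemodel", caffe.TEST)

    # Load a 28x28 grayscale digit and scale it the same way as during training.
    img = np.array(Image.open("digit_28x28.png").convert("L"), dtype=np.float32) / 255.0
    net.blobs["data"].data[...] = img.reshape(1, 1, 28, 28)

    out = net.forward()
    print("Predicted digit:", int(out["prob"].argmax()))   # assumes a "prob" output blob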

  • Thanks for watching.

  • You can learn more about the DL SDK on IDZ and download the tool from the link below.

  • Make sure to like and share this video,

  • and subscribe to the Intel software channel.
