Hi. My name is Sergey Maidanov. In this video, we'll be talking about Python and how it can help accelerate technical computing and machine learning. I will also highlight some key features of the Intel Distribution for Python. Stay with me to learn more.

Python is known as a popular and powerful language used across various application domains. Being an interpreted language, it has inherited performance constraints that limit its usage to environments that are not very demanding for performance. Python's low efficiency in production environments creates an organizational challenge when companies and institutions need two distinct teams: one that prototypes a numerical model in Python, and another that rewrites it in a different language to deploy it in production. Our team's mission at Intel is to bring Python performance up so that a prototype numerical or machine learning model can be deployed in production without the need to rewrite it in a different programming language. Since our target customers [INAUDIBLE] with development productivity, we aim to deliver performance on Intel architecture out of the box, with relatively small effort on the user side.

Let me briefly outline what Intel Python is and how it brings performance efficiency. We deliver a pre-built Python along with the most popular packages for numerical computing and data science, such as NumPy, SciPy, and Scikit-learn. All are linked with Intel's performance libraries, such as MKL and DAAL, for near-to-native code speeds. Intel Python also comes with productivity tools such as Jupyter notebooks and [INAUDIBLE]. It also ships with the Conda and pip package managers, which allow you to seamlessly install any other package available in the community. For machine learning, our distribution comes with optimized deep learning software, Caffe and Theano, as well as classic machine learning libraries like Scikit-learn and pyDAAL.
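Since the distribution's speedups come largely from linking NumPy against MKL, a quick way to see what you have is to inspect NumPy's build configuration. This is a minimal sketch using the standard `numpy.show_config()` call; the exact text it prints varies by NumPy version and build, so the string match below is a heuristic, not an official API.

```python
# Check which BLAS/LAPACK backend this NumPy build is linked against.
# An MKL-linked build (as shipped with Intel Distribution for Python)
# typically mentions "mkl"; a stock build may report OpenBLAS instead.
import io
from contextlib import redirect_stdout

import numpy as np

buf = io.StringIO()
with redirect_stdout(buf):
    np.show_config()  # prints build/config info to stdout
config = buf.getvalue().lower()

if "mkl" in config:
    print("NumPy appears to be linked against Intel MKL")
else:
    print("NumPy is using a non-MKL BLAS/LAPACK backend")
```

Running this in your prototype and production environments is an easy sanity check that both are actually using the accelerated stack.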
We also package Cython and Numba for tuning performance hotspots to native speeds. And for [INAUDIBLE] performance, we ship mpi4py accelerated with Intel MPI. The Intel Python distribution is available in a variety of options, so don't forget to follow the links below to access it.

Let me illustrate the out-of-the-box performance with the example of a Black-Scholes formula application, run in a prototype environment on an Intel Core-based processor and in production on Intel Xeon and Xeon Phi servers. The bars show the performance we can attain with stock NumPy, illustrated by the dark blue bars, and with the NumPy shipped with Intel Python, represented by the light bars. You can see that Intel's NumPy delivers significantly better performance on the Intel Core-based system, and that it scales on the relatively small problem sizes shown on the horizontal axis as the total number of options to price. This is typical for a prototype environment: you build and test your model on a relatively small problem first, and then deploy it in production to run at full scale on powerful CPUs.

This graph shows how the same application scales in production on an Intel Xeon-based server. You can see that Intel Python delivers much better performance and scales really well to large problems. Next, this graph shows how the same application scales on an Intel Xeon Phi-based system. You can see that Intel Python delivers even better performance on this highly parallel workload, which scales well for large enough problems.

Beyond Intel Python engineering, we work with all major Python vendors and the open source community to make these optimizations broadly accessible. And we encourage you to take advantage of Intel Python's exceptional performance in your own numerical and machine learning projects. Every option to get Intel Python is free for academic and commercial use, so don't forget to follow the links to access it. And thanks for watching.
Intel Distribution for Python Highlights & Overview | Intel Software (published January 14, 2021)