♪ (intro music) ♪

At Google, we've been on a journey to make the training, deployment, management, and scaling of machine-learned models as easy as possible. To that end, we're now delighted to say we've released TensorFlow 2.0.

TensorFlow 2.0 has been driven by feedback from lots of folks in the community, from individual developers to enterprises to researchers, telling us that they want an easy-to-use framework that is both flexible and powerful and supports deployment to any platform. TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state of the art of machine learning and build scalable ML-powered applications.

With TensorFlow 2.0, we strive to make the development of machine-learned applications much easier. With tight integration of Keras into TensorFlow, eager execution by default, and an emphasis on Pythonic function execution instead of sessions, the goal is to make the experience of developing applications with TensorFlow 2.0 as familiar as possible for Python developers.

Our drive toward simpler APIs does not come at the expense of giving you the flexibility to develop advanced customizations for your needs. We have invested heavily in creating a more complete low-level API: we now export all ops that were used internally, and we provide inheritable interfaces for crucial concepts such as variables and checkpoints. This allows you to build on top of the internals of TensorFlow without having to rebuild TensorFlow, for example by creating your own optimizer, like you can see here.

We have standardized on the SavedModel file format so you can run models on a variety of runtimes, including the cloud, the web, browsers, Node.js, mobile, and embedded systems. This allows you not just to run your models with TensorFlow but also to deploy them to the web and the cloud with TensorFlow Extended. You can use them on mobile and embedded systems with TensorFlow Lite, and you can train and run them in the browser or on Node.js with TensorFlow.js.

With the distribution strategy API, you'll be able to distribute training with minimal code changes and get great out-of-the-box performance. It supports distributed training with Keras's model.fit as well as with custom training loops. Multi-GPU support is available, and, of course, TensorFlow 2.0 also supports TensorRT for fast inference on GPUs. Check out the guide for more details.

Another feedback item we heard was that making access to data easy would be a massive benefit, since much of the code that developers write goes into managing and preparing their data. So we have expanded TensorFlow Datasets, giving a standard interface to a variety of diverse datasets, including those containing images, text, video, and much more.

While the traditional session-based programming model is still maintained, we recommend using regular Python with eager execution. The tf.function decorator can be used to convert your code into graphs, which can then be executed remotely, serialized, and optimized for performance. This is complemented by AutoGraph, which is built into tf.function and can convert regular Python control flow directly into TensorFlow control flow.

And, of course, if you've used TensorFlow 1.x and you're looking for a migration path to 2.0, we've published a guide that includes an automatic conversion script to help you get started.
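The transcript refers to an on-screen example of building your own optimizer on top of TensorFlow's inheritable interfaces, but that code is not part of the captions. The following is only a minimal sketch of what such a subclass might look like, assuming the tf.keras.optimizers.Optimizer base class and a plain gradient-descent update; the class name SimpleSGD and its hyperparameter handling are illustrative, not taken from the video.

```python
import tensorflow as tf

class SimpleSGD(tf.keras.optimizers.Optimizer):
    """A bare-bones gradient-descent optimizer built on the inheritable Optimizer interface."""

    def __init__(self, learning_rate=0.01, name="SimpleSGD", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)

    def _resource_apply_dense(self, grad, var):
        # Dense update: var <- var - lr * grad.
        lr = tf.cast(self._get_hyper("learning_rate"), var.dtype)
        return var.assign_sub(lr * grad)

    def _resource_apply_sparse(self, grad, var, indices):
        # Sparse update: scatter the scaled gradient into the variable.
        lr = tf.cast(self._get_hyper("learning_rate"), var.dtype)
        return self._resource_scatter_add(var, indices, -lr * grad)

    def get_config(self):
        # Make the optimizer serializable alongside the model.
        config = super().get_config()
        config.update({"learning_rate": self._serialize_hyperparameter("learning_rate")})
        return config

# Usage sketch: model.compile(optimizer=SimpleSGD(0.1), loss="mse")
```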
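To make the SavedModel point concrete, here is a hedged sketch of exporting a Keras model with tf.saved_model.save and converting it for mobile with the TensorFlow Lite converter. The toy model and the /tmp paths are placeholders, not anything shown in the video.

```python
import tensorflow as tf

# A toy model standing in for a real trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Export to the SavedModel format shared by TF Serving, TF Lite, and TensorFlow.js tooling.
tf.saved_model.save(model, "/tmp/my_model")

# Convert the SavedModel for mobile and embedded deployment with TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_model")
tflite_model = converter.convert()
with open("/tmp/model.tflite", "wb") as f:
    f.write(tflite_model)
```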
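For the distribution strategy API mentioned above, a minimal sketch of distributed training with tf.distribute.MirroredStrategy and Keras's model.fit might look like the following; the tiny model and the random NumPy arrays are placeholders for a real workload and input pipeline.

```python
import numpy as np
import tensorflow as tf

# Mirror variables across all locally available GPUs (falls back to CPU if none are found).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Model creation and compilation go inside the strategy scope.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for a real tf.data pipeline.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32)
```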
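The TensorFlow Datasets point can be illustrated with a short example; it assumes the separate tensorflow_datasets package is installed and uses MNIST purely as a stand-in for the many available datasets.

```python
import tensorflow_datasets as tfds

# Load MNIST through the standard interface; as_supervised yields (image, label) pairs.
dataset = tfds.load("mnist", split="train", as_supervised=True)
dataset = dataset.shuffle(1024).batch(32).prefetch(1)

for images, labels in dataset.take(1):
    print(images.shape, labels.shape)  # (32, 28, 28, 1) (32,)
```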
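Finally, the tf.function and AutoGraph discussion can be made concrete with a small sketch: plain Python while/if statements inside a decorated function are converted into TensorFlow control flow. The Collatz example below is illustrative and not taken from the video.

```python
import tensorflow as tf

@tf.function
def collatz_steps(n):
    # AutoGraph rewrites the Python while/if below into tf.while_loop and tf.cond.
    steps = tf.constant(0)
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(tf.constant(6)))  # tf.Tensor(8, shape=(), dtype=int32)
```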
If you want to learn how to build applications using TensorFlow 2.0, take a look at the Effective TensorFlow 2.0 guide and then try out the online courses that we've created together with the folks from deeplearning.ai and Udacity. For more about TensorFlow 2.0, including how to download it and get started coding machine-learned applications, check out the official TensorFlow site at tensorflow.org.

♪ (ending music) ♪