An open source toolkit that makes it easier to write once, deploy anywhere.
Deploy High-Performance, Deep Learning Inference
A new version of the Intel® Distribution of OpenVINO™ toolkit is now available. The 2023.0 release makes it easier for developers everywhere to start innovating. This release empowers developers with new features, performance enhancements, broader model support, greater device portability, and higher inference performance with fewer code changes.
Sign Up for OpenVINO Toolkit News
Keep up-to-date on the latest product releases, news, and tips.
How It Works
Convert and optimize models trained in popular frameworks like TensorFlow*, PyTorch*, and Caffe*. Deploy across a mix of Intel hardware and environments: on-premises, on-device, in the browser, or in the cloud.