PyTorch Adds TorchScript API
Written by Kay Ewbank   
Friday, 16 August 2019

PyTorch 1.2 has been released with a new TorchScript API offering fuller coverage of Python. The new release also has expanded ONNX export support and a standard nn.Transformer module.

PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. It aims to offer a replacement for NumPy that makes use of the power of GPUs, while serving as a deep learning research platform offering maximum flexibility and speed.


The developers say that in this release, the open source ML framework takes a major step forward for production use with the addition of an improved TorchScript environment. The TorchScript compiler converts PyTorch models to a statically typed graph representation, providing a way to optimize and execute models in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python. TorchScript programs can be saved from a Python process and loaded in a process where there is no Python dependency.
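A minimal sketch of that workflow (the module and file names here are illustrative, not from the release notes): compile a small model to TorchScript, save it, and load the saved program back, which can equally be done from a process with no Python dependency, such as C++ via libtorch.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy model used only to illustrate TorchScript conversion."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
scripted = torch.jit.script(model)      # compile to a statically typed graph
scripted.save("tiny_net.pt")            # serialize the compiled program
loaded = torch.jit.load("tiny_net.pt")  # reload without the Python class
x = torch.randn(3, 4)
assert torch.allclose(model(x), loaded(x))
```

The loaded module behaves like the original, but it no longer depends on the Python source that defined `TinyNet`.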

In this release, TorchScript has significantly expanded support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript.
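One way to picture the expanded Python coverage (this is a generic sketch, not an example from the release notes): `torch.jit.script` can compile an ordinary Python function directly, preserving data-dependent control flow in the compiled graph.

```python
import torch

@torch.jit.script
def clamp_sum(x: torch.Tensor) -> torch.Tensor:
    """Sum the input, clamping negative totals to zero."""
    total = x.sum()
    if bool(total < 0):   # data-dependent branch, kept in the graph
        total = torch.zeros(())
    return total

clamp_sum(torch.tensor([1.0, 2.0]))    # sums to 3.0
clamp_sum(torch.tensor([-1.0, -2.0]))  # clamped to 0.0
```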

Support for ONNX export has also been expanded. ONNX, the Open Neural Network eXchange format, is an open format for representing deep learning models, designed to make it easier for AI developers to move models between tools. This release of PyTorch adds full support for exporting ONNX Opset versions 7 to 10, and the constant folding pass has been enhanced to support Opset 10, the latest available version of ONNX. ScriptModule export has also been improved, including support for multiple outputs, tensor factories, and tuples as inputs and outputs.

A standard nn.Transformer module has been included in this release. This is designed for use with neural networks that transform a sequence of elements (words in a sentence, perhaps) into a different sequence. Such networks are often used for translations, and usually use an encoder and a decoder, along with an attention mechanism that looks at the input sequence and decides which parts are the most important. The nn.Transformer module relies entirely on an attention mechanism to draw global dependencies between input and output. It's based on the ideas put forward in the paper "Attention Is All You Need".
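In use, the module takes a source and a target sequence and returns a transformed sequence; the dimensions below are arbitrary small values chosen for illustration, following the module's (sequence, batch, feature) shape convention.

```python
import torch
import torch.nn as nn

# A small Transformer: 32-dim embeddings, 4 attention heads
transformer = nn.Transformer(d_model=32, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(10, 2, 32)  # source: 10 tokens, batch of 2
tgt = torch.randn(7, 2, 32)   # target: 7 tokens, batch of 2
out = transformer(src, tgt)   # output matches the target's shape
```

In a real translation model, `src` and `tgt` would come from embedding layers over the input and output vocabularies, with a final linear layer projecting `out` back to vocabulary logits.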

The final main improvement to this release is an updated set of Domain API libraries. These provide access to common datasets, models, and transforms that can be used to quickly create a baseline, and this release sees three updated DAPI libraries for text, audio, and vision.

The new version is available on GitHub.




More Information

PyTorch Website

PyTorch On GitHub

Related Articles

PyTorch Scholarship Challenge

Pyro Now On Watson Machine Learning

More Efficient Style Transfer Algorithm

ONNX For AI Model Interoperability

Microsoft Cognitive Toolkit Version 2.0

NVIDIA Updates Free Deep Learning Software

TensorFlow - Google's Open Source AI And Computation Engine

AI Goes Open Source To The Tune Of $1 Billion



Last Updated ( Friday, 16 August 2019 )