TorchFusion Utils

A PyTorch helper library for mixed precision training, initialization, metrics, and more utilities to simplify training

TorchFusion Utils is a subset of the TorchFusion project by DeepQuest AI.

It is a research-centered framework aimed at providing advanced functionality to simplify the training of PyTorch models without sacrificing transparency and control over your training flow. Unlike higher-level frameworks, TorchFusion Utils can be easily plugged into any standard PyTorch code. It is easy to install and provides a number of key features.

Installation

TorchFusion-Utils is a helper library for PyTorch, so you first need to install PyTorch from pytorch.org

Install TorchFusion-Utils

pip3 install torchfusion-utils --upgrade
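Optionally, you can verify that PyTorch can see a CUDA GPU before using the mixed precision features, since they rely on Nvidia GPUs. The check below uses only standard PyTorch calls.

import torch

# print the installed PyTorch version and check for a CUDA GPU
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))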

Features

Mixed Precision Training

Mixed precision training (Micikevicius et al., 2017) is a technique for accelerating the training and inference of deep learning models by taking advantage of the Tensor Cores on modern Nvidia GPUs.

TorchFusion Utils enables you to take advantage of over 100 teraflops of performance on a single Nvidia V100; the default single precision mode of PyTorch gives you access to only about 14 teraflops.

Using mixed precision training in PyTorch is a breeze with TorchFusion-Utils; all it takes is three lines of code!

from torchfusion_utils.fp16 import convertToFP16
# convert your model and optimizer to mixed precision mode
model, optim = convertToFP16(model, optim)
# in your batch loop, replace loss.backward() with optim.backward(loss)
optim.backward(loss)
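For context, here is a minimal training-loop sketch showing where optim.backward(loss) fits. It assumes the model and optim returned by convertToFP16 above, a data_loader and loss function you define yourself, and that the wrapped optimizer still exposes the usual zero_grad() and step() methods.

import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

for x, y in data_loader:
    x, y = x.cuda(), y.cuda()
    optim.zero_grad()
    predictions = model(x)
    loss = loss_fn(predictions, y)
    # the only change from a standard loop: backward through the optimizer wrapper
    optim.backward(loss)
    optim.step()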

Saving/Exporting your Mixed Precision Model

During or after training, you may need to save your model or convert it to ONNX or LibTorch. Before doing this, you should convert your model back to full FP32 mode. Example below.

from torchfusion_utils.fp16 import convertToFP32
import torch
# create an FP32 clone of the mixed precision model
full_model = convertToFP32(model)
# save the full model
torch.save(full_model, "model.pth")
# export to ONNX using a dummy input of the expected shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(full_model, dummy_input, "model.onnx")
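If you later want to load the saved FP32 model back, the sketch below uses only standard PyTorch. Note that torch.save above pickles the whole model object, so the model class must be importable when you call torch.load.

import torch

# load the pickled full-precision model and run a sanity-check forward pass
restored_model = torch.load("model.pth")
restored_model.eval()
with torch.no_grad():
    output = restored_model(torch.randn(1, 3, 224, 224))
print(output.shape)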

Read the cifar10 example for practical usage

With the above three lines of code, your memory usage is effectively halved, enabling you to train with larger batch sizes and to fit larger models that would otherwise not fit into your GPU memory. This, combined with the large speedup in training, accelerates your research and lowers the cost of training deep learning models. Note that your model accuracy is not affected by this speedup.

Initializers

Proper initialization is key to fast convergence of deep learning models. The initializers package makes initializing PyTorch modules very easy. It supports advanced initializers such as Kaiming initialization, Xavier initialization and more. It also makes it simple to add new custom initializers; see the sketch after the example below.

Below is an example showing initialization of resnet50.

from torchfusion_utils.initializers import *
import torch.nn as nn
from torchvision.models import resnet50
model = resnet50()
# initialize conv layers
kaiming_normal_init(model, types=[nn.Conv2d], category="weight")
# initialize Linear layers
xavier_normal_init(model, types=[nn.Linear])
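The library's interface for registering custom initializers is not shown here, but as an illustration, a custom rule can always be applied with plain PyTorch by walking the modules yourself. The sketch below uses only torch.nn.init and is not the torchfusion_utils API.

import torch.nn as nn

def custom_init(model):
    # example rule: orthogonal weights and zero biases for Linear layers
    for module in model.modules():
        if isinstance(module, nn.Linear):
            nn.init.orthogonal_(module.weight)
            if module.bias is not None:
                nn.init.zeros_(module.bias)

custom_init(model)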

Read the cifar10 example for practical usage

Metrics

The metrics package provides a set of standard metrics covering classification and regression tasks, and it also makes it easy to define your own custom metrics; see the sketch after the example below.

from torchfusion_utils.metrics import Accuracy
top5_acc = Accuracy(topk=5)
# sample evaluation loop
for i, (x, y) in enumerate(data_loader):
    predictions = model(x)
    top5_acc.update(predictions, y)
print("Top 5 Acc: ", top5_acc.getValue())
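The library's base class for custom metrics is not shown here, but as an illustration, a running metric written in plain PyTorch can follow the same update/getValue pattern. The RunningMSE class below is a hypothetical example, not part of the torchfusion_utils API.

import torch

class RunningMSE:
    # accumulates squared error across batches, following the update/getValue pattern above
    def __init__(self):
        self.total_error = 0.0
        self.count = 0

    def update(self, predictions, targets):
        self.total_error += torch.sum((predictions - targets) ** 2).item()
        self.count += targets.numel()

    def getValue(self):
        return self.total_error / max(self.count, 1)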

Read the cifar10 example for practical usage