TorchFusion Utils is a subset of the TorchFusion project by DeepQuest AI.
It is a research-centered framework aimed at providing advanced functionality to simplify the training of PyTorch models without sacrificing transparency and control of your training flow. Unlike more high-level frameworks, TorchFusion Utils can be easily plugged into any standard PyTorch code. It is easy to install and provides a number of key features.
TorchFusion-Utils is a helper library for PyTorch, so you first need to install PyTorch from pytorch.org.
pip3 install torchfusion-utils --upgrade
Mixed precision training (Micikevicius et al., 2017) is a technique for accelerating the training and inference of deep learning models by taking advantage of the Tensor Cores on modern Nvidia GPUs.
TorchFusion Utils enables you to take advantage of over 100 teraflops of performance on a single Nvidia V100; by contrast, the default single-precision mode of PyTorch only gives you access to about 14 teraflops.
Using mixed precision training in PyTorch is a breeze with TorchFusion-Utils; all it takes is three lines of code!
from torchfusion_utils.fp16 import convertToFP16

# convert your model and optimizer to mixed precision mode
model, optim = convertToFP16(model, optim)

# in your batch loop, replace loss.backward() with optim.backward(loss)
optim.backward(loss)
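To see where these lines fit, here is a minimal, self-contained sketch of a batch loop using the converted model and optimizer. The model, synthetic data, and hyperparameters below are placeholders, not part of TorchFusion-Utils; as the snippet above implies, no other changes to the loop are needed.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchfusion_utils.fp16 import convertToFP16

# a small placeholder model and synthetic data, purely for illustration
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).cuda()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
data_loader = DataLoader(
    TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))),
    batch_size=32,
)

# convert model and optimizer to mixed precision mode
model, optim = convertToFP16(model, optim)

for x, y in data_loader:
    x, y = x.cuda(), y.cuda()
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    # the only change from a standard loop:
    # optim.backward(loss) replaces loss.backward()
    optim.backward(loss)
    optim.step()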
During or after training, you may need to save your model or export it to ONNX or Libtorch. Before doing this, you should convert your model back to full FP32 mode, as in the example below.
from torchfusion_utils.fp16 import convertToFP32
import torch

# create an fp32 clone of the model
full_model = convertToFP32(model)

# save the fp32 model
torch.save(full_model, "model.pth")

# export to onnx
dummy_input = torch.FloatTensor(1, 3, 224, 224)
torch.onnx.export(full_model, dummy_input, "model.onnx")
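If you want to sanity-check the exported graph, one option is to load it with the onnxruntime package (a separate install, not part of TorchFusion-Utils) and run it on a matching dummy input:

import numpy as np
import onnxruntime

# load the exported model on CPU and run one forward pass
session = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)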
With the above three lines of code, your memory usage is roughly halved, enabling you to train with larger batch sizes and to train larger models that would otherwise not fit into your GPU memory. Combined with the large speedup in training, this effectively accelerates your research and lowers the cost of training deep learning models. Note that your model accuracy is not affected by this speedup.
Proper initialization is key to fast convergence of deep learning models. The initializers package makes initializing PyTorch modules very easy. It supports advanced initializers such as Kaiming initialization, Xavier initialization and more, and it also makes it simple to add new custom initializers.
Below is an example showing initialization of ResNet-50.
from torchfusion_utils.initializers import *
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50()

# initialize conv layers
kaiming_normal_init(model, types=[nn.Conv2d], category="weight")

# initialize linear layers
xavier_normal_init(model, types=[nn.Linear])
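The library's exact hook for registering new initializers is not shown here; as a rough sketch of the same idea in plain PyTorch, a custom rule can be written as a function and applied with model.apply. The orthogonal/zeros choice below is purely illustrative.

import torch.nn as nn
from torchvision.models import resnet50

model = resnet50()

# a hypothetical custom rule: orthogonal weights for conv layers,
# zeros for their biases (plain PyTorch, not the TorchFusion-Utils API)
def custom_init(module):
    if isinstance(module, nn.Conv2d):
        nn.init.orthogonal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model.apply(custom_init)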
The metrics package provides you with a set of standard metrics covering classification and regression tasks, and it also makes it easy for you to define your own custom metrics.
from torchfusion_utils.metrics import Accuracy

top5_acc = Accuracy(topk=5)

# sample evaluation loop
for i, (x, y) in enumerate(data_loader):
    predictions = model(x)
    top5_acc.update(predictions, y)

print("Top 5 Acc: ", top5_acc.getValue())
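The snippet above follows an update/getValue pattern. If you need a metric the package does not provide, a minimal running metric with the same shape can be sketched in plain PyTorch; the class below is illustrative, not the library's base class.

class RunningTop1Accuracy:
    # illustrative custom metric mirroring the update/getValue pattern above
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, predictions, targets):
        # predictions: (batch, num_classes) logits; targets: (batch,) class ids
        self.correct += (predictions.argmax(dim=1) == targets).sum().item()
        self.total += targets.size(0)

    def getValue(self):
        # guard against division by zero before the first update
        return self.correct / max(self.total, 1)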