Mixed Precision Training

The package torchfusion_utils.fp16 provides the functions necessary to convert PyTorch models and optimizers to and from mixed precision mode.

Core to this are two main functions.

convertToFP16

This function takes a number of arguments allowing you to control the behavior of the converted model and optimizer.

  • model : The source model to be converted.

  • optimizer : The source optimizer to be converted.

  • loss_scale : Manually sets the scaling of the loss. Note that setting this value overrides automatic loss scaling and can hurt accuracy.

  • dynamic_scale_args : Controls the parameters of the automatic loss scaler. The defaults are usually sufficient, but you can tune them if needed.
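To make the dynamic_scale_args keys concrete, here is a minimal, pure-Python sketch of how a dynamic loss scaler typically behaves. The class below is illustrative only, not the torchfusion_utils implementation; the parameter names mirror the keys shown above.

```python
class DynamicLossScaler:
    """Illustrative dynamic loss scaler (not the torchfusion_utils internals).

    init_scale:   starting loss-scale value
    scale_factor: multiplier used when shrinking or growing the scale
    scale_window: consecutive overflow-free steps required before growing
    """

    def __init__(self, init_scale=2**30, scale_factor=2.0, scale_window=2000):
        self.scale = init_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self.steps_since_overflow = 0

    def update(self, overflow):
        if overflow:
            # Gradients overflowed in fp16: shrink the scale and reset the window.
            self.scale /= self.scale_factor
            self.steps_since_overflow = 0
        else:
            self.steps_since_overflow += 1
            if self.steps_since_overflow >= self.scale_window:
                # A full window without overflow: it is safe to grow the scale.
                self.scale *= self.scale_factor
                self.steps_since_overflow = 0


scaler = DynamicLossScaler(init_scale=8.0, scale_factor=2.0, scale_window=3)
scaler.update(True)       # overflow: scale shrinks to 4.0
for _ in range(3):
    scaler.update(False)  # three clean steps: scale grows back to 8.0
print(scaler.scale)       # 8.0
```

A larger scale_window makes the scale grow back more cautiously after an overflow, while a larger scale_factor makes each adjustment more aggressive.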

Usage examples

from torchfusion_utils.fp16 import convertToFP16

# Manual loss scaling
model, optimizer = convertToFP16(model, optimizer, loss_scale=1.0)

from torchfusion_utils.fp16 import convertToFP16

# Automatic loss scaling with custom parameters
model, optimizer = convertToFP16(model, optimizer,
    dynamic_scale_args={"scale_window": 2000, "scale_factor": 3.0, "init_scale": 2**30})

convertToFP32

This function converts a model back to standard 32-bit precision mode. This is especially useful when you need to save your model or export it to ONNX or Libtorch; it is highly recommended to perform this conversion before exporting your model.

Usage example

from torchfusion_utils.fp16 import convertToFP32
import torch

# Create an fp32 clone of the mixed-precision model
full_model = convertToFP32(model)

# Save the model weights
torch.save(full_model.state_dict(), "model.pth")

# Export to ONNX using a dummy input of the expected shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(full_model, dummy_input, "model.onnx")
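Conceptually, the conversion back to full precision amounts to restoring fp32 parameter dtypes. Plain PyTorch's .half() and .float() casts illustrate the same idea; this is a sketch of the concept, not torchfusion_utils' internals.

```python
import torch
import torch.nn as nn

# A small model cast to half precision, as mixed-precision training would leave it.
model = nn.Linear(4, 2).half()
assert model.weight.dtype == torch.float16

# Casting back to full precision before saving or exporting avoids
# fp16-specific issues in consumers that expect fp32 weights.
model = model.float()
print(model.weight.dtype)  # torch.float32
```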