Metrics

Metrics provide an easy-to-interpret numerical evaluation of the performance of your models. TorchFusion Utils provides a number of metrics out of the box and makes it easy for you to create your own.

Using The Metrics

from torchfusion_utils.metrics import *

#training loop
def train():
    top1_acc = Accuracy()
    top5_acc = Accuracy(topK=5)

    for e in range(num_epochs):
        #reset metrics at the start of each epoch
        top1_acc.reset()
        top5_acc.reset()

        for i,(x,y) in enumerate(dataloader):
            predictions = model(x)

            #update the metrics with the current batch
            top1_acc.update(predictions,y)
            top5_acc.update(predictions,y)

        #log your metrics
        print("Top1 Acc: {} , Top5 Acc: {}".format(top1_acc.getValue(),top5_acc.getValue()))

See the cifar10 example for practical usage of the metrics package.

Creating Custom Metrics

When working with different kinds of neural networks, you sometimes need custom metrics designed for a specific task. TorchFusion Utils provides a simple way to create managed custom metrics that you can consume in the same way you use the provided metrics.

To illustrate, we shall create a Mean Squared Error (MSE) metric.

from torchfusion_utils.metrics import Metric
import torch

class MSE(Metric):
    def __init__(self,name="MSE"):
        super(MSE,self).__init__(name)

    def __compute__(self,prediction,label):
        #squared error summed over the batch; the base class handles averaging
        difference = torch.sum((prediction - label) ** 2)
        return difference

The base Metric class takes care of averaging the returned value over all examples.
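
To make that bookkeeping concrete, below is a minimal sketch of how such a base class might accumulate and average the values returned by __compute__. This is an illustrative assumption only, not the actual torchfusion_utils implementation, which may differ.

import torch

#illustrative sketch only -- the real torchfusion_utils Metric base class may differ
class SketchMetric():
    def __init__(self,name):
        self.name = name
        self.reset()

    def reset(self):
        #clear the running sum and example count at the start of an epoch
        self._sum = 0.0
        self._count = 0

    def update(self,prediction,label):
        #__compute__ is implemented by the subclass and returns a value
        #summed over the batch; accumulate it with the number of examples seen
        self._sum += float(self.__compute__(prediction,label))
        self._count += prediction.size(0)

    def getValue(self):
        #average over all examples seen since the last reset
        return self._sum / max(self._count,1)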

Below is a sample usage of the custom metric we just created:

predictions = torch.tensor([2.0,1.5,1.0,3.0,6.0])
target = torch.tensor([1.9,1.0,3.0,2.8,5.9])
mse = MSE()
mse.update(predictions,target)
print(mse.getValue())
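
Assuming the base class divides the accumulated sum by the number of examples, this should print roughly 0.862 (the summed squared error over the five examples is 4.31).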