TLDR
Check out the app.
Image augmentation is a common technique used when training computer vision models: it generates artificial training data by transforming your actual training data, for example with random rotations and shifts.
We created 4 new images of cats!
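For instance, augmentations like these can be produced with a couple of mild torchvision transforms. The snippet below is a minimal sketch; the file name cat.jpg and the parameter values are illustrative, not from an original script:

from PIL import Image
from torchvision import transforms

# Mild augmentations: small random rotations and shifts
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
])

image = Image.open("cat.jpg")  # hypothetical input image

# Each call re-samples the random parameters, producing a new artificial example
augmented_cats = [augment(image) for _ in range(4)]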
However, these augmentations can often be a source of subtle bugs. For example, here is a typical PyTorch transform pipeline:
from torchvision import transforms
from torchvision.transforms import InterpolationMode

transform = transforms.Compose([
    transforms.RandomAffine(degrees=360, translate=(0.64, 0.98), scale=(0.81, 2.85), shear=(0.1, 0.5), fill=0, interpolation=InterpolationMode.NEAREST),
    transforms.ColorJitter(brightness=0.62, contrast=0.3, saturation=0.44, hue=0.24),
    transforms.RandomVerticalFlip(p=0.45),
])
Do you see anything obviously wrong with it?
Well, let’s see what happens when we actually try to apply these transformations to our cat.
Half of these don’t even look like cats!
As you can see, the transformations were not properly tuned, and a significant number of the images are completely unrecognizable. If we train the model on this augmented dataset, it is no longer learning what a cat looks like.
These sorts of bugs are tricky because no errors are raised. Instead, the model simply performs worse on the un-augmented test dataset than it could have.
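One low-tech safeguard is to render a handful of samples before training. The snippet below is a sketch, assuming the transform pipeline defined above and the same hypothetical cat.jpg; matplotlib is used only for display:

import matplotlib.pyplot as plt
from PIL import Image

image = Image.open("cat.jpg")  # hypothetical input image

# Draw four samples; each application re-rolls the random parameters
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax in axes:
    ax.imshow(transform(image))
    ax.axis("off")
plt.show()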
Introducing a PyTorch transforms visualizer
You can use this tool to develop and sanity-check your transforms on actual images before using them in a training script. It supports all the transforms provided in the torchvision.transforms package.
Check it out!