Trace applied transforms
Sometimes we would like to see which transforms were applied to a certain batch
during training. In TorchIO, this can be done by passing
torchio.utils.history_collate as the collate function of the data loader. The
transforms history can then be saved during training to check what was applied
to each subject; a minimal training-loop sketch is included after the example
code below. Running the example produces the following output:
Applied transforms:
[ToCanonical(),
 Gamma(gamma={'t1': [0.8018917031404817]}),
 RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))]

Composed transform to reproduce history:
Compose([ToCanonical(), Gamma(gamma={'t1': [0.8018917031404817]}), RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))])

Composed transform to invert applied transforms when possible:
/home/user/documentation/docs/torchio/repository/src/torchio/data/subject.py:197: RuntimeWarning: Skipping ToCanonical as it is not invertible
  inverse_transform = history_transform.inverse(warn=warn)
Compose([RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114), invert=True), Gamma(gamma={'t1': [0.8018917031404817]}, invert=True)])

Transforms applied to subjects in batch:
[[ToCanonical(),
  Gamma(gamma={'t1': [1.1259200934274376]}),
  Blur(std={'t1': tensor([0.5645, 1.3632, 1.8304])}),
  Flip(axes=(0,)),
  RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))],
 [ToCanonical(),
  Blur(std={'t1': tensor([0.5397, 0.3014, 0.0634])}),
  RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))],
 [ToCanonical(),
  Gamma(gamma={'t1': [0.8567072622705179]}),
  Flip(axes=(0,)),
  RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))],
 [ToCanonical(),
  Gamma(gamma={'t1': [0.7924771084926655]}),
  Blur(std={'t1': tensor([1.4525, 1.4022, 0.4076])}),
  Flip(axes=(0,)),
  RescaleIntensity(out_min_max=(-1, 1), percentiles=(0, 100), masking_method=None, in_min_max=(0.0, 286.26114))]]
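The composed history shown above is itself a transform, so it could be reused
to apply exactly the same parameters to another subject, or to undo the applied
transforms where possible (ToCanonical is skipped because it is not invertible).
The snippet below is a minimal sketch along those lines; it is not part of the
original example and assumes the subject and transformed variables defined in
the code further down.

# Minimal sketch (assumption, not part of the original example): reuse the
# recorded history of an already transformed subject.
reproduce = transformed.get_composed_history()  # Compose of everything applied
same_again = reproduce(subject)  # apply the same parameters to the untransformed subject
invert = transformed.get_inverse_transform(ignore_intensity=False)
restored = invert(transformed)  # ToCanonical is skipped, as warned above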
import pprint

import matplotlib.pyplot as plt
import torch
import torchio as tio

torch.manual_seed(0)

batch_size = 4
subject = tio.datasets.FPG()
subject.remove_image('seg')
subjects = 4 * [subject]

transform = tio.Compose(
    (
        tio.ToCanonical(),
        tio.RandomGamma(p=0.75),
        tio.RandomBlur(p=0.5),
        tio.RandomFlip(),
        tio.RescaleIntensity(out_min_max=(-1, 1)),
    )
)
dataset = tio.SubjectsDataset(subjects, transform=transform)

# The history of a transformed subject lists the transforms that were actually
# applied, together with the parameters needed to reproduce them
transformed = dataset[0]
print('Applied transforms:')  # noqa: T201
pprint.pprint(transformed.history)  # noqa: T203
print('\nComposed transform to reproduce history:')  # noqa: T201
print(transformed.get_composed_history())  # noqa: T201
print(
    '\nComposed transform to invert applied transforms when possible:'
)  # noqa: T201, B950
print(transformed.get_inverse_transform(ignore_intensity=False))  # noqa: T201

# history_collate keeps the per-subject histories available in the batch
loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=batch_size,
    collate_fn=tio.utils.history_collate,
)

batch = tio.utils.get_first_item(loader)
print('\nTransforms applied to subjects in batch:')  # noqa: T201
pprint.pprint(batch[tio.HISTORY])  # noqa: T203

# Plot each subject in the batch, using its history as the figure title
for i in range(batch_size):
    tensor = batch['t1'][tio.DATA][i]
    affine = batch['t1'][tio.AFFINE][i]
    image = tio.ScalarImage(tensor=tensor, affine=affine)
    image.plot(show=False)
    history = batch[tio.HISTORY][i]
    title = ', '.join(t.name for t in history)
    plt.suptitle(title)
    plt.tight_layout()
    plt.show()
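As mentioned at the top of this page, the histories can also be saved during
training. The sketch below shows one way this could look; it reuses the dataset
and batch_size defined above, and the omitted forward pass and optimizer step
are assumptions about the surrounding training code, not part of the example.

# Minimal sketch (assumption): collect the per-subject transform histories
# seen during one epoch so they can be inspected afterwards.
loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=batch_size,
    collate_fn=tio.utils.history_collate,  # keeps the histories in each batch
)
epoch_histories = []
for batch in loader:
    inputs = batch['t1'][tio.DATA]  # tensors that would be fed to a model
    # ... forward pass, loss and optimizer step would go here ...
    epoch_histories.extend(batch[tio.HISTORY])  # one list of transforms per subject
pprint.pprint(epoch_histories)  # noqa: T203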