Custom losses for segmentation and object detection
Lovász-losses
xloss
xloss (logits, labels, ignore=None)
Cross entropy loss
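A minimal usage sketch (assuming torch and torch.nn.functional as F are imported, as in the examples later on this page, and the segmentation shapes used by the other functions here):

logits = torch.randn(2, 3, 8, 8)          # [B, C, H, W] raw scores
labels = torch.randint(0, 3, (2, 8, 8))   # [B, H, W] integer labels
xloss(logits, labels)                     # standard cross-entropy over the class axis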
flatten_probas
flatten_probas (probas, labels, ignore=None)
Flattens predictions in the batch
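A shape-only sketch, assuming it returns the flattened (probas, labels) pair consumed by lovasz_softmax_flat:

probas = F.softmax(torch.randn(2, 3, 8, 8), dim=1)   # [B, C, H, W]
labels = torch.randint(0, 3, (2, 8, 8))              # [B, H, W]
flat_probas, flat_labels = flatten_probas(probas, labels)
# flat_probas: [P, C], flat_labels: [P], with P = B*H*W
# (pixels whose label equals `ignore` are dropped when ignore is set)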
lovasz_softmax_flat
lovasz_softmax_flat (probas, labels, classes='present')
Multi-class Lovasz-Softmax loss
probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
labels: [P] Tensor, ground truth labels (between 0 and C - 1)
classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
lovasz_softmax
lovasz_softmax (probas, labels, classes='present', per_image=False, ignore=None)
Multi-class Lovasz-Softmax loss
probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1). Interpreted as binary (sigmoid) output with outputs of size [B, H, W].
labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
per_image: compute the loss per image instead of per batch
ignore: void class labels
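A usage sketch on softmax probabilities; the void label value below is an arbitrary illustration:

probas = F.softmax(torch.randn(4, 3, 128, 128), dim=1)   # class probabilities
labels = torch.randint(0, 3, (4, 128, 128))
lovasz_softmax(probas, labels)                            # one loss over the whole batch
lovasz_softmax(probas, labels, per_image=True)            # mean of per-image losses
lovasz_softmax(probas, labels, classes=[1, 2], ignore=0)  # label 0 treated as void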
flatten_binary_scores
flatten_binary_scores (scores, labels, ignore=None)
Flattens predictions in the batch (binary case). Removes labels equal to 'ignore'.
lovasz_hinge_flat
lovasz_hinge_flat (logits, labels)
Binary Lovasz hinge loss
logits: [P] Variable, logits at each prediction (between -∞ and +∞)
labels: [P] Tensor, binary ground truth labels (0 or 1)
ignore: label to ignore
lovasz_hinge
lovasz_hinge (logits, labels, per_image=True, ignore=None)
Binary Lovasz hinge loss
logits: [B, H, W] Variable, logits at each pixel (between -∞ and +∞)
labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
per_image: compute the loss per image instead of per batch
ignore: void class id
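A minimal sketch with the shapes from the docstring:

logits = torch.randn(4, 128, 128)               # raw scores at each pixel
masks = torch.randint(0, 2, (4, 128, 128))      # binary ground truth
lovasz_hinge(logits, masks)                     # per_image=True by default
lovasz_hinge(logits, masks, per_image=False)    # single loss over the whole batch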
mean
mean (l, ignore_nan=False, empty=0)
nanmean compatible with generators.
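For instance (a small illustration of the usual nanmean semantics, not library output):

mean([1.0, 2.0, 3.0])                                          # 2.0
mean((x for x in [1.0, float('nan'), 3.0]), ignore_nan=True)   # 2.0, the NaN is skipped
mean([], empty=0)                                              # 0, the `empty` fallback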
isnan
isnan (x)
iou
iou (preds, labels, C, EMPTY=1.0, ignore=None, per_image=False)
Array of IoU for each (non-ignored) class
iou_binary
iou_binary (preds, labels, EMPTY=1.0, ignore=None, per_image=True)
IoU for foreground class
binary: 1 foreground, 0 background
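A usage sketch for both evaluation helpers (hard class predictions in, IoU scores out):

preds = torch.randint(0, 3, (4, 128, 128))               # hard predictions
labels = torch.randint(0, 3, (4, 128, 128))              # ground truth
iou(preds, labels, C=3)                                  # per-class IoU scores
iou_binary((preds == 1).long(), (labels == 1).long())    # foreground-vs-background IoU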
lovasz_grad
lovasz_grad (gt_sorted)
Computes gradient of the Lovasz extension w.r.t. sorted errors. See Alg. 1 in the paper.
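A toy illustration of Alg. 1: cumulative intersection/union over the sorted ground truth, then first differences of the Jaccard values. The numbers below are worked by hand:

gt_sorted = torch.tensor([1., 1., 0., 1.])   # ground truth sorted by descending error
lovasz_grad(gt_sorted)
# The Jaccard values 1 - intersection/union are [1/3, 2/3, 3/4, 1];
# the gradient holds their first differences: [1/3, 1/3, 1/12, 1/4].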
LovaszHingeLoss
LovaszHingeLoss (ignore=None)
Lovasz-Hinge loss from https://arxiv.org/abs/1705.08790, with per_image=True
Todo
Binary Lovasz hinge loss
logits: [P] Variable, logits at each prediction (between -∞ and +∞)
labels: [P] Tensor, binary ground truth labels (0 or 1)
ignore: label to ignore
LovaszHingeLossFlat
LovaszHingeLossFlat (*args, axis=-1, ignore=None, **kwargs)
Same as LovaszHingeLoss, but flattens input and target.
lov_hinge = LovaszHingeLossFlat()
outp = torch.randn(4,1,128,128)
target = torch.randint(0, 2, (4,1,128,128))
lov_hinge(outp, target)
tensor(1.4331)
lovasz_hinge(outp, target)
tensor(1.4331)
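Since the Flat loss follows fastai-style flattened-loss conventions (an assumption suggested by the *args/axis signature), it can be passed straight to a training loop. A hypothetical sketch, where dls and model are placeholders not defined on this page:

learn = Learner(dls, model, loss_func=LovaszHingeLossFlat())   # hypothetical dls/model
learn.fit_one_cycle(1)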
LovaszSigmoidLoss
LovaszSigmoidLoss (ignore=None)
Lovasz-Sigmoid loss from https://arxiv.org/abs/1705.08790, with per_image=False
Todo
probas: [P, C] Variable, logits at each prediction (between -∞ and +∞)
labels: [P] Tensor, binary ground truth labels (0 or 1)
ignore: label to ignore
LovaszSigmoidLossFlat
LovaszSigmoidLossFlat (*args, axis=-1, ignore=None, **kwargs)
Same as LovaszSigmoidLoss, but flattens input and target.
lov_sigmoid = LovaszSigmoidLossFlat()
lov_sigmoid(outp, target)
tensor(0.5823)
lovasz_softmax(torch.sigmoid(outp), target, classes=[1])
tensor(0.5823)
LovaszSoftmaxLoss
LovaszSoftmaxLoss (classes='present', ignore=None)
Lovasz-Softmax loss from https://arxiv.org/abs/1705.08790, with per_image=False
LovaszSoftmaxLossFlat
LovaszSoftmaxLossFlat (*args, axis=1, classes='present', ignore=None, **kwargs)
Same as LovaszSoftmaxLoss, but flattens input and target.
lov_softmax = LovaszSoftmaxLossFlat()
outp_multi = torch.randn(4,3,128,128)
target_multi = torch.randint(0, 3, (4,1,128,128))
lov_softmax(outp_multi, target_multi)
tensor(0.7045)
lovasz_softmax(F.softmax(outp_multi, dim=1), target_multi)
tensor(0.7045)
lov_softmax_subset = LovaszSoftmaxLossFlat(classes=[1,2])
lov_softmax_subset(outp_multi, target_multi)
tensor(0.7039)
lovasz_softmax(F.softmax(outp_multi, dim=1), target_multi, classes=[1,2])
tensor(0.7039)
FocalDice
FocalDice (axis=1, smooth=1.0, alpha=1.0)
Combines Focal loss with Dice loss
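A usage sketch on the multi-class tensors from above. It is an assumption that FocalDice accepts the same [B, C, H, W] logits and integer targets as the Flat losses, and how alpha weights the two terms is the library's choice:

focal_dice = FocalDice()
focal_dice(outp_multi, target_multi)   # combined focal + dice loss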