Custom losses for segmentation and object detection

Lovász losses

From https://github.com/bermanmaxim/LovaszSoftmax


source

xloss

 xloss (logits, labels, ignore=None)

Cross entropy loss
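The wrapper is thin enough that a hedged, self-contained sketch in plain PyTorch conveys the idea; mapping `ignore` onto `ignore_index` here is an assumption about how the void label is handled:

```python
import torch
import torch.nn.functional as F

def xloss_sketch(logits, labels, ignore=None):
    # Plain cross entropy; positions labelled `ignore` contribute nothing.
    if ignore is None:
        return F.cross_entropy(logits, labels)
    return F.cross_entropy(logits, labels, ignore_index=ignore)
```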


source

flatten_probas

 flatten_probas (probas, labels, ignore=None)

Flattens predictions in the batch
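A self-contained sketch of the shape handling, under the assumption that a 3-D input is the binary (single-channel) case:

```python
import torch

def flatten_probas_sketch(probas, labels, ignore=None):
    # Binary case: treat [B, H, W] as a single-channel [B, 1, H, W] map.
    if probas.dim() == 3:
        B, H, W = probas.size()
        probas = probas.view(B, 1, H, W)
    B, C, H, W = probas.size()
    # [B, C, H, W] -> [B*H*W, C]: one row of class probabilities per pixel.
    probas = probas.permute(0, 2, 3, 1).contiguous().view(-1, C)
    labels = labels.view(-1)
    if ignore is None:
        return probas, labels
    valid = labels != ignore
    return probas[valid], labels[valid]
```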


source

lovasz_softmax_flat

 lovasz_softmax_flat (probas, labels, classes='present')

Multi-class Lovász-Softmax loss.

- `probas`: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
- `labels`: [P] Tensor, ground truth labels (between 0 and C - 1)
- `classes`: `'all'` for all, `'present'` for classes present in labels, or a list of classes to average


source

lovasz_softmax

 lovasz_softmax (probas, labels, classes='present', per_image=False,
                 ignore=None)

Multi-class Lovász-Softmax loss.

- `probas`: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1). Interpreted as binary (sigmoid) output with outputs of size [B, H, W].
- `labels`: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
- `classes`: `'all'` for all, `'present'` for classes present in labels, or a list of classes to average
- `per_image`: compute the loss per image instead of per batch
- `ignore`: void class labels


source

flatten_binary_scores

 flatten_binary_scores (scores, labels, ignore=None)

Flattens predictions in the batch (binary case); removes positions whose label equals `ignore`.
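A minimal sketch of this flattening, assuming `ignore` is matched against the label values directly:

```python
import torch

def flatten_binary_scores_sketch(scores, labels, ignore=None):
    # Flatten to 1-D and drop positions labelled `ignore`.
    scores = scores.view(-1)
    labels = labels.view(-1)
    if ignore is None:
        return scores, labels
    valid = labels != ignore
    return scores[valid], labels[valid]
```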


source

lovasz_hinge_flat

 lovasz_hinge_flat (logits, labels)

Binary Lovász hinge loss.

- `logits`: [P] Variable, logits at each prediction (between -∞ and +∞)
- `labels`: [P] Tensor, binary ground truth labels (0 or 1)


source

lovasz_hinge

 lovasz_hinge (logits, labels, per_image=True, ignore=None)

Binary Lovász hinge loss.

- `logits`: [B, H, W] Variable, logits at each pixel (between -∞ and +∞)
- `labels`: [B, H, W] Tensor, binary ground truth masks (0 or 1)
- `per_image`: compute the loss per image instead of per batch
- `ignore`: void class id


source

mean

 mean (l, ignore_nan=False, empty=0)

nanmean compatible with generators.
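Since a generator can only be consumed once, a running sum is needed rather than two passes; a sketch of such a nanmean, with the `empty` fallback returned when nothing remains to average:

```python
import math

def mean_sketch(values, ignore_nan=False, empty=0):
    # Consume any iterable (including a generator) exactly once.
    it = iter(values)
    if ignore_nan:
        it = (v for v in it if not math.isnan(v))
    n, acc = 0, 0.0
    for v in it:
        acc += v
        n += 1
    if n == 0:
        return empty  # value returned for an empty (or all-NaN) input
    return acc / n
```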


source

isnan

 isnan (x)

source

iou

 iou (preds, labels, C, EMPTY=1.0, ignore=None, per_image=False)

Array of IoU for each (non-ignored) class
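A hedged sketch of per-class IoU on a single image; the library's treatment of `ignore` inside the union may differ from this straightforward version:

```python
import torch

def iou_sketch(preds, labels, C, EMPTY=1.0, ignore=None):
    # Intersection-over-union for each class, skipping the ignored one.
    ious = []
    for c in range(C):
        if c == ignore:
            continue
        pred_c, true_c = preds == c, labels == c
        inter = (pred_c & true_c).sum().item()
        union = (pred_c | true_c).sum().item()
        # EMPTY is returned when the class appears in neither map.
        ious.append(EMPTY if union == 0 else inter / union)
    return ious
```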


source

iou_binary

 iou_binary (preds, labels, EMPTY=1.0, ignore=None, per_image=True)

IoU for the foreground class (binary: 1 foreground, 0 background)


source

lovasz_grad

 lovasz_grad (gt_sorted)

Computes the gradient of the Lovász extension w.r.t. sorted errors; see Alg. 1 in the paper.
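A sketch of Alg. 1 following the reference implementation linked above: the Jaccard index of the top-k predictions is computed cumulatively, then differenced to give a per-position gradient:

```python
import torch

def lovasz_grad_sketch(gt_sorted):
    # gt_sorted: ground truth (0/1) sorted by decreasing prediction error.
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    # Jaccard error of keeping the first k positions, for k = 1..p.
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1.0 - intersection / union
    # Difference to get the marginal contribution of each position.
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard
```

The loss itself is then a dot product of this gradient with the sorted, hinged errors.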


source

LovaszHingeLoss

 LovaszHingeLoss (ignore=None)

Lovasz-Hinge loss from https://arxiv.org/abs/1705.08790, with per_image=True

Todo

Binary Lovász hinge loss.

- `logits`: [P] Variable, logits at each prediction (between -∞ and +∞)
- `labels`: [P] Tensor, binary ground truth labels (0 or 1)
- `ignore`: label to ignore


source

LovaszHingeLossFlat

 LovaszHingeLossFlat (*args, axis=-1, ignore=None, **kwargs)

Same as LovaszHingeLoss but flattens input and target

lov_hinge = LovaszHingeLossFlat()
outp = torch.randn(4,1,128,128)
target = torch.randint(0, 2, (4,1,128,128))
lov_hinge(outp, target)
tensor(1.4331)
lovasz_hinge(outp, target)
tensor(1.4331)

source

LovaszSigmoidLoss

 LovaszSigmoidLoss (ignore=None)

Lovasz-Sigmoid loss from https://arxiv.org/abs/1705.08790, with per_image=False

Todo

- `probas`: [P, C] Variable, logits at each prediction (between -∞ and +∞)
- `labels`: [P] Tensor, binary ground truth labels (0 or 1)
- `ignore`: label to ignore


source

LovaszSigmoidLossFlat

 LovaszSigmoidLossFlat (*args, axis=-1, ignore=None, **kwargs)

Same as LovaszSigmoidLoss but flattens input and target

lov_sigmoid = LovaszSigmoidLossFlat()
lov_sigmoid(outp, target)
tensor(0.5823)
lovasz_softmax(torch.sigmoid(outp), target, classes=[1])
tensor(0.5823)

source

LovaszSoftmaxLoss

 LovaszSoftmaxLoss (classes='present', ignore=None)

Lovasz-Softmax loss from https://arxiv.org/abs/1705.08790, with per_image=False


source

LovaszSoftmaxLossFlat

 LovaszSoftmaxLossFlat (*args, axis=1, classes='present', ignore=None,
                        **kwargs)

Same as LovaszSoftmaxLoss but flattens input and target

lov_softmax = LovaszSoftmaxLossFlat()
outp_multi = torch.randn(4,3,128,128)
target_multi = torch.randint(0, 3, (4,1,128,128))
lov_softmax(outp_multi, target_multi)
tensor(0.7045)
lovasz_softmax(F.softmax(outp_multi, dim=1), target_multi)
tensor(0.7045)
lov_softmax_subset = LovaszSoftmaxLossFlat(classes=[1,2])
lov_softmax_subset(outp_multi, target_multi)
tensor(0.7039)
lovasz_softmax(F.softmax(outp_multi, dim=1), target_multi, classes=[1,2])
tensor(0.7039)

source

FocalDice

 FocalDice (axis=1, smooth=1.0, alpha=1.0)

Combines Focal loss with dice loss
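A self-contained sketch of how such a combination is typically formed: `alpha` weights the focal term against the dice term, and `smooth` stabilises the dice ratio. The focusing parameter `gamma` and the exact weighting are assumptions, since the signature above exposes only `axis`, `smooth`, and `alpha`:

```python
import torch
import torch.nn.functional as F

def focal_dice_sketch(logits, targets, axis=1, smooth=1.0, alpha=1.0, gamma=2.0):
    # logits: [B, C, H, W]; targets: [B, H, W] with class indices.
    # Focal term: cross entropy down-weighted on well-classified pixels.
    ce = F.cross_entropy(logits, targets, reduction='none')  # [B, H, W]
    pt = torch.exp(-ce)                                      # prob. of true class
    focal = ((1 - pt) ** gamma * ce).mean()
    # Dice term: soft overlap between predicted probabilities and one-hot targets.
    probas = F.softmax(logits, dim=axis)
    onehot = F.one_hot(targets, probas.shape[axis]).permute(0, 3, 1, 2).float()
    inter = (probas * onehot).sum(dim=(2, 3))
    union = probas.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3))
    dice = 1 - ((2 * inter + smooth) / (union + smooth)).mean()
    return alpha * focal + dice
```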