Automated deadwood detection from UAV RGB imagery


Scene level results

Author: Janne Mäyrä
Published: December 22, 2022

Code
from drone_detector.utils import *
from drone_detector.imports import *  # wildcard imports provide e.g. Path, gpd, pd, np, shapely, shutil, box and sys
from drone_detector.metrics import *
import os
import warnings
warnings.filterwarnings("ignore")
sys.path.append('..')
from src.postproc_functions import *
from tqdm.auto import tqdm
tqdm.pandas()

As patch-level data and results are not directly useful for our purposes, here we run the predictions for larger scenes. Each plot is tiled into 512×512 px patches, optionally with 256 px overlap, after which the predictions are collated and optionally cleaned to reduce the number of overlapping predictions.
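
As an illustration of the tiling grid, the sketch below lays out the tile origins; the names are illustrative, and the actual tiling happens inside predict_instance_masks, used below.

Code
def tile_origins(width: int, height: int, tile_size: int = 512, overlap: int = 0):
    """Yield the bottom-left pixel coordinates of each tile in a scene."""
    step = tile_size - overlap  # 512 px without overlap, 256 px with half overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield x, y

# With half-patch overlap the window moves 256 px at a time:
# list(tile_origins(1024, 1024, overlap=256))[:3] -> [(0, 0), (256, 0), (512, 0)]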

1 Hiidenportti test set

As the Hiidenportti test set is small, we can run the predictions directly here if needed.

1.1 No overlap, no post-processing

Code
from drone_detector.engines.detectron2.predict import predict_instance_masks
raw_path = Path('../../data/raw/hiidenportti/virtual_plots/buffered_test/images')
test_rasters = [raw_path/f for f in os.listdir(raw_path) if f.endswith('tif')]

The template folder has the following structure:

template_folder
|-predicted_vectors
|-raster_tiles
|-vector_tiles
|-raw_preds

Here, raster_tiles and vector_tiles are symbolic links pointing to the corresponding data directories, while predicted_vectors and raw_preds are empty folders for the predictions.
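
If the template folder does not exist yet, it can be created for example as below; the source data location is an assumption based on the paths used elsewhere in this notebook.

Code
import os
from pathlib import Path

template = Path('../results/template_folder')
# Hypothetical location of the tiled data; adjust to your own directory layout
data_root = Path('../../data/raw/hiidenportti/virtual_plots/buffered_test')

(template/'predicted_vectors').mkdir(parents=True, exist_ok=True)
(template/'raw_preds').mkdir(exist_ok=True)
# Symlink the raster and vector tiles instead of copying them
for sub in ('raster_tiles', 'vector_tiles'):
    if not (template/sub).exists():
        os.symlink(data_root/sub, template/sub)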

Code
pred_outpath = Path('../results/hp_unprocessed_new/')
if not os.path.exists(pred_outpath):
    shutil.copytree('../results/template_folder/', pred_outpath, symlinks=True)
Code
for t in test_rasters:
    outfile_name = pred_outpath/f'raw_preds/{t.stem}.geojson'
    predict_instance_masks(path_to_model_files='../models/hiidenportti/mask_rcnn_R_101_FPN_3x/', 
                           path_to_image=str(t),
                           outfile=str(outfile_name),
                           processing_dir='temp',
                           tile_size=512,
                           tile_overlap=0,
                           smooth_preds=False,
                           use_tta=True,
                           coco_set='../../data/processed/hiidenportti/hiidenportti_valid.json',
                           postproc_results=False)
Code
raw_res_path = pred_outpath
truth_shps = sorted([raw_res_path/'vector_tiles'/f for f in os.listdir(raw_res_path/'vector_tiles')])
raw_shps = sorted([raw_res_path/'raw_preds'/f for f in os.listdir(raw_res_path/'raw_preds')])
rasters = sorted([raw_res_path/'raster_tiles'/f for f in os.listdir(raw_res_path/'raster_tiles')])

“Raw” predictions are modified as follows:

1. Invalid polygons are fixed to be valid. MultiPolygon masks are replaced with the largest single polygon of the multipolygon.
2. The extent is clipped to match the corresponding ground truth data.
3. Label numbering is adjusted to be 1-based.
4. Polygons with an area smaller than 16 pixels (16 × 0.04² m²) are discarded.
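
The helper fix_multipolys comes from src.postproc_functions; a minimal sketch of the behavior described in step 1 might look like the following (an illustration, not the actual implementation).

Code
from shapely.geometry import MultiPolygon, Polygon

def fix_multipolys_sketch(geom):
    """Keep only the largest part of a MultiPolygon and drop interior rings."""
    if isinstance(geom, MultiPolygon):
        largest = max(geom.geoms, key=lambda g: g.area)
        return Polygon(largest.exterior)
    return Polygon(geom.exterior)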

Code
for p, t in zip(raw_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    # 2. Clip predictions to the extent of the ground truth data
    temp_pred = gpd.clip(temp_pred, box(*temp_truth.total_bounds))

    # 1. Keep the largest part of MultiPolygons, drop holes from single polygons
    temp_pred['geometry'] = temp_pred.apply(lambda row: fix_multipolys(row.geometry) 
                                            if row.geometry.type == 'MultiPolygon' 
                                            else shapely.geometry.Polygon(row.geometry.exterior), axis=1)
    # 3. Shift labels to 1-based numbering
    temp_pred['label'] += 1
    # 4. Discard polygons smaller than 16 pixels (16 * 0.04² m² at 4 cm resolution)
    temp_pred = temp_pred[temp_pred.geometry.area > 16*0.04**2]
    temp_pred.to_file(raw_res_path/'predicted_vectors'/p.name)
Code
pred_shps = sorted([raw_res_path/'predicted_vectors'/f for f in os.listdir(raw_res_path/'predicted_vectors')])

Collate the predictions and annotations into single dataframes so that metrics such as IoU are easy to compute.

Code
truths = None
preds = None

for p, t in zip(pred_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    if truths is None:
        truths = temp_truth
        preds = temp_pred
    else:
        truths = pd.concat((truths, temp_truth))
        preds = pd.concat((preds, temp_pred))
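
The same collation can be written more compactly with a single pd.concat per layer; a sketch:

Code
# Equivalent to the loop above
truths = pd.concat([gpd.read_file(t) for t in truth_shps], ignore_index=True)
preds = pd.concat([gpd.read_file(p) for p in pred_shps], ignore_index=True)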

Fix labeling.

Code
preds['layer'] = preds.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)

Check the number of predictions. At this point, the model has found around 1700 more deadwood instances than there are annotations.

Code
preds.shape, truths.shape
((3444, 4), (1741, 6))
Code
preds.label.value_counts()
2    2790
1     654
Name: label, dtype: int64
Code
dis_truths = truths.dissolve(by='layer')
dis_preds = preds.dissolve(by='layer')

Check the IoU scores of the dissolved layers.

Code
poly_IoU(dis_truths, dis_preds)
layer
groundwood     0.472870
uprightwood    0.460234
dtype: float64
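
poly_IoU comes from drone_detector.metrics. Judging from the dissolved inputs and the per-layer output above, it computes a polygon-level IoU per class; a minimal sketch of the idea, not the actual implementation:

Code
def poly_iou_sketch(dis_truths, dis_preds):
    """Polygon-level IoU per class for layer-indexed GeoDataFrames."""
    inter = dis_truths.geometry.intersection(dis_preds.geometry).area
    union = dis_truths.geometry.union(dis_preds.geometry).area
    return inter / union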

Run GisCOCOeval, which converts the georeferenced vector files into COCO annotations and computes the metrics.
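
The core of the conversion is mapping world coordinates to pixel coordinates with the raster's affine transform; a rough, illustrative sketch (the actual conversion is handled inside GisCOCOeval):

Code
import rasterio

def poly_to_coco_segm(poly, raster_path):
    """Convert a georeferenced polygon to a COCO segmentation list."""
    with rasterio.open(raster_path) as src:
        inv = ~src.transform  # world -> pixel transform
    coords = []
    for x, y in poly.exterior.coords:
        col, row = inv * (x, y)
        coords.extend([float(col), float(row)])
    return [coords]  # COCO expects [[x1, y1, x2, y2, ...]]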

Code
deadwood_categories = [{'supercategory': 'deadwood', 'id': 1, 'name': 'uprightwood'},
                       {'supercategory': 'deadwood', 'id': 2, 'name': 'groundwood'}]

raw_coco_eval = GisCOCOeval(raw_res_path, raw_res_path, 
                            None, None, deadwood_categories)
Code
raw_coco_eval.prepare_data(gt_label_col='layer')
Code
raw_coco_eval.prepare_eval()
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.03s)
creating index...
index created!

As the scenes can contain more than 1000 annotations, set maxDets to larger values than the default.

Code
raw_coco_eval.coco_eval.params.maxDets = [1000, 10000]
Code
raw_coco_eval.evaluate()

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.81s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.348
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.677
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.325
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.217
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.396
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.466
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.387
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.497
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=11.59s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.202
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.525
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.104
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.204
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.160
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.342
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.347
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.200
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=12.39s).
Accumulating evaluation results...
DONE (t=0.03s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.275
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.601
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.214
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.210
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.278
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.404
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.367
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.349
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000

The AP50 score is around 0.2 lower than the patch-level results. However, at this processing level the patch edges are not handled in any way.

Get the number of false positives (FP), true positives (TP), and false negatives (FN). Object detection has an infinite number of true negatives, so we are not interested in them.

Code
iou_thresholds = np.arange(0.5, 1.01, 0.05)
fp_cols = [f'FP_{np.round(i, 2)}' for i in iou_thresholds]
tp_cols = [f'TP_{np.round(i, 2)}' for i in iou_thresholds]
tp_truths = truths.copy()
# In these ground truth files the class column is named 'groundwood'
tp_truths.rename(columns={'groundwood':'label'}, inplace=True)
truth_sindex = tp_truths.sindex
fp_preds = preds.copy()
pred_sindex = fp_preds.sindex
# Flag each annotation as TP/FN and each prediction as TP/FP at every IoU threshold
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds[fp_cols] = fp_preds.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                          fp_preds, pred_sindex),
                                            axis=1, result_type='expand')
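
is_true_positive and is_false_positive come from src.postproc_functions. The essence of a single-threshold true-positive check is an IoU test against candidate predictions found via the spatial index; a sketch under that assumption, with illustrative names:

Code
def iou(a, b):
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0

def is_tp_at(row, preds, sindex, thresh=0.5):
    """True if any same-class prediction overlaps the annotation with IoU >= thresh."""
    candidates = preds.iloc[sindex.query(row.geometry)]
    candidates = candidates[candidates.label == row.label]
    return any(iou(row.geometry, g) >= thresh for g in candidates.geometry)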
Code
pd.crosstab(fp_preds.layer, fp_preds['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   1837   953  2790
uprightwood   382   272   654
All          2219  1225  3444
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5        FN    TP   All
layer
groundwood   448   953  1401
uprightwood   68   272   340
All          516  1225  1741

From these we can get both precision and recall: \(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(953/2790):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(953/1401):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.34
Recall for fallen deadwood with IoU threshold of 0.5 is 0.68
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(272/654):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(272/340):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.42
Recall for standing deadwood with IoU threshold of 0.5 is 0.80
Code
print(f'Overall precision with IoU threshold of 0.5 is {(1225/3444):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(1225/1741):.2f}')
Overall precision with IoU threshold of 0.5 is 0.36
Overall recall with IoU threshold of 0.5 is 0.70
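
Instead of hardcoding the counts, the same numbers can be derived from the crosstabs, assuming the FP_0.5 and TP_0.5 columns hold the string values shown in the tables above:

Code
ct_preds = pd.crosstab(fp_preds.layer, fp_preds['FP_0.5'])
ct_truths = pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'])
precision = ct_preds['TP'] / (ct_preds['TP'] + ct_preds['FP'])
recall = ct_truths['TP'] / (ct_truths['TP'] + ct_truths['FN'])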

1.2 Half patch overlap and edge filtering

For this post-processing method, the mosaics are tiled so that the sliding window moves half a tile length at a time. For example, when moving row-wise, the bottom-left coordinates of the first tile are (0,0), the next ones (256,0), (512,0) and so on, and the same is done column-wise. We discard all predicted polygons whose centroid does not lie within the central half-overlap area of the tile: for the first tile (bottom-left at (0,0)) the centroid x-coordinate must be between 128 and 384, for the second tile (256,0) between 384 and 640, and likewise for the y-coordinates. This method discards almost 75% of all predictions in the scenes, as they are either overlapping or cut in half at the patch borders.

The images used for the predictions are buffered so that the whole area is still covered after the discarding process.
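
A minimal sketch of the centroid-based edge filtering described above, for a single tile whose bottom-left corner is at (x0, y0) in pixel coordinates; the names are illustrative, and the actual filtering is handled within the prediction pipeline.

Code
def filter_edge_preds(preds, x0, y0, tile_size=512):
    """Keep only predictions whose centroid lies in the central half of the tile."""
    q = tile_size // 4  # 128 px for a 512 px tile
    cents = preds.geometry.centroid
    mask = (cents.x.between(x0 + q, x0 + 3*q)
            & cents.y.between(y0 + q, y0 + 3*q))
    return preds[mask]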

Code
pred_outpath = Path('../results/hp_overlap_filter_new/')
if not os.path.exists(pred_outpath):
    shutil.copytree('../results/template_folder/', pred_outpath, symlinks=True)
Code
raw_path = Path('../../data/raw/hiidenportti/virtual_plots/buffered_test/images')
test_rasters = [raw_path/f for f in os.listdir(raw_path) if f.endswith('tif')]

for t in test_rasters:
    outfile_name = pred_outpath/f'raw_preds/{t.stem}.geojson'
    predict_instance_masks(path_to_model_files='../models/hiidenportti/mask_rcnn_R_101_FPN_3x/', 
                           path_to_image=str(t),
                           outfile=str(outfile_name),
                           processing_dir='temp',
                           tile_size=512,
                           tile_overlap=256,
                           smooth_preds=False,
                           use_tta=True,
                           coco_set='../../data/processed/hiidenportti/hiidenportti_valid.json',
                           postproc_results=True)

Modify as previously.

Code
hp_res_path = pred_outpath
truth_shps = sorted([hp_res_path/'vector_tiles'/f for f in os.listdir(hp_res_path/'vector_tiles')])
hp_raw_shps = sorted([hp_res_path/'raw_preds'/f for f in os.listdir(hp_res_path/'raw_preds')])
rasters = sorted([hp_res_path/'raster_tiles'/f for f in os.listdir(hp_res_path/'raster_tiles')])
Code
for p, t in zip(hp_raw_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    temp_pred = gpd.clip(temp_pred, box(*temp_truth.total_bounds))

    temp_pred['geometry'] = temp_pred.apply(lambda row: fix_multipolys(row.geometry) 
                                            if row.geometry.type == 'MultiPolygon' 
                                            else shapely.geometry.Polygon(row.geometry.exterior), axis=1)
    temp_pred['label'] += 1
    temp_pred = temp_pred[temp_pred.geometry.area > 16*0.04**2]
    temp_pred.to_file(hp_res_path/'predicted_vectors'/p.name)
Code
pred_shps = sorted([hp_res_path/'predicted_vectors'/f for f in os.listdir(hp_res_path/'predicted_vectors')])

Collate all predictions into single dataframes.

Code
truths = None
preds = None

for p, t in zip(pred_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    if truths is None:
        truths = temp_truth
        preds = temp_pred
    else:
        truths = pd.concat((truths, temp_truth))
        preds = pd.concat((preds, temp_pred))
Code
preds['layer'] = preds.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)

The total number of predictions after cleaning is similar to before.

Code
preds.layer.value_counts()
groundwood     2698
uprightwood     619
Name: layer, dtype: int64
Code
dis_truths = truths.dissolve(by='layer')
dis_preds = preds.dissolve(by='layer')

However, the IoU increases, especially for standing deadwood.

Code
poly_IoU(dis_truths, dis_preds)
layer
groundwood     0.487089
uprightwood    0.503599
dtype: float64
Code
deadwood_categories = [{'supercategory': 'deadwood', 'id': 1, 'name': 'uprightwood'},
                       {'supercategory': 'deadwood', 'id': 2, 'name': 'groundwood'}]

hp_coco_eval = GisCOCOeval(hp_res_path, hp_res_path, 
                            None, None, deadwood_categories)
Code
hp_coco_eval.prepare_data(gt_label_col='layer')
Code
hp_coco_eval.prepare_eval()
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.03s)
creating index...
index created!
Code
hp_coco_eval.coco_eval.params.maxDets = [1000, 10000]
Code
hp_coco_eval.evaluate()

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.76s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.447
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.780
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.479
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.296
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.505
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.545
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.428
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.591
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=12.06s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.266
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.652
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.145
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.269
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.211
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.389
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.394
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.280
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=12.87s).
Accumulating evaluation results...
DONE (t=0.03s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.356
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.716
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.312
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.282
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.358
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.467
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.411
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.436
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000

Overall AP50 increases by around 0.1 with this kind of post-processing.

Code
tp_truths = truths.copy()
tp_truths.rename(columns={'groundwood':'label'}, inplace=True)
truth_sindex = tp_truths.sindex
fp_preds = preds.copy()
pred_sindex = fp_preds.sindex
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds[fp_cols] = fp_preds.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                            fp_preds, pred_sindex),
                                            axis=1, result_type='expand')
Code
pd.crosstab(fp_preds.layer, fp_preds['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   1636  1062  2698
uprightwood   318   301   619
All          1954  1363  3317
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5        FN    TP   All
layer
groundwood   339  1062  1401
uprightwood   39   301   340
All          378  1363  1741

\(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(1062/2698):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(1062/1401):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.39
Recall for fallen deadwood with IoU threshold of 0.5 is 0.76
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(301/619):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(301/340):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.49
Recall for standing deadwood with IoU threshold of 0.5 is 0.89
Code
print(f'Overall precision with IoU threshold of 0.5 is {(1363/3317):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(1363/1741):.2f}')
Overall precision with IoU threshold of 0.5 is 0.41
Overall recall with IoU threshold of 0.5 is 0.78

1.3 Overlap, edge filtering and mask merging

Mask merging builds on the previous predictions. In this step, we check for each polygon whether the ratio between its intersection with any other polygon of the same class and its own area exceeds 0.2. If it does, the polygon is merged into the other polygon with which the intersection-over-area ratio exceeds the threshold.
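
merge_polys comes from src.postproc_functions; a simplified sketch of one merging pass implementing the intersection-over-area rule described above (illustrative, not the project's implementation):

Code
def merge_pass(gdf, threshold=0.2):
    """Merge each polygon into an overlapping same-class polygon when
    intersection / own area exceeds the threshold."""
    gdf = gdf.reset_index(drop=True)
    sindex = gdf.sindex
    geoms = list(gdf.geometry)
    merged_away = set()
    for i, geom in enumerate(geoms):
        if i in merged_away:
            continue
        for j in sindex.query(geom):
            j = int(j)
            if j == i or j in merged_away:
                continue
            inter = geom.intersection(geoms[j]).area
            if geom.area > 0 and inter / geom.area > threshold:
                geoms[j] = geoms[j].union(geom)  # absorb polygon i into j
                merged_away.add(i)
                break
    keep = [k for k in range(len(geoms)) if k not in merged_away]
    out = gdf.iloc[keep].copy()
    out['geometry'] = [geoms[k] for k in keep]
    return out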

Code
merge_outpath = Path('../results/hp_merge_new/')
if not os.path.exists(merge_outpath):
    shutil.copytree('../results/template_folder/', merge_outpath, symlinks=True)

Two iterations of merging are usually enough.

Code
for r in pred_shps:
    gdf_temp = gpd.read_file(r)
    standing = gdf_temp[gdf_temp.label==1].copy()
    fallen = gdf_temp[gdf_temp.label==2].copy()
    standing = merge_polys(standing, 0.2)
    fallen = merge_polys(fallen, 0.2)
    standing = merge_polys(standing, 0.2)
    fallen = merge_polys(fallen, 0.2)
    gdf_merged = pd.concat((standing, fallen))
    gdf_merged.to_file(merge_outpath/'predicted_vectors'/r.name, driver='GeoJSON')
    gdf_merged = None
    gdf_temp = None
Code
merge_outpath
Path('../results/hp_merge_new')
Code
merged_coco_eval = GisCOCOeval(merge_outpath, merge_outpath, None, None, deadwood_categories)
merged_coco_eval.prepare_data(gt_label_col='layer')
merged_coco_eval.prepare_eval()
merged_coco_eval.evaluate()
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.02s)
creating index...
index created!

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.68s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.436
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.761
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.465
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.288
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.495
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.399
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.246
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.460
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=9.05s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.246
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.605
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.130
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.248
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.207
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.151
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.153
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.082
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=9.90s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.341
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.683
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.297
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.268
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.351
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.275
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.200
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.271
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000

This usually worsens the metrics a bit, but the produced results are better suited for deriving forest characteristics, as the number of overlapping detected instances decreases significantly.

Code
preds.reset_index(drop=True, inplace=True)
standing = merge_polys(preds[preds.label == 1].copy(), 0.2)
fallen = merge_polys(preds[preds.label == 2].copy(), 0.2)
standing = merge_polys(standing, 0.2)
fallen = merge_polys(fallen, 0.2)
preds_merged = pd.concat((standing, fallen))
619it [00:02, 250.70it/s]
2698it [00:23, 113.43it/s]
574it [00:02, 284.39it/s]
2074it [00:13, 155.47it/s]
Code
preds_merged['layer'] = preds_merged.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)
preds_merged.layer.value_counts()
groundwood     2003
uprightwood     564
Name: layer, dtype: int64
Code
tp_truths = truths.copy()
tp_truths.rename(columns={'groundwood':'label'}, inplace=True)
truth_sindex = tp_truths.sindex
fp_preds_merged = preds_merged.copy()
pred_sindex = fp_preds_merged.sindex
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds_merged, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds_merged[fp_cols] = fp_preds_merged.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                            fp_preds_merged, pred_sindex),
                                            axis=1, result_type='expand')
Code
pd.crosstab(fp_preds_merged.layer, fp_preds_merged['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   1036   967  2003
uprightwood   271   293   564
All          1307  1260  2567
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5        FN    TP   All
layer
groundwood   434   967  1401
uprightwood   47   293   340
All          481  1260  1741

\(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(967/2003):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(967/1401):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.48
Recall for fallen deadwood with IoU threshold of 0.5 is 0.69
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(293/564):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(293/340):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.52
Recall for standing deadwood with IoU threshold of 0.5 is 0.86
Code
print(f'Overall precision with IoU threshold of 0.5 is {(1260/2567):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(1260/1741):.2f}')
Overall precision with IoU threshold of 0.5 is 0.49
Overall recall with IoU threshold of 0.5 is 0.72

As the post-processing merges predictions instead of dropping the less certain ones, the total area and IoU remain the same as in the previous step.

Code
preds_merged.to_file('../results/hiidenportti/merged_all_20220823.geojson')

2 Full Evo dataset

Running the predictions for the full Evo dataset takes so long that it has been done separately.

2.1 No overlap, no post-processing

Code
spk_raw_res_path = Path('../results/spk_benchmark/r101_nobuf/')
truth_shps = sorted([spk_raw_res_path/'vector_tiles'/f for f in os.listdir(spk_raw_res_path/'vector_tiles')])
spk_raw_shps = sorted([spk_raw_res_path/'raw_preds'/f for f in os.listdir(spk_raw_res_path/'raw_preds')])
rasters = sorted([spk_raw_res_path/'raster_tiles'/f for f in os.listdir(spk_raw_res_path/'raster_tiles')])
Code
for p, t in zip(spk_raw_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    temp_pred = gpd.clip(temp_pred, box(*temp_truth.total_bounds))
    temp_pred['geometry'] = temp_pred.apply(lambda row: fix_multipolys(row.geometry) 
                                            if row.geometry.type == 'MultiPolygon' 
                                            else shapely.geometry.Polygon(row.geometry.exterior), axis=1)
    temp_pred['label'] += 1
    # Evo imagery has a 0.0485 m resolution, so the 16-pixel minimum area differs from Hiidenportti
    temp_pred = temp_pred[temp_pred.geometry.area > 16*0.0485**2]
    temp_pred.to_file(spk_raw_res_path/'predicted_vectors'/p.name)
Code
pred_shps = sorted([spk_raw_res_path/'predicted_vectors'/f for f in os.listdir(spk_raw_res_path/'predicted_vectors')])
Code
truths = None
preds = None

for p, t in zip(pred_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    if truths is None:
        truths = temp_truth
        preds = temp_pred
    else:
        truths = pd.concat((truths, temp_truth))
        preds = pd.concat((preds, temp_pred))
Code
preds['layer'] = preds.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)
Code
preds.shape, truths.shape
((6893, 4), (5334, 4))
Code
preds.layer.value_counts()
groundwood     5481
uprightwood    1412
Name: layer, dtype: int64
Code
truths.rename(columns={'label':'layer'}, inplace=True)
Code
dis_truths = truths.dissolve(by='layer')
dis_preds = preds.dissolve(by='layer')

The IoU for standing deadwood is already good at this level of processing.

Code
poly_IoU(dis_truths, dis_preds)
layer
groundwood     0.467202
uprightwood    0.605853
dtype: float64
Code
deadwood_categories = [{'supercategory': 'deadwood', 'id': 1, 'name': 'uprightwood'},
                       {'supercategory': 'deadwood', 'id': 2, 'name': 'groundwood'}]

spk_raw_coco_eval = GisCOCOeval(spk_raw_res_path, spk_raw_res_path, 
                           None, None, deadwood_categories)
spk_raw_coco_eval.prepare_data(gt_label_col='label')
spk_raw_coco_eval.prepare_eval()
spk_raw_coco_eval.coco_eval.params.maxDets = [1000, 10000]
spk_raw_coco_eval.evaluate()
loading annotations into memory...
Done (t=0.19s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.06s)
creating index...
index created!

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.81s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.249
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.518
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.224
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.112
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.326
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.279
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.353
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.233
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.428
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.436

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=10.17s).
Accumulating evaluation results...
DONE (t=0.05s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.129
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.382
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.049
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.128
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.174
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.233
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.234
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.215
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=10.88s).
Accumulating evaluation results...
DONE (t=0.07s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.189
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.450
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.136
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.120
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.250
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.279
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.293
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.233
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.321
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.436
Code
tp_truths = truths.copy()
tp_truths['label'] = tp_truths.layer.apply(lambda row: 1 if row=='uprightwood' else 2)
truth_sindex = tp_truths.sindex
fp_preds = preds.copy()
pred_sindex = fp_preds.sindex
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds[fp_cols] = fp_preds.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                            fp_preds, pred_sindex),
                                            axis=1, result_type='expand')
Code
pd.crosstab(fp_preds.layer, fp_preds['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   3426  2055  5481
uprightwood   529   883  1412
All          3955  2938  6893
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5         FN    TP   All
layer
groundwood   1860  2055  3915
uprightwood   536   883  1419
All          2396  2938  5334

\(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(2055/5481):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(2055/3915):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.37
Recall for fallen deadwood with IoU threshold of 0.5 is 0.52
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(883/1412):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(883/1419):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.63
Recall for standing deadwood with IoU threshold of 0.5 is 0.62
Code
print(f'Overall precision with IoU threshold of 0.5 is {(2938/6893):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(2938/5334):.2f}')
Overall precision with IoU threshold of 0.5 is 0.43
Overall recall with IoU threshold of 0.5 is 0.55

2.2 Half patch overlap and edge filtering

Code
spk_res_path = Path('../results/spk_benchmark/r101/')
truth_shps = sorted([spk_res_path/'vector_tiles'/f for f in os.listdir(spk_res_path/'vector_tiles')])
spk_buf_raw_shps = sorted([spk_res_path/'raw_preds'/f for f in os.listdir(spk_res_path/'raw_preds')])
rasters = sorted([spk_res_path/'raster_tiles'/f for f in os.listdir(spk_res_path/'raster_tiles')])
Code
for p, t in zip(spk_buf_raw_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    temp_pred = gpd.clip(temp_pred, box(*temp_truth.total_bounds))
    temp_pred['geometry'] = temp_pred.apply(lambda row: fix_multipolys(row.geometry) 
                                            if row.geometry.type == 'MultiPolygon' 
                                            else shapely.geometry.Polygon(row.geometry.exterior), axis=1)
    temp_pred['label'] += 1
    temp_pred = temp_pred[temp_pred.geometry.area > 16*0.0485**2]
    temp_pred.to_file(spk_res_path/'predicted_vectors'/p.name)
Code
pred_shps = sorted([spk_res_path/'predicted_vectors'/f for f in os.listdir(spk_res_path/'predicted_vectors')])
Code
truths = None
preds = None

for p, t in zip(pred_shps, truth_shps):
    temp_pred = gpd.read_file(p)
    temp_truth = gpd.read_file(t)
    if truths is None:
        truths = temp_truth
        preds = temp_pred
    else:
        truths = pd.concat((truths, temp_truth))
        preds = pd.concat((preds, temp_pred))
Code
preds['layer'] = preds.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)
Code
preds.shape, truths.shape
((6511, 4), (5334, 4))
Code
preds.layer.value_counts()
groundwood     5204
uprightwood    1307
Name: layer, dtype: int64
Code
truths.rename(columns={'label':'layer'}, inplace=True)
Code
dis_truths = truths.dissolve(by='layer')
dis_preds = preds.dissolve(by='layer')
Code
poly_IoU(dis_truths, dis_preds)
layer
groundwood     0.469782
uprightwood    0.618346
dtype: float64
Code
deadwood_categories = [{'supercategory': 'deadwood', 'id': 1, 'name': 'uprightwood'},
                       {'supercategory': 'deadwood', 'id': 2, 'name': 'groundwood'}]

spk_coco_eval = GisCOCOeval(spk_res_path, spk_res_path, 
                           None, None, deadwood_categories)
spk_coco_eval.prepare_data(gt_label_col='label')
spk_coco_eval.prepare_eval()
spk_coco_eval.coco_eval.params.maxDets = [1000, 10000]
spk_coco_eval.evaluate()
loading annotations into memory...
Done (t=0.19s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.05s)
creating index...
index created!

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.73s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.321
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.591
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.331
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.145
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.422
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.362
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.398
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.237
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.495
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.657

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=9.72s).
Accumulating evaluation results...
DONE (t=0.05s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.158
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.438
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.063
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.155
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.233
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.252
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.251
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.284
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=10.63s).
Accumulating evaluation results...
DONE (t=0.07s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=10000 ] = 0.239
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=10000 ] = 0.514
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=10000 ] = 0.197
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.150
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.327
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.362
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.325
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.244
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.390
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.657

The overall AP50 score is close to the patch-level results here.

Code
tp_truths = truths.copy()
tp_truths['label'] = tp_truths.layer.apply(lambda row: 1 if row=='uprightwood' else 2)
truth_sindex = tp_truths.sindex
fp_preds = preds.copy()
pred_sindex = fp_preds.sindex
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds[fp_cols] = fp_preds.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                            fp_preds, pred_sindex),
                                            axis=1, result_type='expand')
Code
pd.crosstab(fp_preds.layer, fp_preds['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   3034  2170  5204
uprightwood   378   929  1307
All          3412  3099  6511
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5         FN    TP   All
layer
groundwood   1745  2170  3915
uprightwood   489   930  1419
All          2234  3100  5334

\(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(2170/5204):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(2170/3915):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.42
Recall for fallen deadwood with IoU threshold of 0.5 is 0.55
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(929/1307):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(930/1419):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.71
Recall for standing deadwood with IoU threshold of 0.5 is 0.66
Code
print(f'Overall precision with IoU threshold of 0.5 is {(3099/6511):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(3100/5334):.2f}')
Overall precision with IoU threshold of 0.5 is 0.48
Overall recall with IoU threshold of 0.5 is 0.58

2.3 Overlap, edge filtering and mask merging

Code
merge_outpath = Path('../results/spk_benchmark/r101_merge/')
if not os.path.exists(merge_outpath):
    shutil.copytree('../results/spk_template/', merge_outpath, symlinks=True)

Two iterations of merging are usually enough.

Code
for r in pred_shps:
    gdf_temp = gpd.read_file(r)
    standing = gdf_temp[gdf_temp.label==1].copy()
    fallen = gdf_temp[gdf_temp.label==2].copy()
    # Some scenes contain no predictions for a class, so guard against empty frames
    if len(standing) > 0:
        standing = merge_polys(standing, 0.2)
        standing = merge_polys(standing, 0.2)
    if len(fallen) > 0:
        fallen = merge_polys(fallen, 0.2)
        fallen = merge_polys(fallen, 0.2)
    gdf_merged = pd.concat((standing, fallen))
    gdf_merged.to_file(merge_outpath/'predicted_vectors'/r.name, driver='GeoJSON')
    gdf_merged = None
    gdf_temp = None
Code
merged_coco_eval = GisCOCOeval(merge_outpath, merge_outpath, None, None, deadwood_categories)
merged_coco_eval.prepare_data(gt_label_col='label')
merged_coco_eval.prepare_eval()
merged_coco_eval.evaluate()
loading annotations into memory...
Done (t=0.20s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.31s)
creating index...
index created!

Evaluating for category uprightwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.68s).
Accumulating evaluation results...
DONE (t=0.02s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.311
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.572
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.321
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.142
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.408
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.420
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.378
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.228
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.469
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.636

Evaluating for category groundwood
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=7.82s).
Accumulating evaluation results...
DONE (t=0.04s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.160
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.450
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.064
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.157
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.258
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.205
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.203
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.275
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=8.47s).
Accumulating evaluation results...
DONE (t=0.06s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.236
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.511
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.192
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.149
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.333
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.420
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.292
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.215
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.372
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.636
Code
preds.reset_index(drop=True, inplace=True)
standing = merge_polys(preds[preds.label == 1].copy(), 0.2)
fallen = merge_polys(preds[preds.label == 2].copy(), 0.2)
standing = merge_polys(standing, 0.2)
fallen = merge_polys(fallen, 0.2)
preds_merged = pd.concat((standing, fallen))
Code
preds_merged['layer'] = preds_merged.apply(lambda row: 'groundwood' if row.label == 2 else 'uprightwood', axis=1)
preds_merged.layer.value_counts()
groundwood     4130
uprightwood    1190
Name: layer, dtype: int64
Code
tp_truths = truths.copy()
tp_truths['label'] = tp_truths.layer.apply(lambda row: 1 if row=='uprightwood' else 2)
truth_sindex = tp_truths.sindex
fp_preds_merged = preds_merged.copy()
pred_sindex = fp_preds_merged.sindex
tp_truths[tp_cols] = tp_truths.progress_apply(lambda row: is_true_positive(row, fp_preds_merged, pred_sindex), 
                                              axis=1, result_type='expand')
fp_preds_merged[fp_cols] = fp_preds_merged.progress_apply(lambda row: is_false_positive(row, tp_truths, truth_sindex,
                                                                            fp_preds_merged, pred_sindex),
                                            axis=1, result_type='expand')
Code
pd.crosstab(fp_preds_merged.layer, fp_preds_merged['FP_0.5'], margins=True)
FP_0.5         FP    TP   All
layer
groundwood   2014  2116  4130
uprightwood   306   884  1190
All          2320  3000  5320
Code
pd.crosstab(tp_truths.layer, tp_truths['TP_0.5'], margins=True)
TP_0.5         FN    TP   All
layer
groundwood   1800  2115  3915
uprightwood   534   885  1419
All          2334  3000  5334

\(Precision = \frac{tp}{tp+fp}, Recall = \frac{tp}{tp+fn}\)

Code
print(f'Precision for fallen deadwood with IoU threshold of 0.5 is {(2116/4130):.2f}')
print(f'Recall for fallen deadwood with IoU threshold of 0.5 is {(2115/3915):.2f}')
Precision for fallen deadwood with IoU threshold of 0.5 is 0.51
Recall for fallen deadwood with IoU threshold of 0.5 is 0.54
Code
print(f'Precision for standing deadwood with IoU threshold of 0.5 is {(884/1190):.2f}')
print(f'Recall for standing deadwood with IoU threshold of 0.5 is {(885/1419):.2f}')
Precision for standing deadwood with IoU threshold of 0.5 is 0.74
Recall for standing deadwood with IoU threshold of 0.5 is 0.62
Code
print(f'Overall precision with IoU threshold of 0.5 is {(3000/5320):.2f}')
print(f'Overall recall with IoU threshold of 0.5 is {(3000/5334):.2f}')
Overall precision with IoU threshold of 0.5 is 0.56
Overall recall with IoU threshold of 0.5 is 0.56