Automated deadwood detection from UAV RGB imagery

Running models with your own data

Author

Janne Mäyrä

Published

December 22, 2022

All model configurations and weights presented here are fine-tuned from models available in the Detectron2 Model Zoo.

The models are trained on 512x512 px RGB image patches with a spatial resolution between 3.9 cm and 4.3 cm, using hand-annotated deadwood data based on visual inspection. The training dataset is located in the vicinity of Hiidenportti National Park, Sotkamo, Finland, and the images were acquired during the leaf-on season, on 16 and 17 July 2019. The models are therefore most likely suitable for imagery acquired during the leaf-on season with a similar ground sampling distance.

1 Available models and results

Model configs and weights are available here. Note that the WEIGHTS and OUTPUT_DIR config values need to be changed according to your needs. These models are trained only with the Hiidenportti dataset. An example app running R101 without TTA for image patches can be found here.

Patch-level data are non-overlapping 512x512 pixel tiles extracted from larger virtual plots. The results presented here were run with test-time augmentation.

Scene-level data are the full virtual plots extracted from the full images. For Hiidenportti, the virtual plot sizes vary between 2560x2560 px and 8192x4864 px. These patches also contain non-annotated buffer areas in order to extract the complete annotated area. For Sudenpesänkangas, all 71 scenes are 100x100 meters (2063x2062 pixels), and during inference they are extracted from the full mosaic with enough buffer to cover the full area. The results presented here are run for 512x512 pixel tiles with 256 px overlap, with both the edge filtering and mask merging described in the workflow.
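The edge filtering mentioned above can be sketched roughly as follows. This is an illustrative stand-alone function, not the actual workflow code, and the 128 px margin is an assumed value: with overlapping tiles, a detection whose bounding box lies entirely in the outer edge region of a tile can be discarded, since a neighboring tile sees it more completely.

```python
def in_edge_region(bbox, tile_size=512, margin=128):
    """True if a detection bbox lies entirely within `margin` px of a tile edge.

    bbox is (xmin, ymin, xmax, ymax) in tile-local pixel coordinates.
    """
    xmin, ymin, xmax, ymax = bbox
    inner_lo, inner_hi = margin, tile_size - margin
    # The box is an edge detection if it never reaches the interior region
    return xmax <= inner_lo or xmin >= inner_hi or ymax <= inner_lo or ymin >= inner_hi

# A box hugging the left edge is filtered; one in the tile center is kept
edge_box = (0, 200, 100, 300)
center_box = (200, 200, 320, 320)
```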

1.1 Hiidenportti test set

Hiidenportti test set contains 241 non-overlapping 512x512 pixel image patches, extracted from 5 scenes that cover 11 circular field plots.

| Model | Patch AP50 | Patch AP | Patch AP-groundwood | Patch AP-uprightwood | Scene AP50 | Scene AP | Scene AP-groundwood | Scene AP-uprightwood |
|---|---|---|---|---|---|---|---|---|
| mask_rcnn_R_50_FPN_3x | 0.654 | 0.339 | 0.316 | 0.363 | 0.640 | 0.315 | 0.235 | 0.396 |
| mask_rcnn_R_101_FPN_3x | 0.704 | 0.366 | 0.326 | 0.406 | 0.683 | 0.341 | 0.246 | 0.436 |
| mask_rcnn_X_101_32x8d_FPN_3x | 0.679 | 0.355 | 0.333 | 0.377 | 0.661 | 0.333 | 0.255 | 0.412 |
| cascade_mask_rcnn_R_50_FPN_3x | 0.652 | 0.345 | 0.306 | 0.384 | 0.623 | 0.317 | 0.223 | 0.411 |

1.2 Sudenpesänkangas dataset

Sudenpesänkangas dataset contains 798 non-overlapping 512x512 pixel image patches, extracted from 71 scenes.

| Model | Patch AP50 | Patch AP | Patch AP-groundwood | Patch AP-uprightwood | Scene AP50 | Scene AP | Scene AP-groundwood | Scene AP-uprightwood |
|---|---|---|---|---|---|---|---|---|
| mask_rcnn_R_50_FPN_3x | 0.486 | 0.237 | 0.175 | 0.299 | 0.474 | 0.221 | 0.152 | 0.290 |
| mask_rcnn_R_101_FPN_3x | 0.519 | 0.252 | 0.183 | 0.321 | 0.511 | 0.236 | 0.160 | 0.311 |
| mask_rcnn_X_101_32x8d_FPN_3x | 0.502 | 0.245 | 0.182 | 0.307 | 0.494 | 0.232 | 0.159 | 0.305 |
| cascade_mask_rcnn_R_50_FPN_3x | 0.497 | 0.248 | 0.172 | 0.323 | 0.473 | 0.225 | 0.148 | 0.302 |

2 Running the models

2.1 Running models for image patches

For individual image patches, the models are fairly straightforward to run.

from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
import cv2

cfg = get_cfg()
cfg.merge_from_file(<path_to_model_config>)
cfg.OUTPUT_DIR = '<path_to_output>'
cfg.MODEL.WEIGHTS = '<path_to_weights>'
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # score threshold for detections
predictor = DefaultPredictor(cfg)

img = cv2.imread('<path_to_image_patch>') # BGR image, as DefaultPredictor expects by default
outputs = predictor(img)
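To illustrate what `SCORE_THRESH_TEST` does, here is a minimal pure-Python sketch of filtering detections by confidence score. The `(class_id, score)` tuples and the helper function are made up for illustration and are not part of Detectron2:

```python
# Hypothetical (class_id, score) pairs, mimicking scored detections
detections = [(1, 0.92), (2, 0.47), (1, 0.55), (2, 0.73)]

def filter_by_score(dets, thresh=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in dets if d[1] >= thresh]

kept = filter_by_score(detections, thresh=0.5)  # the 0.47 detection is discarded
```

Raising the threshold trades recall for precision; 0.5 is the value used in the snippet above.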

More examples are shown on Patch level results.

2.2 Running models for larger scenes

Running on larger scenes requires the following steps:

  1. Tiling the scenes into smaller image patches, optionally with overlap
  2. Running the model on these smaller patches
  3. Gathering the predictions into a single GIS data file
  4. Optionally post-processing the results
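Step 1 above can be sketched as follows. This is an illustrative stand-alone implementation, not the actual drone_detector tiling code: it computes the top-left corners of square tiles with a fixed overlap, clamping the final row and column to the scene edge so the whole scene is covered.

```python
def tile_origins(width, height, tile_size=512, overlap=256):
    """Top-left (x, y) corners of overlapping square tiles covering a scene."""
    stride = tile_size - overlap
    xs = list(range(0, max(width - tile_size, 0) + 1, stride))
    ys = list(range(0, max(height - tile_size, 0) + 1, stride))
    # Clamp a final tile to the edge so no part of the scene is missed
    if xs[-1] + tile_size < width:
        xs.append(width - tile_size)
    if ys[-1] + tile_size < height:
        ys.append(height - tile_size)
    return [(x, y) for y in ys for x in xs]

origins = tile_origins(1024, 768, tile_size=512, overlap=256)
```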

The drone_detector package has helpers for this:

from drone_detector.engines.detectron2.predict import predict_instance_masks

predict_instance_masks(path_to_model_config='<path_to_model_config>', # model config file
                       path_to_image='<path_to_image>', # which image to process
                       outfile='<name_for_predictions>.geojson', # where to save the results
                       processing_dir='temp', # directory for temporary files, deleted afterwards. Default: temp
                       tile_size=512, # image patch size in pixels, square patches. Default: 400
                       tile_overlap=256, # overlap between tiles in pixels. Default: 100
                       smooth_preds=False, # not yet implemented; will at some point run dilation+erosion to smooth polygons. Default: False
                       coco_set='<path_to_coco>', # the COCO set the model was trained on, used to infer the class names. If empty, defaults to dummy categories. Default: None
                       postproc_results=True # whether to discard masks in the edge regions of patches. Default: False
                      )

Also, after installing the package, predict_instance_masks_detectron2 can be used as a CLI command with identical syntax.

When no COCO set is provided, the models default to dummy classes, where 1 is standing deadwood and 2 is fallen deadwood.
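As an illustration, the dummy class ids can be mapped to readable labels and tallied like this. The label names follow the dummy class description above; the summarizing helper itself is hypothetical and not part of drone_detector:

```python
# Dummy class ids used when no COCO set is provided (per the text above)
DUMMY_CLASSES = {1: 'standing deadwood', 2: 'fallen deadwood'}

def summarize_classes(class_ids):
    """Count predictions per readable class name."""
    counts = {}
    for cid in class_ids:
        name = DUMMY_CLASSES.get(cid, f'unknown ({cid})')
        counts[name] = counts.get(name, 0) + 1
    return counts

summary = summarize_classes([1, 2, 2, 1, 2])
```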

More examples are shown on Scene level results.
