Quasi-synthetic data generation for camouflaged object detection at edge

Date

2023-06-13

Citation of Original Publication

Rohan Putatunda, Md Azim Khan, Aryya Gangopadhyay, Jianwu Wang, Robert F. Erbacher, "Quasi-synthetic data generation for camouflaged object detection at edge," Proc. SPIE 12529, Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications, 1252916 (13 June 2023); doi: 10.1117/12.2678034

Rights

This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain Mark 1.0

Abstract

Detecting camouflaged objects is crucial in various applications such as military surveillance, wildlife conservation, and search and rescue operations. However, the limited availability of camouflaged object data poses a significant challenge in developing accurate detection models. This paper proposes a quasi-synthetic data generation methodology that combines image compositing with attention-based deep learning harmonization to generate feature-enriched, realistic images of camouflaged objects under varying scenarios. In our work, we developed a diverse set of images that simulate different environmental conditions, including lighting, shadows, fog, dust, and snow, to test the proposed methodology. The intention of generating such photo-realistic images is to increase the robustness of the model, with the additional benefit of data augmentation for training our camouflaged object detection (COD) model. Furthermore, we evaluate our approach using state-of-the-art object detection models and demonstrate that training with our quasi-synthetic images can significantly improve the detection accuracy of camouflaged objects under varying conditions. Additionally, to test real operational performance, we deployed the trained models on resource-constrained edge devices for real-time object detection and compared the model trained on quasi-synthetic data against one trained on synthetic data generated by a conventional neural style transfer architecture.
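
The abstract's core idea, compositing a masked camouflaged object onto a background scene and then simulating an environmental effect such as fog, can be illustrated with a minimal Python sketch. The file names, blending weights, and the simple grey-haze fog model below are assumptions made for illustration only; the attention-based harmonization network described in the paper is not reproduced here.

# Illustrative sketch: composite a masked object onto a background, then add fog.
# All paths and parameters are hypothetical; this is not the authors' implementation.
import cv2
import numpy as np


def composite(background, foreground, mask, top_left):
    """Alpha-blend a masked foreground object onto the background at top_left (y, x)."""
    out = background.copy()
    h, w = foreground.shape[:2]
    y, x = top_left
    roi = out[y:y + h, x:x + w].astype(np.float32)
    alpha = (mask.astype(np.float32) / 255.0)[..., None]  # HxWx1 weights in [0, 1]
    blended = alpha * foreground.astype(np.float32) + (1.0 - alpha) * roi
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out


def add_fog(image, density=0.4):
    """Approximate fog by blending the image toward a uniform light-grey haze layer."""
    haze = np.full_like(image, 220)
    return cv2.addWeighted(image, 1.0 - density, haze, density, 0.0)


if __name__ == "__main__":
    # Hypothetical input files: a background scene, a cut-out object, and its binary mask.
    bg = cv2.imread("forest_background.jpg")
    fg = cv2.imread("camouflaged_object.png")
    fg_mask = cv2.imread("camouflaged_object_mask.png", cv2.IMREAD_GRAYSCALE)

    scene = composite(bg, fg, fg_mask, top_left=(120, 240))
    foggy_scene = add_fog(scene, density=0.35)
    cv2.imwrite("quasi_synthetic_sample.jpg", foggy_scene)

In the paper's pipeline, such composited images would additionally pass through a learned harmonization step before being used to augment the COD training set.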