Unsupervised approach to color video thresholding
Citation of Original Publication
Du, Eliza Yingzi, Chein-I. Chang, and Paul David Thouin. “Unsupervised Approach to Color Video Thresholding.” Optical Engineering 43, no. 2 (February 2004): 282–89. https://doi.org/10.1117/1.1637364.
Rights
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain
Abstract
Thresholding of video images is a great challenge because of their low spatial resolution and complex backgrounds. We investigate the issue of thresholding these images by reducing the number of colors to improve automated text detection and recognition. We develop an unsupervised approach to video images that can be considered an RGB color thresholding method. It applies a gray-level thresholding method to a video image in the (R, G, B) color space to produce a single threshold value for each domain. The three (R, G, B)-generated values are then processed by an effective unsupervised clustering algorithm based on the between-class/within-class criterion suggested by Otsu's method. Since thresholding methods designed for document images may not work effectively for video images in many applications, the proposed RGB color thresholding method has proved particularly effective at improving text detection and recognition, because it reduces background complexity while retaining the important text character pixels. Experiments also show that thresholding video images is far more difficult than thresholding document images, and that the RGB color thresholding presented here performs significantly better than simple histogram-based methods, which generally do not produce satisfactory results.
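The sketch below illustrates the per-channel thresholding idea described in the abstract, assuming 8-bit RGB input: an Otsu-style gray-level threshold is computed independently for each of the R, G, and B planes, and the three binary decisions are combined into at most eight color classes. The function names `otsu_threshold` and `rgb_threshold` are illustrative, and the paper's subsequent unsupervised clustering of the three generated threshold values is not reproduced here.

```python
import numpy as np

def otsu_threshold(channel):
    """Gray-level Otsu threshold for one 8-bit channel:
    pick the level that maximizes the between-class variance."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                      # class-0 probability
    mu = np.cumsum(prob * np.arange(256))        # cumulative mean
    mu_total = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                   # guard against empty classes
    sigma_b = (mu_total * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

def rgb_threshold(image):
    """Threshold each (R, G, B) plane independently, then encode the
    three binary decisions per pixel as a label in 0..7."""
    thresholds = [otsu_threshold(image[..., c]) for c in range(3)]
    planes = [image[..., c] > thresholds[c] for c in range(3)]
    labels = (planes[0].astype(np.uint8) * 4
              + planes[1].astype(np.uint8) * 2
              + planes[2].astype(np.uint8))
    return thresholds, labels
```

In this reading, color reduction falls out of the per-channel decisions: every pixel maps to one of eight representative color classes, which keeps high-contrast text pixels intact while flattening complex backgrounds.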
