A Dispersion Measure Based Ordinal Ranking Rule For Determining The Top-Performing Gaussian Mixture Models For Language Recognition
Type of Work: Text
Program: Doctor of Engineering
Rights: This item is made available by Morgan State University for personal, educational, and research purposes in accordance with Title 17 of the U.S. Copyright Law. Other uses may require permission from the copyright owner.
Gaussian mixture models (GMMs) with shifted delta cepstra (SDC) features are known to provide high-performance language recognition. The performance of the models is typically assessed using measurements derived from detection error tradeoff (DET) curves. DET curves require the calculation of recognition scores, which can be costly when hundreds of models have to be evaluated. This dissertation presents a new method for finding the top-performing Gaussian mixture models for language recognition. The method calculates dispersion measures for the models and uses them to order the models from best-performing to worst-performing. We use multiple dispersion measurements to produce multiple rankings of the models, which we combine into a compromise ranking. We show that this compromise ranking is similar to the ranking of the models obtained from performance measures; therefore, it can be used to identify the top-performing models. This method reduces the time and data costs of model testing, since researchers can determine the top-performing models without using traditional testing methods that require the calculation of recognition scores and performance measures for the entire population of models. We demonstrate the ability of our method to find the top-performing models for different data sets and performance measures. We also compare the performance of this method to existing ranking rules: Kohler, Arrow & Raynaud, Borda, and Copeland.
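The abstract does not specify the dissertation's own compromise-ranking rule, but it names the Borda count among the rules compared against. As an illustrative sketch only, the following shows how several dispersion-based rankings of hypothetical models could be combined into one compromise ranking via the Borda count (model names and rankings here are invented for the example):

```python
def borda_aggregate(rankings):
    """Combine several rankings of the same models into one compromise
    ranking via the Borda count: a model in position p of an n-model
    ranking earns (n - p) points, and models are ordered by total points."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, model in enumerate(ranking):
            scores[model] = scores.get(model, 0) + (n - position)
    # Sort by descending total score; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda m: (-scores[m], m))

# Hypothetical example: three dispersion measures each rank four GMMs
# from best-performing to worst-performing.
rankings = [
    ["gmm_a", "gmm_b", "gmm_c", "gmm_d"],
    ["gmm_b", "gmm_a", "gmm_c", "gmm_d"],
    ["gmm_a", "gmm_c", "gmm_b", "gmm_d"],
]
print(borda_aggregate(rankings))  # → ['gmm_a', 'gmm_b', 'gmm_c', 'gmm_d']
```

Here "gmm_a" tops two of the three input rankings and so accumulates the highest total, illustrating how a compromise ranking can agree with the consensus of the individual dispersion-based rankings.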