Tag Archives: Generalized Likelihood Ratio

On quickest change detection when the process is modelled with HMMs

Cheng-Der Fuh and Yajun Mei, "Quickest Change Detection and Kullback-Leibler Divergence for Two-State Hidden Markov Models," IEEE Transactions on Signal Processing, vol. 63, no. 18, pp. 4866-4878, Sept. 2015. DOI: 10.1109/TSP.2015.2447506

In this paper, the quickest change detection problem is studied in two-state hidden Markov models (HMMs), where the vector parameter θ of the HMM changes from θ0 to θ1 at some unknown time, and one wants to detect the change as quickly as possible while controlling the false alarm rate. It turns out that the generalized likelihood ratio (GLR) scheme, while theoretically straightforward, is generally computationally infeasible for HMMs. To develop efficient but computationally simple schemes for HMMs, we first discuss a subtlety in the recursive form of the GLR scheme for the HMM. Then we show that the recursive CUSUM scheme proposed in Fuh (Ann. Statist., 2003) can be regarded as a quasi-GLR scheme for pseudo post-change hypotheses with a certain dependence structure between pre- and post-change observations. Next, we extend the quasi-GLR idea to propose recursive score schemes for the scenario in which the post-change parameter θ1 of the HMM involves a real-valued nuisance parameter.

Finally, the Kullback-Leibler (KL) divergence plays an essential role in the quickest change detection problem and many other fields; however, it is rather challenging to compute numerically for HMMs. Here we develop a non-Monte Carlo method that computes the KL divergence of two-state HMMs via the underlying invariant probability measure, which is characterized by a Fredholm integral equation. A numerical study demonstrates an unusual property of the KL divergence for HMMs that implies severe effects of misspecifying the post-change parameter.
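The infeasibility claim above is easiest to appreciate in the textbook i.i.d. setting, where the GLR statistic already requires re-maximizing over every candidate change point at each new observation. Below is a minimal Python sketch for i.i.d. Gaussian data (pre-change mean 0, unit variance, unknown post-change mean), offered as standard background rather than as the paper's HMM scheme; the per-step maximization it performs is precisely what admits no finite recursion once the likelihood loses its product form under HMM dependence.

    def glr_gaussian(x, threshold):
        """GLR detector for i.i.d. N(0,1) -> N(mu,1) with mu unknown.

        Statistic: max over k <= n of sup_mu of the log-likelihood ratio
        of x[k-1:n]; the supremum is attained at the sample mean, giving
        S_{k:n}**2 / (2 * (n - k + 1)) with S_{k:n} the partial sum.
        """
        csums = [0.0]  # csums[j] = x[0] + ... + x[j-1]
        for n, obs in enumerate(x, start=1):
            csums.append(csums[-1] + obs)
            # O(n) maximization over candidate change points at every step
            stat = max(
                (csums[n] - csums[k - 1]) ** 2 / (2 * (n - k + 1))
                for k in range(1, n + 1)
            )
            if stat >= threshold:
                return n  # alarm time
        return None  # no alarm within the sample

In practice the growing per-step cost is often capped with a sliding window of candidate change points; for HMMs, however, even the inner likelihood evaluation fails to factor over observations, which is the deeper obstruction the paper addresses.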
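By contrast, when both θ0 and θ1 are fully specified, the classical CUSUM statistic admits a one-line recursion; the recursive CUSUM of Fuh (2003), which the paper reinterprets as a quasi-GLR scheme, extends this idea to HMM likelihoods. A minimal i.i.d. sketch, with the Gaussian mean shift and the change time chosen purely as hypothetical examples:

    import numpy as np
    from scipy.stats import norm

    def cusum(observations, f0_logpdf, f1_logpdf, threshold):
        """Classical CUSUM: W_n = max(0, W_{n-1} + LLR_n); alarm when W_n >= threshold."""
        w = 0.0
        for n, x in enumerate(observations, start=1):
            w = max(0.0, w + f1_logpdf(x) - f0_logpdf(x))
            if w >= threshold:
                return n  # alarm time
        return None

    # Hypothetical example: N(0,1) observations shifting to N(1,1) at time 200.
    rng = np.random.default_rng(0)
    xs = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
    print(cusum(xs, norm(0.0, 1.0).logpdf, norm(1.0, 1.0).logpdf, threshold=8.0))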
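On the KL divergence: the paper's contribution is a non-Monte Carlo method, obtained by solving a Fredholm integral equation for the invariant measure of the prediction filter. For orientation, the sketch below shows the plain Monte Carlo baseline such a method replaces: simulate a long path under θ1 and difference the forward-recursion log-likelihoods under the two parameter values. The transition matrix and Gaussian emission means here are hypothetical, with the change confined to one emission mean.

    import numpy as np
    from scipy.stats import norm

    def simulate_hmm(P, means, sigma, n, rng):
        """Draw n observations from a two-state HMM with Gaussian emissions."""
        states = np.empty(n, dtype=int)
        states[0] = rng.integers(2)
        for t in range(1, n):
            states[t] = rng.choice(2, p=P[states[t - 1]])
        return rng.normal(means[states], sigma)

    def hmm_loglik(x, P, means, sigma):
        """Log-likelihood via the scaled forward recursion."""
        alpha = np.full(2, 0.5)  # assumed uniform initial state distribution
        ll = 0.0
        for t, obs in enumerate(x):
            if t > 0:
                alpha = alpha @ P            # one-step state prediction
            alpha = alpha * norm.pdf(obs, means, sigma)
            c = alpha.sum()
            ll += np.log(c)
            alpha = alpha / c                # rescale to avoid underflow
        return ll

    # Hypothetical two-state HMM; theta differs only in the second emission mean.
    P = np.array([[0.9, 0.1], [0.2, 0.8]])
    rng = np.random.default_rng(1)
    n = 100_000
    x = simulate_hmm(P, np.array([0.0, 2.0]), 1.0, n, rng)
    ll1 = hmm_loglik(x, P, np.array([0.0, 2.0]), 1.0)  # data-generating model
    ll0 = hmm_loglik(x, P, np.array([0.0, 1.0]), 1.0)  # misspecified model
    print("Monte Carlo estimate of the KL divergence rate:", (ll1 - ll0) / n)

The estimate converges only at the usual Monte Carlo rate and requires a fresh simulation for every pair (θ0, θ1), which is what makes a deterministic computation via the invariant measure attractive.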