This chapter deals with the extraction of the characteristic time constants from the stochastic capture and emission events of RTN signals. Section 6.1 gives a short description of the basics of HMMs and their relevance for single-charge defects producing RTN. The following sections introduce the most basic case, a simple two-state defect (Section 6.2), followed by more complex cases, namely defects with multiple states (Section 6.3) and systems composed of several arbitrarily shaped defects (Section 6.4). In Section 6.6, different histogram-based methods are discussed which allow the time constants of certain defects to be extracted from their stochastic capture and emission events. The last part of this chapter introduces the Baum-Welch algorithm, a method to train an HMM on a set of observations, which allows the time constants of multiple defects with an arbitrary number of states to be extracted. After discussing the basics of the Baum-Welch algorithm, an implementation of an HMM library (see Appendix A) is tested for its robustness against data sampled from a known system of defects.
Markov processes are widely used to describe stochastic transitions between two or more abstract states across many fields of science (physics, chemistry, speech recognition, robotics, etc.) [110, 153, 154].
In the real world, statistical processes produce observable signals which can be measured by some kind of device. In the case of charge transfer reactions in MIS-HEMTs, the charge cannot be measured directly, but only indirectly through its electrostatic influence on the channel. This fact potentially introduces noise in the measurements, which depends on the device itself, the measurement equipment and other systematic errors like the mapping from the drain or gate current to the quantity of interest [89].
Throughout the next sections, the following notation is used:
T … the length of the observation sequence
N … the number of states in the model
M … the number of observable symbols
S = {s_1, …, s_N} … the set of possible states of the Markov model
X = (x_1, …, x_T) … the sequence of states from S
V = {v_1, …, v_M} … the set of possible observations
O = (o_1, …, o_T) … the sequence of observations from V
A … the state transition probability matrix
B … the observation probability matrix
π … the initial state probabilities
The working principle can be seen in Figure 6.1. The grey region denotes the inner state sequence X of the Markov model, which can be one of the states s_i for each item in the observed sequence O. Note that each inner state x_t can only be identified by its corresponding observation o_t.
As a subset of all stochastic processes, Markov processes can be described as a series of stochastic events, where each event x(t_n) from a discrete state space S occurs at a certain time t_n. In general, the set of events is described by

{x(t_1), x(t_2), …, x(t_T)}.

Each of the events is determined by its own CDF,

F(s, t_n) = P(x(t_n) ≤ s).

The CDF of the whole set can be found by writing down the joint CDF for all events:

F(s_1, …, s_T; t_1, …, t_T) = P(x(t_1) ≤ s_1, …, x(t_T) ≤ s_T).
To actually construct the CDF for a given series of events, conditional probabilities are used to express the probability of the next observation given a certain history of observations. In general, this can be a very complex task, since the conditional probability depends on all past observations. At this point, the so-called Markov property helps to simplify the problem. It states that for a Markov process, the conditional probability to enter the next state depends only on the current state [153, 154]. In other words, Markov processes have no memory, and thus the probability to reach a certain state at time t_{n+1} only depends on the current state:

P(x(t_{n+1}) = s | x(t_n), x(t_{n-1}), …, x(t_1)) = P(x(t_{n+1}) = s | x(t_n)).
In the context of defect capture and emission events we look at continuous-time discrete-space Markov processes, also called Markov chains, which will be used in the following section to calculate the PDF of a simple two-state defect.
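As a concrete illustration of such a continuous-time two-state Markov chain, the following Python sketch simulates an RTN-like event sequence with exponentially distributed dwell times. The function name and all numerical values (the capture and emission time constants tau_c and tau_e) are illustrative assumptions, not part of the extraction method discussed later.

```python
import random

def simulate_rtn(tau_c, tau_e, n_events, rng):
    """Simulate a two-state continuous-time Markov chain (RTN-like signal).

    The dwell time in each state is drawn from an exponential
    distribution whose mean is the capture time constant tau_c
    (state 0, defect empty) or the emission time constant tau_e
    (state 1, defect charged).
    """
    state, t = 0, 0.0
    events = []  # list of (transition time, state entered)
    for _ in range(n_events):
        tau = tau_c if state == 0 else tau_e
        t += rng.expovariate(1.0 / tau)  # exponential dwell time
        state = 1 - state                # two-state chain: toggle state
        events.append((t, state))
    return events

rng = random.Random(42)  # fixed seed for reproducibility
events = simulate_rtn(tau_c=1e-3, tau_e=5e-3, n_events=20000, rng=rng)

# The average dwell time per state, recovered from the event list,
# should approach the time constants used for the simulation.
dwell0 = [t2 - t1 for (t1, s1), (t2, _) in zip(events, events[1:]) if s1 == 0]
dwell1 = [t2 - t1 for (t1, s1), (t2, _) in zip(events, events[1:]) if s1 == 1]
```

Because of the memoryless (Markov) property, each dwell time depends only on the current state, which is why a single exponential distribution per state suffices for the simulation.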
The state transition probability matrix A is of size N × N. It contains the conditional probabilities a_ij = P(x(t_{n+1}) = s_j | x(t_n) = s_i) to go from state s_i to state s_j, and each of the rows sums to one because the probability of being in one of the states is one (i.e., A is row stochastic). Note that in this case the time arguments only mark the instants in time of the state sequence, as the transition probabilities of Markov chains are time independent.
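Given a recorded state sequence, a maximum-likelihood estimate of the entries a_ij is obtained by simply counting the observed transitions and normalizing each row. The helper function and the short toy sequence below are hypothetical and serve only to make this relation explicit:

```python
from collections import Counter

def estimate_transition_matrix(X, N):
    """Estimate the row-stochastic transition matrix A from a state
    sequence X by counting transitions (maximum-likelihood estimate)."""
    counts = Counter(zip(X, X[1:]))  # counts[(i, j)] = number of i -> j
    A = []
    for i in range(N):
        row_total = sum(counts[(i, j)] for j in range(N))
        # Fall back to a uniform row if state i was never left/visited.
        A.append([counts[(i, j)] / row_total if row_total else 1.0 / N
                  for j in range(N)])
    return A

X = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]  # toy two-state sequence
A_hat = estimate_transition_matrix(X, N=2)
# → [[0.4, 0.6], [0.5, 0.5]]
```

Each estimated row sums to one by construction, reflecting the row-stochastic property of A.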
The observation probability matrix B is also row stochastic and time independent. It holds the probabilities b_j(k) = P(o_t = v_k | x_t = s_j) to observe the symbol v_k given a certain state s_j. Its size is N × M, as the number of possible observations does not necessarily equal the number of inner states. One example would be thermal transitions of a defect without charge transfer, which change the state but not the observed signal.
The HMM is fully defined by A, B and π, and is denoted by λ = (A, B, π).
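The complete definition λ = (A, B, π) can be sketched as a small generative model: drawing an initial state from π, propagating the hidden state with A, and emitting a symbol via B at each step. The concrete matrix entries below are made up for illustration; only the observations O would be visible to a measurement, the states X remain hidden.

```python
import random

# Hypothetical two-state HMM lambda = (A, B, pi); all numbers are
# illustrative, not taken from a real defect.
A  = [[0.95, 0.05],   # row-stochastic state transition matrix (N x N)
      [0.10, 0.90]]
B  = [[0.90, 0.10],   # row-stochastic observation matrix (N x M)
      [0.20, 0.80]]
pi = [0.50, 0.50]     # initial state probabilities

def sample_categorical(p, rng):
    """Draw an index according to the probability vector p."""
    r, acc = rng.random(), 0.0
    for i, p_i in enumerate(p):
        acc += p_i
        if r < acc:
            return i
    return len(p) - 1

def sample_hmm(A, B, pi, T, rng):
    """Generate a hidden state sequence X and observations O of length T."""
    X, O = [], []
    x = sample_categorical(pi, rng)
    for _ in range(T):
        X.append(x)
        O.append(sample_categorical(B[x], rng))  # state seen only via O
        x = sample_categorical(A[x], rng)
    return X, O

rng = random.Random(0)
X, O = sample_hmm(A, B, pi, T=1000, rng=rng)
```

Sampling from a known λ in this way is exactly how synthetic test data for the robustness study mentioned at the beginning of the chapter can be produced.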