Although the detrimental phenomenon of hot-carrier degradation (HCD) has been known for more than four decades, it remains one of the most significant concerns in transistor reliability. Over this period several generations of MOSFETs have been in production, and the characteristic features of HCD, their understanding, and the corresponding modeling approaches reflect these technological trends. For instance, in the 1980s device dimensions were reduced rather quickly, accompanied by a slower scaling of the transistor power supply. This tendency led to high electric fields in the metal-oxide-semiconductor (MOS) transistor channel, which accelerated carriers to energies high enough to directly trigger Si-H bond breakage by a solitary carrier, which was then considered "hot" [2,4,5]. Such a situation required specific measures to suppress carrier heating. Among them was the demand that the supply voltage scale faster than the device dimensions [35,36,37,38], in addition to requirements on doping profiles and device geometry, which, for instance, resulted in lightly doped drain structures [39,30].
In particular, even though at the 0.25 um node hot-carrier degradation could be rather dramatic, its importance was expected to decrease drastically in subsequent nodes [2]. The physical reason behind this expectation was that the source-drain voltage Vds had already been scaled down to 1-1.5 V, while the threshold energy required to trigger the Si-H bond dissociation process is about 3.0-3.5 eV. Therefore, it was expected that carriers would not be heated to energies sufficient for Si-H bond breakage, resulting in a suppression of HCD. Overall, a complete absence of HCD was expected for extremely scaled devices [24,40,30,41].
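This expectation rests on a simple back-of-the-envelope energy comparison (a rough estimate, assuming a single carrier can gain at most the full drain-bias energy from the channel field and neglecting any additional energy-gain mechanisms):
\[
E_{\mathrm{max}} \approx q\,V_{ds} \approx 1\text{-}1.5\ \mathrm{eV} \ll E_{\mathrm{Si\text{-}H}} \approx 3.0\text{-}3.5\ \mathrm{eV},
\]
i.e., within the single-carrier picture the available kinetic energy falls well short of the bond-dissociation threshold.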