The usual definition of the delay time of a stage in a digital circuit is the time difference between the 50% points of the input and output waveforms [5,87]. This definition should be used with care because the result depends on the input waveform and on the output load, and may in some cases even be negative. The input waveforms should therefore be chosen similar to those expected in actual circuit operation.
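As a hedged illustration of this definition, the following sketch extracts the 50% crossing times of two sampled waveforms by linear interpolation and takes their difference. The waveform data, supply voltage, and function names are illustrative assumptions, not from the text.

```python
# Sketch: delay as the time difference between the 50% points of the
# input and output waveforms (toy data, hypothetical helper names).

def crossing_time(t, v, level):
    """Linearly interpolated time at which v first crosses `level`."""
    for i in range(1, len(t)):
        if (v[i - 1] - level) * (v[i] - level) <= 0 and v[i] != v[i - 1]:
            frac = (level - v[i - 1]) / (v[i] - v[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    raise ValueError("waveform never crosses the given level")

def stage_delay(t, v_in, v_out, vdd=1.0):
    """Delay between the 50% points of input and output waveforms."""
    return crossing_time(t, v_out, 0.5 * vdd) - crossing_time(t, v_in, 0.5 * vdd)

# Toy example: a 0-to-1 V ramp and the same ramp shifted by 0.3 ns.
t = [i * 0.1 for i in range(11)]                          # time in ns
v_in = [min(1.0, 0.2 * i) for i in range(11)]             # rises over 0.5 ns
v_out = [max(0.0, min(1.0, (ti - 0.3) * 2)) for ti in t]  # same slope, later
print(stage_delay(t, v_in, v_out))   # ≈ 0.3 ns
```

Note that with a different output load or input slope the two crossing times shift independently, which is exactly why the text cautions that the result can even come out negative.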
This is done with a so-called ring oscillator, i.e., a series connection of an odd number n of stages with the output of the last stage fed back to the input of the first stage, as shown in Fig. 3.1. This circuit will oscillate at a frequency f = 1/(2 n t_d), so that the stage delay can be determined as t_d = 1/(2 n f).
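As a quick numeric illustration of the relation t_d = 1/(2 n f), the stage count and measured frequency below are example values, not taken from the text:

```python
# Stage delay from a measured ring-oscillator frequency (example numbers).
n = 11          # odd number of stages (illustrative)
f = 1.0e9       # measured oscillation frequency in Hz (illustrative)
t_d = 1.0 / (2 * n * f)
print(t_d)      # ≈ 4.55e-11 s, i.e. about 45.5 ps per stage
```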
A simple way to obtain the delay time by simulation is to model the active devices as controlled current sources with one lumped load capacitance at the output. In this case a delay element can be used to complete a ring oscillator, as shown in Fig. 3.2, which can then be analyzed by a simple transient simulation (an implementation of this method used a precursor of the model developed in Section 4.3 [62]).
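The controlled-current-source model above can be sketched as a minimal transient simulation: each stage is a comparator-controlled current source charging a lumped capacitance, integrated with forward Euler, and the stage delay is recovered from the simulated period as t_d = T/(2n). The normalized element values, the switching threshold, and the integrator choice are all illustrative assumptions, not the implementation of [62].

```python
# Minimal sketch of a transient ring-oscillator simulation in which each
# inverting stage is a controlled current source driving one lumped load
# capacitance. Normalized units (vdd = i_drive = c_load = 1) are assumed.

def simulate_ring(n=5, vdd=1.0, i_drive=1.0, c_load=1.0, dt=1e-3, t_end=30.0):
    """Forward-Euler transient simulation; returns the extracted stage delay."""
    vth = 0.5 * vdd
    # Alternating initial state so the ring starts switching immediately.
    v = [vdd if k % 2 else 0.0 for k in range(n)]
    rising = []                    # times at which stage 0 crosses vth upward
    t = 0.0
    while t < t_end:
        prev0 = v[0]
        new = []
        for k in range(n):
            # Inverting stage: source or sink current depending on its input
            # (v[k-1]; for k = 0 this closes the ring via the last stage).
            drive = i_drive if v[k - 1] < vth else -i_drive
            vk = v[k] + drive / c_load * dt
            new.append(min(vdd, max(0.0, vk)))   # clamp to the supply rails
        v = new
        t += dt
        if prev0 < vth <= v[0]:
            rising.append(t)
    period = rising[-1] - rising[-2]             # steady-state period T
    return period / (2 * n)                      # stage delay t_d = T/(2n)

print(simulate_ring())   # ≈ 0.5 in these normalized units
```

In this slew-limited model each output ramps at i_drive/c_load, so the expected stage delay is c_load·vdd/(2·i_drive) = 0.5 in normalized units, which the extracted T/(2n) reproduces.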
The delay time is then