Fall 2018

ECE 110

Course Notes

Learn It!



An analog signal exists throughout a continuous interval of time and/or takes on a continuous range of values. A sinusoidal signal (also called a pure tone in acoustics) has both of these properties.

Fig. 1: Analog signal. This signal $v(t)=\cos(2\pi ft)$ could be a perfect analog recording of a pure tone of frequency $f$ Hz. If $f=440 \text{ Hz}$, this tone is the musical note $A$ above middle $C$, to which orchestras often tune their instruments. The period $T=1/f$ is the duration of one full oscillation.

In reality, electrical recordings suffer from noise that unavoidably degrades the signal. The more a recording is transferred from one analog format to another, the more it loses fidelity to the original.

Fig. 2: Noisy analog signal. Noise degrades the sinusoidal signal in Fig. 1. It is often impossible to recover the original signal exactly from the noisy version.

A digital signal is a sequence of discrete symbols. If these symbols are zeros and ones, we call them bits. As such, a digital signal is neither continuous in time nor continuous in its range of values and, therefore, cannot perfectly represent arbitrary analog signals. On the other hand, digital signals are resilient against noise.

Fig. 3: Analog transmission of a digital signal. Consider a digital signal $100110$ converted to an analog signal for radio transmission. The received signal suffers from noise, but given sufficient bit duration $T_b$, it is still easy to read off the original sequence $100110$ perfectly.
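
This noise resilience is easy to check numerically. Below is a minimal sketch, assuming for illustration that a 1 is transmitted as the level $+1$ and a 0 as $-1$, that each bit lasts 100 samples, and that the channel adds Gaussian noise; the receiver averages over each bit duration and thresholds at zero.

```python
import numpy as np

rng = np.random.default_rng(0)

bits = [1, 0, 0, 1, 1, 0]        # the digital signal 100110 from Fig. 3
samples_per_bit = 100            # assumed number of samples per bit duration T_b

# Transmit each bit as an analog level: +1 for a 1, -1 for a 0.
tx = np.repeat([1.0 if b else -1.0 for b in bits], samples_per_bit)

# The channel adds noise to the analog waveform.
rx = tx + 0.5 * rng.standard_normal(tx.size)

# Receiver: average over each bit duration and threshold at zero.
rx_bits = [int(chunk.mean() > 0) for chunk in np.split(rx, len(bits))]
print(rx_bits)   # [1, 0, 0, 1, 1, 0] -- the original sequence is read off correctly
```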

Digital signals can be stored on digital media (like a compact disc) and manipulated on digital systems (like the integrated circuit in a CD player). This digital technology enables a variety of digital processing unavailable to analog systems. For example, the music signal encoded on a CD includes additional data used for digital error correction. In case the CD is scratched and some of the digital signal becomes corrupted, the CD player may still be able to reconstruct the missing bits exactly from the error correction data. To take advantage of such digital storage and processing, analog signals are converted to digital signals using two steps called sampling and quantization.
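
CDs actually use a cross-interleaved Reed-Solomon code for error correction. As a much simpler toy illustration of the same idea (not the CD's actual code), the sketch below protects each bit with a 3x repetition code, so any single corrupted copy of a bit can still be corrected by majority vote.

```python
# Toy error correction: a 3x repetition code (not the CD's Reed-Solomon code).
def encode(bits):
    return [b for b in bits for _ in range(3)]      # repeat every bit three times

def decode(coded):
    # Majority vote over each group of three copies.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

data = [1, 0, 0, 1, 1, 0]
coded = encode(data)
coded[4] ^= 1                 # a "scratch" flips one stored bit
print(decode(coded) == data)  # True: the original bits are reconstructed exactly
```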


Sampling is the process of recording an analog signal at regular discrete moments of time. The sampling rate $f_s$ is the number of samples per second. The time interval between samples is called the sampling interval $T_s=1/f_s$.

Fig. 4: Sampling. The signal $v(t)=\cos(2\pi ft)$ in Fig. 1 is sampled uniformly with 3 sampling intervals within each signal period $T$. Therefore, the sampling interval $T_s=T/3$ and the sampling rate $f_s=3f$. Another way to see that $f_s=3f$ is to notice that there are three samples in every signal period $T$.

To express the samples of the analog signal $v(t)$, we use the notation $v[n]$ (with square brackets), where integer values of $n$ index the samples. Typically, the $n=0$ sample is taken from the $t=0$ time point of the analog signal. Consequently, the $n=1$ sample must come from the $t=T_s$ time point, exactly one sampling interval later; and so on. Therefore, the sequence of samples can be written as $v[0] = v(0),$ $v[1] = v(T_s),$ $v[2] = v(2T_s),\ldots$
\begin{align}
v[n] &= v(nT_s) & &\text{for integer }n
\end{align}
In the example of Fig. 4, $v(t)=\cos(2\pi ft)$ is sampled with sampling interval $T_s=T/3$ to produce the following $v[n]$.
\begin{align}
v[n] &= \cos(2\pi fnT_s) & & \text{by substituting }t=nT_s\\
&= \cos\left(2\pi f n \frac{T}{3}\right) & & \text{since }T_s=\frac{T}{3}\\
&= \cos\left(\frac{2\pi n}{3}\right) & & \text{since }T=\frac{1}{f}
\end{align}
This expression for $v[n]$ evaluates to the sample values depicted in Fig. 4 as shown below.
\begin{align}
v[0] &=\cos\left(0\right)= 1\\
v[1] &=\cos\left(\frac{2\pi}{3}\right)= -0.5\\
v[2] &=\cos\left(\frac{4\pi}{3}\right)= -0.5\\
v[3] &=\cos\left(2\pi\right)= 1\\
&\vdots
\end{align}
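
These values can be checked numerically; the short sketch below evaluates $v[n]=\cos(2\pi n/3)$ for the first few $n$.

```python
import numpy as np

n = np.arange(4)
v = np.cos(2 * np.pi * n / 3)    # v[n] = cos(2*pi*n/3) from the derivation above
print(np.round(v, 3))            # [ 1.  -0.5 -0.5  1. ], as in Fig. 5
```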

Fig. 5: Samples. The samples from Fig. 4 are shown as the sequence $v[n]$ indexed by integer values of $n$.



If a sinusoidal signal is sampled at a sufficiently high sampling rate, the original signal can be recovered exactly by connecting the samples together in a smooth way (called ideal low pass filtering).

Fig. 6: Sampling at a high rate. The signal $v(t)=\cos(2\pi ft)$ in Fig. 1 is sampled uniformly with 12 sampling intervals within each signal period $T$. Therefore, the sampling interval $T_s=T/12$ and the sampling rate $f_s=12f$. The original signal $v(t)$ can be recovered from the samples by connecting them together smoothly.
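
As a rough numerical check of this recovery, the sketch below approximates ideal low pass filtering by sinc interpolation of samples taken at $f_s=12f$. The choices $f=1$ Hz and the window of samples are arbitrary for illustration, and the finite sum is only approximate near the edges of the sampled window.

```python
import numpy as np

f = 1.0                      # signal frequency in Hz (arbitrary for illustration)
fs = 12 * f                  # sampling rate, 12 samples per period as in Fig. 6
Ts = 1 / fs

n = np.arange(-60, 61)                    # sample indices over several periods
v_n = np.cos(2 * np.pi * f * n * Ts)      # v[n] = v(n*Ts)

# Sinc interpolation approximates ideal low pass filtering:
# v_r(t) = sum_n v[n] * sinc((t - n*Ts) / Ts)
t = np.linspace(-1, 1, 201)
v_r = np.array([np.sum(v_n * np.sinc((ti - n * Ts) / Ts)) for ti in t])

# Maximum reconstruction error over t: small compared to the amplitude 1
# (only truncation error remains, since t lies well inside the sampled window).
print(np.max(np.abs(v_r - np.cos(2 * np.pi * f * t))))
```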

In contrast, if a sinusoidal signal is sampled with a low sampling rate, the samples may be too infrequent to recover the original signal.

Fig. 7: Sampling at a low rate. The signal $v(t)=\cos(2\pi ft)$ in Fig. 1 is sampled uniformly with 4 sampling intervals within every 3 signal periods. Therefore, $4T_s=3T$ and the sampling rate $f_s=(4/3)f$. Notice that a different sinusoid $\cos(2\pi ft/3)$ with lower frequency $f/3$ also fits these samples. Attempting to recover $v(t)=\cos(2\pi ft)$ by ideal low pass filtering instead produces $\cos(2\pi ft/3)$ since the latter has a lower frequency. So, the sampling rate $f_s=(4/3)f$ is insufficient to recover $v(t)$ from the samples.
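
The aliasing in Fig. 7 is easy to reproduce: sampling $\cos(2\pi ft)$ at $f_s=(4/3)f$ produces exactly the same samples as the lower-frequency sinusoid $\cos(2\pi ft/3)$. The sketch below checks this, with $f=1$ Hz chosen arbitrarily for illustration.

```python
import numpy as np

f = 1.0                  # signal frequency in Hz (arbitrary for illustration)
fs = 4 * f / 3           # sampling rate from Fig. 7
Ts = 1 / fs

n = np.arange(12)
t = n * Ts
high = np.cos(2 * np.pi * f * t)         # samples of cos(2*pi*f*t)
low = np.cos(2 * np.pi * (f / 3) * t)    # samples of cos(2*pi*f*t/3)

# True: the two sinusoids are indistinguishable from these samples (aliasing).
print(np.allclose(high, low))
```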

The question that arises is: for which values of sampling rate $f_s$ can we sample and then perfectly recover a sinusoidal signal $v(t)=\cos(2\pi ft)$? It turns out that we should sample at $f_s>2f$, more than twice the frequency of $v(t)$. Conversely, sampling at $f_s < 2f$ is insufficient to distinguish $v(t)$ from a lower-frequency sinusoid. The borderline sampling rate $f_s = 2f$ may or may not be enough to recover a sinusoidal signal, as the next two figures show.

Fig. 8: Sampling a cosine at $f_s = 2f$. The signal $v(t)=\cos(2\pi ft)$ in Fig. 1 is sampled uniformly with 2 sampling intervals within each signal period $T$. Therefore, the sampling interval $T_s=T/2$ and the sampling rate $f_s=2f$. Since there is a sample at every peak and trough of the sinusoid, there is no lower frequency sinusoid that fits these samples. Therefore, $v(t)$ can be recovered exactly from the samples by ideal low pass filtering.


Fig. 9: Sampling a sine at $f_s = 2f$. The signal $\sin(2\pi ft)$ is sampled uniformly with 2 sampling intervals within each signal period $T$. Therefore, the sampling interval $T_s=T/2$ and the sampling rate $f_s=2f$. Since all the samples are at the zero crossings, ideal low pass filtering produces a zero signal instead of recovering the sinusoid.

The Nyquist-Shannon sampling theorem states that, for exact recovery of a signal composed of a sum of sinusoids, the sampling rate must be larger than twice the maximum frequency present in the signal. This threshold, twice the maximum frequency, is called the Nyquist sampling rate $f_{\text{Nyquist}}$.
\begin{align}
f_s &> f_{\text{Nyquist}} = 2f_{\text{max}}
\end{align}
For example, if the signal is $7+5\cos(2\pi 440t)+3\sin(2\pi 880t)$, then the sampling rate $f_s$ should be chosen to be larger than $f_{\text{Nyquist}}=2(880)=1760 \text{ Hz}$.
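
A quick way to apply the theorem in code is to take the maximum frequency among the components and double it; the sketch below does this for the example signal above.

```python
# Component frequencies (in Hz) of 7 + 5*cos(2*pi*440*t) + 3*sin(2*pi*880*t);
# the constant 7 is a 0 Hz (DC) component.
component_freqs = [0, 440, 880]

f_nyquist = 2 * max(component_freqs)
print(f_nyquist)   # 1760 -- choose a sampling rate fs strictly greater than this
```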

To learn more about sampling and the Nyquist-Shannon theorem, read "Sampling: What Nyquist Didn't Say, and What to Do About It" by Tim Wescott.


A sequence of samples like $v[n]$ in Fig. 5 is not a digital signal because the sample values can potentially take on a continuous range of values. To complete the analog-to-digital conversion, each sample value is mapped to a discrete level (represented by a sequence of bits) in a process called quantization. In a $B$-bit quantizer, each quantization level is represented with $B$ bits, so the number of levels equals $2^B$.

Fig. 10: 3-bit quantization. Overlaid on the samples $v[n]$ from Fig. 5 is a 3-bit quantizer with 8 uniformly spaced quantization levels. The quantizer approximates each sample value in $v[n]$ to its nearest level value (shown on the left), producing the quantized sequence $v_Q[n]$. Ultimately the sequence $v_Q[n]$ can be written as a sequence of bits using the 3-bit representations shown on the right.

Observe that quantization introduces a quantization error between each sample and its quantized version, given by $e[n]=v[n]-v_Q[n]$. If a sample lies between quantization levels, the maximum absolute quantization error $|e[n]|$ is half of the spacing between those levels. For the quantizer in Fig. 10, the spacing is uniformly 0.3, so the maximum error between levels is 0.15. Note, however, that if a sample overshoots the highest level or undershoots the lowest level by more than 0.15, the absolute quantization error is that larger difference.

The table below completes the quantization example in Fig. 10 for $n=0, 1, 2, 3$. The 3-bit representations in the final row can then be concatenated into the digital signal $110001001110$.

| Sequence | $n=0$ | $n=1$ | $n=2$ | $n=3$ |
| --- | --- | --- | --- | --- |
| Samples $v[n]$ | $1$ | $-0.5$ | $-0.5$ | $1$ |
| Quantized samples $v_Q[n]$ | $0.9$ | $-0.6$ | $-0.6$ | $0.9$ |
| Quantization error $e[n]=v[n]-v_Q[n]$ | $0.1$ | $0.1$ | $0.1$ | $0.1$ |
| 3-bit representations | $110$ | $001$ | $001$ | $110$ |

Table 1: Quantization example.
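
The table can be reproduced with a small quantizer. The sketch below assumes, consistent with Fig. 10 and Table 1, eight uniformly spaced levels $-0.9, -0.6, \ldots, 1.2$, with the $k$-th level encoded as the 3-bit binary representation of $k$; each sample is mapped to its nearest level.

```python
import numpy as np

# Assumed quantizer levels inferred from Fig. 10 / Table 1:
# eight uniformly spaced levels -0.9, -0.6, ..., 1.2, encoded by their index.
levels = -0.9 + 0.3 * np.arange(8)

v = np.array([1, -0.5, -0.5, 1])                    # samples v[n] from Fig. 5
idx = np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)  # nearest level
v_q = levels[idx]                                   # quantized samples v_Q[n]
e = v - v_q                                         # quantization error e[n]
bits = ''.join(format(int(k), '03b') for k in idx)  # 3-bit codes, concatenated

print(v_q)    # [ 0.9 -0.6 -0.6  0.9]
print(e)      # [ 0.1  0.1  0.1  0.1]
print(bits)   # 110001001110
```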


From an article titled Shannon, Beethoven, and the Compact Disc by Kees A. Schouhamer Immink:


An audio compact disc (CD) holds up to 74 minutes, 33 seconds of sound, just enough for a complete mono recording of Ludwig van Beethoven's Ninth Symphony ("Alle Menschen werden Brüder") at probably the slowest pace it has ever been played, during the Bayreuther Festspiele in 1951 and conducted by Wilhelm Furtwängler.


CDs use a sampling rate of 44.1 kHz with 16-bit quantization for each sample. When the CD was first introduced in 1983, every 8 bits of digital signal data were encoded as 17 bits of signal and error correction data together. Given that 8 bits are 1 byte and that $2^{20}$ bytes are 1 megabyte (MB), we calculate below that the capacity of a compact disc is about 800 MB.
\begin{align}
\text{Duration of the analog signal} &= (74\text{ min}) \left( 60\frac{\text{s}}{\text{min}} \right) + 33\text{ s}\\
&= 4473 \text{ s}\\
\text{Samples in signal data} &= (4473 \text{ s})\left( 44100\frac{ \text{samples}}{\text{s}} \right)\\
& \approx 197300000 \text{ samples}\\
\text{Bits of digital signal data} &= (197300000 \text{ samples})\left( 16\frac{ \text{bits}}{\text{sample}} \right)\\
& \approx 3156000000 \text{ bits}\\
\text{Bytes of digital signal data} &= (3156000000 \text{ bits})\left( \frac{1}{8}\frac{ \text{byte}}{\text{bits}} \right)\\
& = 394500000 \text{ bytes}\\
\text{MB of digital signal data} &= (394500000 \text{ bytes})\left( \frac{1}{2^{20}}\frac{ \text{MB}}{\text{bytes}} \right)\\
& \approx 376.2 \text{ MB}\\
\text{MB of signal and error correction data} &= (376.2 \text{ MB})\left( \frac{17 \text{ stored bits}}{8 \text{ signal bits}} \right)\\
& \approx 799.5 \text{ MB}
\end{align}
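
The same arithmetic can be reproduced in a few lines; the sketch below simply follows the steps above.

```python
duration_s = 74 * 60 + 33                # 74 min 33 s of audio
samples = duration_s * 44_100            # 44.1 kHz sampling rate
signal_bits = samples * 16               # 16-bit quantization
signal_mb = signal_bits / 8 / 2**20      # bytes, then MB (2**20 bytes per MB)
total_mb = signal_mb * 17 / 8            # 17 stored bits per 8 bits of signal data

print(round(signal_mb, 1), round(total_mb, 1))   # approximately 376.2 and 799.5 MB
```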

Explore More!