Last updated: November 07, 2024 by John Gentile



Overview

Digital Signal Processing (DSP) provides a means for analyzing & constructing signals in the digital domain, whether in digital hardware or in software. In contrast to the analog domain, the continual progression of smaller and faster digital hardware allows for signal processing systems that are often cheaper, more reliable and more adaptable than their hard-wired analog counterparts. These benefits can come at the cost of precision due to the nature of converting between the analog and digital domains. In practice though, intelligent application of DSP concepts can reduce these distortion effects and, in some cases, actually improve & condition the signal for an intended purpose.

Signals

A signal is a pretty general term; it can describe a set of data pertaining to almost any application or industry, from the electrical signals passing internet bits around to the stock ticker prices driving world markets. Systems process signals by modifying, passing and/or extracting information from them.

For example, the total energy of a continuous-time signal $x(t)$ is defined as:

$$E_{x} = \int_{-\infty}^{\infty} \left| x(t) \right|^{2}\, dt$$
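As a quick numerical check, here is a minimal sketch (using NumPy, with a hypothetical decaying pulse) that approximates the energy integral with a Riemann sum:

```python
import numpy as np

# Approximate the energy of the pulse x(t) = e^{-|t|}, whose analytic
# energy is the integral of e^{-2|t|} dt = 1.0
t = np.linspace(-10.0, 10.0, 100001)
dt = t[1] - t[0]
x = np.exp(-np.abs(t))

energy = np.sum(np.abs(x) ** 2) * dt   # Riemann-sum approximation of E_x
print(energy)                          # ~1.0
```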

Fundamentally, a signal is any quantity that varies with one or more independent variables, like voltage changing over time. A signal may or may not be mathematically or functionally definable; for example, the natural waveform from recorded music. In the rest of this discussion, time is used as the independent variable for common signals, but it can be replaced with any other independent variable. Frequency is inversely related to the period of a signal ($f = 1/T$). Common periodic signals that vary with time ($t$) can be described with three time-varying attributes: amplitude ($A$), frequency ($f$) and phase ($\theta$). For example, a basic sinusoid:

$$x(t) = A\cos(\Omega t + \theta), \quad -\infty < t < \infty$$

Where $\Omega$ is the frequency in radians/sec, related to the frequency $f$ in Hz (cycles/sec) by the simple conversion $\Omega = 2\pi f$. Note a periodic signal satisfies the criteria $x(t) = x(t \pm nT)$ where $n$ is any integer. It's also important to describe sinusoids in complex exponential form, also known as a phasor:

$$x(t) = Ae^{j(\Omega t + \theta)}$$

Which, given the Euler identity $e^{\pm j\phi} = \cos\phi \pm j\sin\phi$, gives:

$$x(t) = A\cos(\Omega t + \theta) = \frac{A}{2}e^{j(\Omega t + \theta)} + \frac{A}{2}e^{-j(\Omega t + \theta)}$$
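This identity is easy to verify numerically; below is a minimal sketch (using NumPy, with hypothetical values for $A$, $f$ and $\theta$) showing the two conjugate phasors summing to a purely real cosine:

```python
import numpy as np

A, f, theta = 2.0, 5.0, np.pi / 4          # hypothetical amplitude, Hz, phase
t = np.linspace(0.0, 1.0, 1000)
omega = 2 * np.pi * f                      # convert Hz to radians/sec

cos_form = A * np.cos(omega * t + theta)
phasor_form = (A / 2) * np.exp(1j * (omega * t + theta)) \
            + (A / 2) * np.exp(-1j * (omega * t + theta))

# The imaginary parts of the conjugate phasors cancel, leaving the real cosine
assert np.allclose(phasor_form.imag, 0.0)
assert np.allclose(cos_form, phasor_form.real)
```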

Since the goal of DSP is to operate on signals within the digital domain, real-world analog signals must first be digitized; the common component that accomplishes this is, unsurprisingly, called an Analog-to-Digital Converter (ADC or A/D Converter). Most ADCs operate on electrical signals which come from a transducer or other electrical source. The inverse of an ADC is a Digital-to-Analog Converter (DAC or D/A Converter) which takes digital signals and produces an analog signal as an output. The conversion chain from analog to digital also provides an opportunity to explain further signal classifications, starting with continuous time vs discrete time signals.

Continuous Time vs Discrete Time

Continuous Time signals are synonymous with real analog signals. They take on defined values over the continuous interval $-\infty < t < \infty$, such as the signal $x(t) = \cos(\pi t)$. Discrete Time signals have value only at certain instants of time, usually as a finite set of equidistant time intervals for easier calculations. For example, the previous continuous time signal can be represented as the discrete time signal $x(n) = \cos(\pi n)$ where $n = 0, \pm 1, \pm 2, \ldots$

Similarly to the periodic definition of continuous time signals, a discrete time signal is periodic when $x(n) = x(n + N)$, where the smallest value of $N$ for which this holds is the fundamental period. A sinusoidal signal with frequency $f_{0}$ is periodic when:

$$\cos\left[2\pi f_{0}(N + n) + \theta\right] = \cos(2\pi f_{0} n + \theta)$$

Since there's the trigonometric identity that any sinusoid shifted by an integer multiple of $2\pi$ is equal to itself (e.g. $\cos(x) = \cos(x \pm 2\pi N)$), we can reduce the discrete time sinusoid relation to having some integer $k$ such that:

$$2\pi f_{0} N = 2\pi k \; \therefore \; f_{0} = \frac{k}{N}$$
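In other words, a discrete time sinusoid is periodic only if its frequency $f_{0}$ is a rational number. A minimal sketch of this (using NumPy, with hypothetical frequencies):

```python
import numpy as np

n = np.arange(24)

# f0 = 1/8 is rational (k = 1, N = 8), so the sequence repeats every 8 samples
x = np.cos(2 * np.pi * (1 / 8) * n)
assert np.allclose(x[:8], x[8:16])

# x(n) = cos(n) has f0 = 1/(2*pi), which is irrational: never exactly periodic
y = np.cos(n)
print(np.allclose(y[:8], y[8:16]))   # -> False
```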

The same identity (sinusoids separated by an integer multiple of $2\pi$ are identical), expressed as $\cos(\omega_{0} n + \theta) = \cos(\omega_{0} n + 2\pi n + \theta)$, means that when the frequency $\left| \omega \right| > \pi$ (or $\left| f \right| > \frac{1}{2}$) the sequence is indistinguishable (called an alias) from a sinusoid with frequency $\left| \omega \right| < \pi$. Thus discrete time sinusoids have the finite fundamental range of a single cycle ($1$ in normalized frequency $f$, or $2\pi$ radians); usually this is given as the range $-\pi \leq \omega \leq \pi$, $0 \leq \omega \leq 2\pi$, $-\frac{1}{2} \leq f \leq \frac{1}{2}$, or $0 \leq f \leq 1$.

In an ADC, the process of converting a signal from continuous time to discrete time is sampling, commonly done at some regular sampling interval $T$, which is the inverse of the sampling frequency $f_{s} = 1/T$. Continuous and discrete time are related in ideal, uniform sampling via $t = nT = \frac{n}{f_{s}}$. Thus, sampling a sinusoidal signal gives:

$$x(t) = A\cos(2\pi f_{0} t + \theta) \overset{\text{sampled}}{\rightarrow} x(nT) \equiv x(n) = A\cos\left(\frac{2\pi f_{0} n}{f_{s}} + \theta\right)$$

Given the expression for this discrete time sampled sinusoid, the frequency $f = \frac{f_{0}}{f_{s}}$; substituting this frequency into the aforementioned fundamental range $-\frac{1}{2} \leq f \leq \frac{1}{2}$ yields:

$$-\frac{f_{s}}{2} \leq f_{0} \leq \frac{f_{s}}{2}$$

This is an important assertion: the frequency of a continuous time analog signal must be $\leq$ half the sampling frequency to be uniquely distinguished (and not aliased). Furthermore, the highest discernible frequency is $f_{max} = \frac{f_{s}}{2}$.

The problem of aliased signals can be demonstrated with the sinusoids:

$$x_{1}(t) = \cos(10\pi t), \qquad x_{2}(t) = \cos(50\pi t)$$

Sampled at $f_{s} = 20$ Hz, the discrete time output signals are:

$$x_{1}(n) = \cos\left(\frac{10}{20}\pi n\right) = \cos\left(\frac{\pi}{2} n\right), \qquad x_{2}(n) = \cos\left(\frac{50}{20}\pi n\right) = \cos\left(\frac{5\pi}{2} n\right)$$

Given $\cos(5\pi n / 2) = \cos(2\pi n + \pi n / 2)$, and again, since integer multiples of $2\pi n$ can be reduced out, $x_{1}(n) \equiv x_{2}(n)$ and thus $x_{2}(n)$ is aliased as an indistinguishable signal from $x_{1}(n)$. Therefore, to prevent the problem of aliasing, a proper sampling frequency must be chosen to satisfy the highest frequency content found in a signal:

$$f_{s} > 2 f_{max}$$

This is known as the Nyquist Sampling Criterion.
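This aliasing can be reproduced numerically; the following is a minimal sketch (using NumPy, with an arbitrary number of samples) that samples both sinusoids above at $f_{s} = 20$ Hz and confirms the sequences are indistinguishable:

```python
import numpy as np

fs = 20.0             # sampling frequency (Hz)
n = np.arange(16)     # discrete sample indices
t = n / fs            # uniform sample times, t = n / fs

x1 = np.cos(10 * np.pi * t)   # f0 = 5 Hz, below fs/2
x2 = np.cos(50 * np.pi * t)   # f0 = 25 Hz, above fs/2: violates Nyquist

# x2 aliases onto x1: the sampled sequences are numerically identical
assert np.allclose(x1, x2)
print(np.round(x1, 3))
```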

Continuous Value vs Discrete Value

Similarly to continuous time, a signal is of Continuous Value if it can take any value, whether in a finite or infinite range; for instance, a certain voltage may realistically fall within the finite range of 0 V to 5 V but would be a continuous value signal if it can take the value of any real voltage within that range (e.g. 2.31256... V). Conversely, a Discrete Value signal can only take on a finite set of values within a finite range (e.g. 1 V integer levels within a 10 V range). The process in an ADC of converting a continuous value, discrete time signal (the output from the sampler) into a discrete value, discrete time signal is called quantization. A signal that is discrete in both time and value is considered a digital signal.
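As a minimal sketch of quantization (assuming a hypothetical uniform, mid-tread, 3-bit scheme; real ADC architectures vary), each continuous value sample is snapped to the nearest of $2^{3} = 8$ discrete levels:

```python
import numpy as np

def quantize(x, full_scale=1.0, n_bits=3):
    """Uniform mid-tread quantizer: snap each sample to one of 2**n_bits levels."""
    step = 2 * full_scale / (2 ** n_bits)   # quantization step size
    q = np.round(x / step) * step           # nearest discrete level
    return np.clip(q, -full_scale, full_scale - step)

# Continuous-value (floating point) samples of a sinusoid
x = np.cos(2 * np.pi * 0.05 * np.arange(10))
print(quantize(x))   # discrete-value, discrete-time: a digital signal
```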

Linear Convolution

Figure: Visual comparison of convolution, cross-correlation and autocorrelation (from Wikipedia).

Continuous-Time Convolution

A fundamental operation in signal processing is convolution. When a function $f$ is convolved with another function $g$, a third output function $f * g$ expresses how the shape of one is modified by the other. Convolution is defined as the integral of the product of both functions, after one function is reversed and shifted; for example, in continuous time this is written as:

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$$

Convolution is also commutative; thus the above equation can also be expressed as:

$$(f * g)(t) = \int_{-\infty}^{\infty} f(t - \tau)\, g(\tau)\, d\tau$$

A common convolution is that of a signal, $f(t)$, with a filter $g(t)$, to form a filtered output signal $y(t)$:

$$y(t) = f(t) * g(t)$$

Geometrically, the convolution process can be seen as:

  1. Forming a time-reversed image $g(-\tau)$ of the filter function $g(\tau)$, where $\tau$ is a dummy variable.
  2. Sliding $g(-\tau)$ across the signal function $f(\tau)$ with a specific delay $t$.
  3. At each delay position $t$, taking the integral of the product of $g(t - \tau)$ and the part of the signal $f(\tau)$ covered by the filter. This integral is also called the inner product (or dot product); a discrete time sketch of this procedure is shown below.
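The following is a minimal sketch of that flip-and-slide procedure in discrete time (a direct implementation with hypothetical input sequences; in practice a library routine like `np.convolve` would be used):

```python
import numpy as np

def linear_convolve(f, g):
    """Direct flip-and-slide linear convolution of two 1-D sequences."""
    m = len(g)
    y = np.zeros(len(f) + m - 1)
    g_rev = g[::-1]                       # 1. time-reverse the filter
    for t in range(len(y)):               # 2. slide across each delay t
        for k in range(m):                # 3. inner product at this delay
            i = t - (m - 1) + k
            if 0 <= i < len(f):
                y[t] += f[i] * g_rev[k]
    return y

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])
print(linear_convolve(f, g))   # [0.5 1.5 2.5 1.5]
print(np.convolve(f, g))       # matches the library implementation
```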

Another property of convolution is that it is a linear operation; the superposition principle means that the convolution of the sum of input signals is equal to the sum of the individual convolutions. For instance, the equivalence can be shown with two input signals $s_{1}(t)$ and $s_{2}(t)$ to be convolved with a filter $h(t)$, where $\alpha$ and $\beta$ are arbitrary constants:

$$\left[ \alpha s_{1}(t) + \beta s_{2}(t) \right] * h(t) = \alpha s_{1}(t) * h(t) + \beta s_{2}(t) * h(t)$$
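Superposition is also easy to confirm numerically; a minimal sketch (using NumPy with random signals, a hypothetical small FIR filter and arbitrary constants):

```python
import numpy as np

rng = np.random.default_rng(0)
s1 = rng.standard_normal(32)
s2 = rng.standard_normal(32)
h = np.array([0.25, 0.5, 0.25])   # hypothetical small FIR filter
a, b = 2.0, -3.0                  # arbitrary constants

lhs = np.convolve(a * s1 + b * s2, h)
rhs = a * np.convolve(s1, h) + b * np.convolve(s2, h)
assert np.allclose(lhs, rhs)      # superposition holds
```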

Continuous-Time Correlation

Correlation (sometimes called cross-correlation) is similar to convolution; it has the same geometrical property of a sliding inner product, however the sliding function is not time reversed but instead complex conjugated (denoted $^{*}$).

$$(f \star g)(\tau) = \int_{-\infty}^{\infty} f^{*}(t)\, g(t + \tau)\, dt$$

Correlation is not commutative; however, an equivalent form of the above is:

$$(f \star g)(\tau) = \int_{-\infty}^{\infty} \overline{f(t - \tau)}\, g(t)\, dt$$

NOTE: the second equation shows another way to denote the complex conjugate operation; thus $f^{*}(t)$ and $\overline{f(t)}$ are equivalent.

Cross-correlation is commonly used for searching a long signal for a shorter, known feature.
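For example, a minimal sketch of this feature search (using NumPy, with a hypothetical feature buried in noise at an arbitrary offset):

```python
import numpy as np

rng = np.random.default_rng(1)
feature = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # known short feature
signal = 0.1 * rng.standard_normal(200)           # long noisy signal
signal[80:85] += feature                          # bury the feature at offset 80

# Slide the (conjugated) feature across the signal; the correlation
# peaks where the signal best matches the feature
xcorr = np.correlate(signal, feature, mode="valid")
print(int(np.argmax(xcorr)))                      # -> 80
```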

Transforms

Fourier Series

The time domain function $s(x)$ is shown here as a sum of six sinusoids at different amplitudes but odd harmonic frequencies (approximating a square wave). The resulting Fourier transform $S(f)$ shows where the component sinusoids lie in the frequency domain and their relative amplitudes:

Source: Trigonometry - Wikipedia
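A minimal sketch of that six-term odd-harmonic sum (using NumPy, with the standard $\frac{4}{\pi k}$ Fourier series coefficients of a square wave):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)

# Sum the first six odd harmonics of a square wave:
# s(x) = (4/pi) * sum over odd k of sin(2*pi*k*x) / k
s = np.zeros_like(x)
for k in (1, 3, 5, 7, 9, 11):
    s += (4 / np.pi) * np.sin(2 * np.pi * k * x) / k

square = np.sign(np.sin(2 * np.pi * x))   # ideal square wave
print(np.max(np.abs(s - square)))         # largest error occurs at the jump discontinuities
```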
