Sampling (signal processing)
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values.

A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.

The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a reconstruction filter.

Theory

Functions of time, space, or any other dimension can be sampled, and the same ideas extend to sampling in two or more dimensions.

For functions that vary with time, let s(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence:

s(nT),   for integer values of n.

The sampling frequency or sampling rate, fs, is the average number of samples obtained in one second, so fs = 1/T. Its unit is samples per second, sometimes referred to as hertz; for example, 48 kHz is 48,000 samples per second.
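
As a concrete illustration (a minimal sketch in Python with NumPy, not part of the original article; the tone frequency and rate are arbitrary), sampling a 1 kHz sine at fs = 48 kHz amounts to evaluating s(t) at the instants t = nT:

    import numpy as np

    fs = 48_000        # sampling rate fs, in samples per second
    T = 1.0 / fs       # sampling interval T, in seconds
    f0 = 1_000         # frequency of the example tone, in Hz

    def s(t):
        # The continuous-time signal to be sampled: a 1 kHz sine.
        return np.sin(2 * np.pi * f0 * t)

    n = np.arange(48)      # sample indices n
    samples = s(n * T)     # the sampled sequence s(nT)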

Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant (T), the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That mathematical abstraction is sometimes referred to as impulse sampling.
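
As an illustrative sketch (not from the article; the values are arbitrary), the interpolation formula x(t) = Σ s(nT)·sinc((t − nT)/T) can be evaluated directly, although practical systems approximate it with realizable low-pass filters:

    import numpy as np

    def whittaker_shannon(t, samples, T):
        # Sum of sinc kernels centered on the sample instants nT and
        # weighted by the sample values; np.sinc(x) is sin(pi x)/(pi x).
        n = np.arange(len(samples))
        return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

    T = 1.0 / 48_000
    samples = np.sin(2 * np.pi * 1_000 * np.arange(48) * T)   # s(nT)
    t = np.linspace(0.0, 47 * T, 1_000)                       # dense time grid
    reconstruction = whittaker_shannon(t, samples, T)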

Most sampled signals are not simply stored and reconstructed. The fidelity of a theoretical reconstruction is a common measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose cycle length (period) is less than 2 sample intervals (see Aliasing). The corresponding frequency limit, in cycles per second (hertz), is 0.5 cycle/sample × fs samples/second = fs/2, known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a low-pass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process.
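
For example (a minimal sketch with arbitrary values, not from the article), a 7 kHz tone sampled at 8 kHz produces exactly the same samples as its 1 kHz alias, so the interpolation process would reconstruct the wrong tone:

    import numpy as np

    fs = 8_000                # sampling rate; the Nyquist frequency is 4 kHz
    n = np.arange(16)
    t = n / fs

    high = np.cos(2 * np.pi * 7_000 * t)    # 7 kHz tone, above Nyquist
    alias = np.cos(2 * np.pi * 1_000 * t)   # its alias at 8000 - 7000 = 1000 Hz

    assert np.allclose(high, alias)         # the samples are indistinguishable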

Practical considerations

In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion.

Various types of distortion can occur, including:

  • Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long, functions can have no frequency content above the Nyquist frequency. Aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter.
  • Aperture error results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. In a capacitor-based sample and hold circuit, aperture errors are introduced by multiple mechanisms. For example, the capacitor cannot instantly track the input signal and cannot instantly be isolated from it.
  • Jitter or deviation from the precise sample timing intervals.
  • Noise, including thermal sensor noise, analog circuit noise, etc.
  • Slew rate limit error, caused by the inability of the ADC input value to change sufficiently rapidly.
  • Quantization as a consequence of the finite precision of words that represent the converted values.
  • Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).

Although the use of oversampling can completely eliminate aperture error and aliasing by shifting them out of the passband, this technique cannot be practically used above a few GHz and may be prohibitively expensive at much lower frequencies. Furthermore, while oversampling can reduce quantization error and non-linearity, it cannot eliminate these entirely. Consequently, practical ADCs at audio frequencies typically do not exhibit aliasing or aperture error and are not limited by quantization error; instead, analog noise dominates. At RF and microwave frequencies, where oversampling is impractical and filters are expensive, aperture error, quantization error and aliasing can be significant limitations.

Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.
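
A rough sketch of that additive-error model follows (all magnitudes are illustrative assumptions, not from the article):

    import numpy as np

    rng = np.random.default_rng(0)
    fs, f0, bits = 48_000, 1_000, 12
    n = np.arange(4096)

    jitter = rng.normal(0.0, 100e-12, n.size)   # 100 ps rms timing error
    t = n / fs + jitter                         # perturbed sample instants

    x = np.sin(2 * np.pi * f0 * t)              # signal as seen through jitter
    x = x + rng.normal(0.0, 1e-4, n.size)       # additive analog noise

    # Uniform quantizer with 2**bits levels spanning [-1, 1]
    step = 2.0 / (2 ** bits)
    x_q = step * np.round(x / step)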

Applications

Audio sampling

Digital audio uses pulse-code modulation (PCM) and digital signals for sound reproduction. This includes analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), storage, and transmission. In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog signal. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is the ability to store, retrieve and transmit signals without any loss of quality.

When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing, such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz, 88.2 kHz, or 96 kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem. Sampling rates higher than about 50 kHz to 60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 40 to 50 kHz for this reason.

There has been an industry trend towards sampling rates well beyond the basic requirements, such as 96 kHz and even 192 kHz. Even though ultrasonic frequencies are inaudible to humans, recording and mixing at higher sampling rates is effective in eliminating the distortion that can be caused by foldback aliasing. Conversely, ultrasonic sounds may interact with and modulate the audible part of the frequency spectrum (intermodulation distortion), degrading the fidelity. One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling delta-sigma converters this advantage is less important.

The Audio Engineering Society recommends a 48 kHz sampling rate for most applications but gives recognition to 44.1 kHz for CD and other consumer uses, 32 kHz for transmission-related applications, and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering. Both Lavry Engineering and J. Robert Stuart state that the ideal sampling rate would be about 60 kHz, but, since this is not a standard frequency, they recommend 88.2 or 96 kHz for recording purposes.

Common audio sample rates include:

  • 8 kHz – telephony
  • 11.025 kHz and 22.05 kHz – one-quarter and one-half of the CD rate, used for lower-bandwidth digital audio
  • 32 kHz – transmission-related applications
  • 44.1 kHz – Compact Disc
  • 48 kHz – standard rate for professional audio and video
  • 88.2 kHz and 96 kHz – high-resolution recording
  • 176.4 kHz and 192 kHz – high-resolution recording and mastering

Bit depth

Audio is typically recorded at 8-, 16-, and 24-bit depth, which yield a theoretical maximum signal-to-quantization-noise ratio (SQNR) for a pure sine wave of approximately 49.93 dB, 98.09 dB, and 146.26 dB, respectively. CD-quality audio uses 16-bit samples. Thermal noise limits the true number of bits that can be used in quantization: few analog systems have signal-to-noise ratios (SNR) exceeding 120 dB. However, digital signal processing operations can have very high dynamic range, so it is common to perform mixing and mastering operations at 32-bit precision and then convert to 16- or 24-bit for distribution.
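
These figures follow from the standard formula for a Q-bit uniform quantizer driven by a full-scale sine, SQNR = 20·log10(2^Q·√(3/2)) ≈ 6.02·Q + 1.76 dB; a quick check (a sketch, not part of the original article):

    import math

    def sqnr_db(bits):
        # Peak SQNR of an ideal uniform quantizer for a full-scale sine.
        return 20 * math.log10(2 ** bits * math.sqrt(1.5))

    print([round(sqnr_db(q), 2) for q in (8, 16, 24)])
    # [49.93, 98.09, 146.26]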

Speech sampling

Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 100 Hz – 4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications.
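
G.711's μ-law companding curve, for instance, can be sketched in its continuous form (a simplified illustration; deployed codecs use a piecewise-linear approximation, and the tone here is an arbitrary stand-in for speech):

    import numpy as np

    MU = 255.0   # mu-law constant used by G.711 in North America and Japan

    def mu_law_compress(x):
        # Continuous mu-law companding of samples normalized to [-1, 1].
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    fs = 8_000                                   # telephony sampling rate
    t = np.arange(fs) / fs                       # one second of samples
    voice = 0.5 * np.sin(2 * np.pi * 440 * t)    # stand-in for a speech signal
    codes = np.round((mu_law_compress(voice) + 1) / 2 * 255)   # 8-bit codewords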

Video sampling

Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 720 by 576 pixels (UK PAL 625-line) for the visible picture area.

High-definition television (HDTV) uses 720p (progressive), 1080i (interlaced), and 1080p (progressive, also known as Full-HD).

In digital video, the temporal sampling rate is defined as the frame rate – or rather the field rate – rather than the notional pixel clock. The image sampling frequency is the repetition rate of the sensor integration period. Since the integration period may be significantly shorter than the time between repetitions, the sampling frequency can be different from the inverse of the sample time:

  • 50 Hz – PAL video
  • 60/1.001 Hz ≈ 59.94 Hz – NTSC video

Video digital-to-analog converters operate in the megahertz range (from ~3 MHz for low quality composite video scalers in early games consoles, to 250 MHz or more for the highest-resolution VGA output).

When analog video is converted to digital video, a different sampling process occurs, this time at the pixel frequency, corresponding to a spatial sampling rate along scan lines. A common pixel sampling rate is:

  • 13.5 MHz – CCIR 601, D1 video

Spatial sampling in the other direction is determined by the spacing of scan lines in the raster. The sampling rates and resolutions in both spatial directions can be measured in units of lines per picture height.

Spatial aliasing of high-frequency luma or chroma video components shows up as a moiré pattern.

3D sampling

The process of volume rendering samples a 3D grid of voxels to produce 3D renderings of sliced (tomographic) data. The 3D grid is assumed to represent a continuous region of 3D space. Volume rendering is common in medical imaging; X-ray computed tomography (CT/CAT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are some examples. It is also used for seismic tomography and other applications.

Undersampling

When a bandpass signal is sampled slower than its Nyquist rate, the samples are indistinguishable from samples of a low-frequency alias of the high-frequency signal. That is often done purposefully in such a way that the lowest-frequency alias satisfies the Nyquist criterion, because the bandpass signal is still uniquely represented and recoverable. Such undersampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF to digital conversion.
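
As a small sketch (not from the article), the apparent frequency of an undersampled tone follows from folding the true frequency around multiples of the sampling rate:

    def alias_frequency(f, fs):
        # Apparent frequency, in [0, fs/2], of a tone at f Hz sampled at fs Hz.
        f = f % fs               # fold into [0, fs)
        return min(f, fs - f)    # reflect the upper half down

    # Example: a 70.1 MHz IF tone deliberately sampled at 10 MHz appears
    # at 100 kHz, and remains uniquely recoverable as long as the signal
    # bandwidth fits within fs/2.
    print(alias_frequency(70.1e6, 10e6))   # 100000.0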

Oversampling

Oversampling is used in most modern analog-to-digital converters to reduce the distortion introduced by practical digital-to-analog converters, such as a zero-order hold instead of idealizations like the Whittaker–Shannon interpolation formula.
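
As one illustration of the idea (a minimal sketch with illustrative rates, not from the article), digitally upsampling before the DAC pushes the spectral images far above the audio band, so a zero-order hold followed by a gentle analog filter suffices; scipy.signal.resample_poly performs the polyphase filtering and rate change:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 48_000
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * 1_000 * t)   # samples to be converted to analog

    # 8x digital upsampling before the DAC: the images now sit near 8*fs,
    # relaxing the analog reconstruction-filter requirements.
    y = resample_poly(x, 8, 1)          # sequence at an effective 384 kHz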

Complex sampling

Complex sampling (or I/Q sampling) is the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers. When one waveform, ŝ(t), is the Hilbert transform of the other waveform, s(t), the complex-valued function s_a(t) ≜ s(t) + i·ŝ(t) is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples/second), instead of 2B (real samples/second). Equivalently, the baseband waveform s_a(t)·e^(−i·2π(B/2)·t) also has a Nyquist rate of B, because all of its non-zero frequency content is shifted into the interval [−B/2, B/2).

Although complex-valued samples can be obtained as described above, they can also be created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing ŝ(t) by processing the product sequence [s(nT)·e^(−i·2π(B/2)·nT)] through a digital low-pass filter whose cutoff frequency is B/2. Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original s(t) waveform can be recovered, if necessary.
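
A minimal sketch of the first route (arbitrary tone and rates; scipy.signal.hilbert returns the analytic signal s + i·ŝ directly), followed by the frequency shift and the discarding of every other sample described above:

    import numpy as np
    from scipy.signal import hilbert

    fs = 48_000
    B = fs / 2.0                              # content assumed confined to [0, B)
    n = np.arange(4096)
    s = np.cos(2 * np.pi * 5_000 * n / fs)    # real-valued samples s(nT)

    s_a = hilbert(s)                          # analytic signal s(nT) + j*s_hat(nT)
    baseband = s_a * np.exp(-2j * np.pi * (B / 2) * n / fs)   # shift to [-B/2, B/2)

    iq = baseband[::2]    # half as many complex samples, no information lost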

See also

  • Crystal oscillator frequencies
  • Downsampling
  • Upsampling
  • Multidimensional sampling
  • In-phase and quadrature components and I/Q data
  • Sample rate conversion
  • Digitizing
  • Sample and hold
  • Beta encoder
  • Kell factor
  • Bit rate
  • Normalized frequency

Further reading

  • Matt Pharr, Wenzel Jakob, and Greg Humphreys, Physically Based Rendering: From Theory to Implementation, 3rd ed., Morgan Kaufmann, November 2016. ISBN 978-0128006450. The chapter on sampling (available online) is nicely written, with diagrams, core theory, and code samples.

External links

  • Journal devoted to Sampling Theory
  • I/Q Data for Dummies – a page trying to answer the question "Why I/Q data?"
  • Sampling of analog signals – an interactive presentation in a web-demo at the Institute of Telecommunications, University of Stuttgart

Text is available under the CC BY-SA license. Source: Sampling (signal processing), Wikipedia.


