Direct sampling to bandwidth output, how done?

Discussion in 'Software Defined Radio (SDR)' started by KX4AZ, Aug 1, 2020.

  1. KX4AZ

    KX4AZ Ham Member QRZ Page

    (I posted this same query at the rtl-sdr.com forum, let's see which forum "wins" the game!)

    This is a fairly deep technical question that applies to all SDRs, purely my curiosity, posting on the chance there is an SDR expert out there who can answer it in a simple manner, or post a hyperlink...

    I have always wondered how the raw ADC samples in the RTL-SDR dongle (28.8 megasamples/second?) are processed and downsampled to the lower rates needed to transfer data out of the dongle to the PC software for a given bandwidth setting. In other words, a 2 MHz bandwidth setting requires a lot more bits/second from the USB connection than a 1 MHz setting, but how exactly is the original raw data from the ADC sampling transformed in the dongle? Is an FFT done, followed by filtering at the selected bandwidth, then a reverse transform? Or some kind of mathematical averaging on the entire stream to downsample it?
     
  2. N6YWU

    N6YWU Ham Member QRZ Page

    The mathematically correct solution is to use a perfect band-limiting anti-aliasing (low-pass) filter before decimation or interpolation. In reality, no filter is perfect, so an anti-aliasing filter should have enough stop-band attenuation to push everything at and above the new Nyquist frequency (half the downsampled sample rate) below the radio's desired noise floor.

    Anti-alias filtering can be implemented in software or hardware by many different methods, including a high-order IIR filter, or a windowed-sinc or computer-optimized FIR filter. An IIR is usually implemented as sections of recursive arithmetic (biquads, corresponding to sections of analog LC filters). An IIR works similarly to averaging, but due to feedback and multiple sections it is a lot smoother in the frequency domain. An FIR filter can be implemented by straight convolution (a lot of vector arithmetic), perhaps using a polyphase filter bank or Farrow interpolator for each set of FIR filter coefficients, or by overlap-add/save FFT/IFFT fast convolution for greater efficiency. There are other solutions as well, including CIC filters and incremental multi-rate filtering and downsampling (reducing the sample rate in steps).
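    As a concrete, purely illustrative sketch of the windowed-sinc FIR approach (assuming Python with SciPy, which nothing above implies, and an RTL-SDR-like 28.8 MS/s input reduced to 2.4 MS/s):

    Code (Python):
    from scipy import signal

    fs_in = 28.8e6                      # assumed raw ADC sample rate
    decim = 12                          # 28.8 MS/s / 12 = 2.4 MS/s output rate
    fs_out = fs_in / decim

    # Windowed-sinc FIR low-pass whose stopband starts below the new Nyquist
    # frequency (fs_out / 2), so discarding samples afterwards cannot alias.
    taps = signal.firwin(numtaps=129, cutoff=0.8 * fs_out / 2, fs=fs_in)

    def fir_decimate(x, taps, decim):
        """Anti-alias filter the raw stream, then keep every decim-th sample."""
        filtered = signal.lfilter(taps, 1.0, x)
        return filtered[::decim]

    For what it's worth, scipy.signal.decimate(x, 12, ftype='fir') wraps essentially the same filter-then-discard operation in a single call.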

    For fractional sample-rate ratio changes, one can also upsample before downsampling, but I prefer polyphase interpolation methods over explicit upsampling, as they are more computationally efficient.
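    For example, a polyphase resampler does the up-then-down conversion in a single pass. The snippet below is only a sketch assuming SciPy, and the 16/225 ratio is just an assumed example:

    Code (Python):
    import numpy as np
    from scipy import signal

    x = np.random.randn(288_000)        # placeholder block of raw ADC samples
    # Upsample by 16 and downsample by 225 in one polyphase filter pass:
    # 28.8 MS/s * 16 / 225 = 2.048 MS/s.
    y = signal.resample_poly(x, up=16, down=225)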

    Band-pass filtering plus under-sampling can also be used to downsample a tuned spectrum slice. But heterodyning down to complex (IQ) baseband before low-pass filtering is the more efficient and more commonly implemented approach.
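    The heterodyne step itself is just a multiplication by a complex local oscillator. A minimal sketch (again assuming Python/NumPy) of moving the slice of interest to 0 Hz, after which the low-pass-and-decimate step above applies:

    Code (Python):
    import numpy as np

    def mix_to_baseband(x, f_center, fs):
        """Heterodyne a sample stream so that the frequency f_center lands
        at 0 Hz, producing a complex IQ stream."""
        n = np.arange(len(x))
        lo = np.exp(-2j * np.pi * f_center * n / fs)    # complex local oscillator
        return x * lo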
     
    Last edited: Aug 3, 2020 at 4:40 PM
  3. KX4AZ

    KX4AZ Ham Member QRZ Page

    N6YWU,
    Thank you for taking the time to respond with such a detailed technical answer. Alas, your knowledge is WAY above my current ability to fully comprehend what you have written, so let me compose an alternative scenario/question that may help to refine any additional answer(s) to this thread, using the RTL-SDR dongle's direct sampling mode...

    In the direct sampling mode the dongle allows me to select the "Q branch" output, which I believe is a 28.8 megasample/second, 8 bits/sample stream coming out of the ADC, whose input is directly connected to the HF antenna with no prefiltering other than a simple 25 MHz low-pass diplexer. So the ADC output is a direct sample of the antenna, without the heterodyne mixer that is used for the >25 MHz frequencies. However, that 28.8 megasamples/second bitstream is the internal raw sampling inside the dongle that the SDR software never sees. I have to set a desired bandwidth of up to around 2 MHz to be output from the dongle, and use the software to view a 2 MHz slice in the 0-14.4 MHz range.

    In reality I can still receive signals beyond the Nyquist limit of 14.4 MHz, and of course via aliasing(?) everything "folds back" from the 28.8 limit...so for example a broadcast 1340 kHz AM signal will also appear at 28.8 - 1.34 = 27.46 MHz.

    What I am trying to understand (and which you tried hard to spell out for me!) is how the raw bitstream is downconverted to the 2 MHz slice of spectrum to be viewed. Is that downsampling operation essentially an anti-aliasing filter applied to the raw data, then moved up and down for the desired window, etc.??? Or what type of mathematical operation is being done - is that what decimation is? Hopefully these questions illustrate my current level of misunderstanding ;>.
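    Purely to put numbers on the fold-back I described above (a toy calculation of my own, nothing the dongle itself runs, with Python only assumed for the example):

    Code (Python):
    fs = 28.8e6                 # direct-sampling ADC rate
    f_signal = 1.34e6           # 1340 kHz AM broadcast carrier

    # After real sampling, a signal at f and its image at fs - f are
    # indistinguishable, so the 1340 kHz station also shows up folded:
    f_image = fs - f_signal
    print(f_image / 1e6, "MHz")   # 27.46 MHz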
     
  4. N6YWU

    N6YWU Ham Member QRZ Page

    Decimation means throwing away samples. If you throw enough away, you end up with fewer samples (i.e. a lower sample rate). The math says that if the signal is sufficiently band-limited to contain spectrum only below (or within) a certain bandwidth, you can throw away samples without losing any information or causing any aliasing.

    Without first band-limiting, you end up with artifacts (noise and aliases).

    If you want a slice of frequency above baseband (0 Hz), then a (software) mixer is usually employed, even given baseband sampling.
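    Putting those pieces together, here is a minimal end-to-end sketch of the mix, band-limit, and decimate chain (Python/SciPy and the 28.8 MS/s in, 2.4 MS/s out numbers are only assumed for illustration; this is not necessarily how the dongle does it internally):

    Code (Python):
    import numpy as np
    from scipy import signal

    def tune_slice(x, f_center, fs_in=28.8e6, decim=12):
        """Mix the wanted slice down to 0 Hz, low-pass to band-limit it,
        then decimate. A conceptual sketch, not the dongle's actual design."""
        fs_out = fs_in / decim
        n = np.arange(len(x))
        lo = np.exp(-2j * np.pi * f_center * n / fs_in)          # software mixer
        baseband = x * lo                                         # slice now centered at 0 Hz
        taps = signal.firwin(129, 0.8 * fs_out / 2, fs=fs_in)     # anti-alias low-pass
        filtered = signal.lfilter(taps, 1.0, baseband)
        return filtered[::decim]                                  # keep every 12th sample

    # e.g. a roughly 2 MHz wide view around the 1340 kHz AM station:
    # iq = tune_slice(raw_samples, f_center=1.34e6)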
     
