# The "dB" column in WSJT-X

Discussion in 'Working Different Modes' started by KT5MR, Jun 28, 2020.

1. ### KT5MR (Ham Member, QRZ Page)

I'm a little embarrassed that I don't really grasp what the dB column in WSJT-X is telling me.

I know that a decibel is based on a ratio of two signal levels; that is, a decibel is a measurement compared to a reference. It is my understanding that the dB column is the dB value calculated from the ratio of signal to noise.

This is where I start getting confused. Am I right to say that a value of -20 is that the decoded signal is 20 dB below the average level of noise I am receiving at that moment? (That is, the decoded signal is -20 dB lower than the reference signal, which is the noise.)

Relatedly, how does the software know what noise is? How does it distinguish between signal and noise, especially when they are very close in signal level? I don't have an electronics background, let alone a signal processing background, so I am trying to understand this only in very general terms.

2. ### AA4PB (Ham Member, QRZ Page)

The dB column reports the S/N ratio, BUT it is the signal level in the roughly 50 Hz bandwidth of the signal compared to the noise in the total 3 kHz bandwidth of your receiver. That provides a good, measurable reference number, but the numbers are pretty amazing. When you copy a signal at -24 dB, that signal is 24 dB below the total noise in the receiver bandwidth, but the decoder is effectively looking at the signal through a 50 Hz filter, so it's seeing a much lower noise level. It's much like listening to a weak CW signal in a band full of noise using a 3 kHz filter: switch in a 100 Hz filter and the CW signal jumps out at you, because the narrow filter cuts out most of the noise.
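The bandwidth arithmetic behind this is easy to sketch. (For reference, the WSJT-X user guide specifies a 2500 Hz reference bandwidth for its SNR reports; the 50 Hz figure below is an approximation of an FT8-like signal's occupied bandwidth.)

```python
import math

def snr_in_bandwidth(snr_ref_db, ref_bw_hz, signal_bw_hz):
    """Convert an SNR quoted in a reference bandwidth to the SNR seen
    in a narrower bandwidth, assuming flat (white) noise."""
    return snr_ref_db + 10 * math.log10(ref_bw_hz / signal_bw_hz)

# A -24 dB report (2500 Hz reference) viewed through a 50 Hz filter:
print(round(snr_in_bandwidth(-24, 2500, 50), 1))  # -7.0
```

So a signal reported at -24 dB is only about 7 dB below the noise actually competing with it inside its own narrow bandwidth, which is why such reports are decodable at all.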

3. ### NQ4T (Ham Member, QRZ Page)

My understanding is that's one of the reasons all the WSJT-X modes are based on timed cycles and why clock synchronization is important. If you know there are going to be signals at a specific point in time, then it's pretty easy to figure out the "reference level" during the moments when you know there aren't any transmissions occurring. The software doesn't inherently know what is noise and what is signal. It's the same reason I have to remind people on audio-editing forums that the forensic audio stuff they see on TV is totally bogus: in those cases, I have to tell the software what the noise is before it can do anything.

Same basic thing here: WSJT-X knows at what times there should be signals and at what times there shouldn't be. It can take the times when it "knows" there are no transmissions and use that as the baseline.

4. ### KP4SX (Premium Subscriber, QRZ Page)

WSJT-X. I'm a little slow. Are you talking about the guard 'band' on either side of the transmission cycle? What if there are stations transmitting during those periods?

6. ### KT5MR (Ham Member, QRZ Page)

Thanks all, that absolutely makes sense. I had forgotten that there is a small gap between transmissions, and that the signal level with no transmissions present would be the reference point. It also makes sense that if the value is computed from just the 50 Hz swath, then the decoder would sometimes appear to be working better than should be possible given the noise across the entire 3 kHz bandwidth.

One of the things I'm still finding strange when I look at a spectrum scope is that we pick a signal level and then measure everything "down" from that and spend our time in negative values. Is this just convention, or is there significantly more to it?

7. ### K7JEM (Ham Member, QRZ Page)

Those negative numbers show the improvement over the complete SSB channel bandwidth, which is typically 2.5-3 kHz or so (I am not sure which bandwidth is actually referenced). If all of the digital modes use the same reference bandwidth, then the one with the lowest dB reading when decoding is the most sensitive. This takes the differing bandwidths and modulation schemes out of the equation and lets a person compare two different digital transmissions directly, to see which would do a better job of passing the data.

The mode that can decode the lowest is not always the best choice, as it is usually slower than modes that need more signal. As an example, FT8 may decode at -20 dB, and PACTOR 3 might decode at +12 dB. But PACTOR 3 can transmit more data in one second than FT8 can in one minute.

Last edited: Jun 29, 2020
8. ### N3KE (Ham Member, QRZ Page)

Well, the Shannon limit is actually an Eb/N0 of -1.59 dB, so even in the Eb/N0 sense you can decode a bit “below the noise”. With large FEC block sizes (>10 kbits) and low code rates (<1/4), modern decoders can get within a fraction of a dB of that.
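The -1.59 dB figure drops out of the Shannon capacity formula C = B·log2(1 + S/N) in the infinite-bandwidth limit, where the minimum Eb/N0 tends to ln 2:

```python
import math

# Shannon: C = B * log2(1 + S/N).  Writing S = Eb * Rb and N = N0 * B,
# then letting B -> infinity while running at Rb = C, the required
# Eb/N0 approaches ln(2) -- the ultimate limit for error-free decoding.
eb_n0_min_db = 10 * math.log10(math.log(2))
print(round(eb_n0_min_db, 2))  # -1.59
```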

In the sense of actual SNR, where the noise bandwidth is typically defined to be that of the communications signal itself, you can of course decode arbitrarily far “below the noise” with spreading. Spread spectrum signals are operationally run at SNRs of -30 dB or lower all the time. Of course, their Eb/N0 is still above the Shannon limit by some margin. But SNR as typically defined is very low, and on a spectrum analyzer it is impossible to see any difference between the signal being present or not.
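The link between the two quantities is SNR = (Eb/N0)·(Rb/B): spreading a modest bit rate Rb over a much wider noise bandwidth B drives the SNR far negative even while Eb/N0 stays positive. A sketch with hypothetical link numbers:

```python
import math

def snr_db_from_eb_n0(eb_n0_db, bit_rate_hz, noise_bw_hz):
    """SNR (dB) in a given noise bandwidth: SNR = Eb/N0 * (Rb / B)."""
    return eb_n0_db + 10 * math.log10(bit_rate_hz / noise_bw_hz)

# Hypothetical DSSS link: 4 dB Eb/N0, 1 kbit/s of data spread over 2 MHz.
print(round(snr_db_from_eb_n0(4.0, 1_000, 2_000_000), 1))  # -29.0
```

With those made-up numbers the signal sits about 29 dB below the noise in its own bandwidth, yet its Eb/N0 is still several dB above the Shannon limit.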

Sorry to be pedantic!

9. ### AI3V (Ham Member, QRZ Page)

Except that would be incorrect.

You talk about a "spread spectrum" signal being receivable "30 dB" below the noise.

You are confusing the overall bandwidth a signal occupies with its instantaneous bandwidth.

It's a very common mistake.

Eb/N0 will always be a positive number in real-world gear.

Also, understand that if you have more than one station transmitting a mode such as spread spectrum, the overall channel capacity is less than if every station has a "fixed" channel; the inevitable collisions of multiple signals are the culprit.

Also, be wary of modes with "deep search" functions. Some of them can "decode" an entire message from the "reception" of a single bit; they "work" by having you enter the callsign of the station you think you are going to work before you turn on the receiver.

Rege

10. ### N3KE (Ham Member, QRZ Page)

Everything I said is precisely correct.

You appear to think spread spectrum only means frequency hopping and are unaware of direct-sequence spread spectrum (DSSS). In DSSS the instantaneous bandwidth is approximately the chip rate (approximately, because the chip-shaping filter can adjust it a bit).

Sorry this one is unique to you! Read up on DSSS. It is actually the more common spread spectrum mode used for CDMA.

Given that every real-world system adds some link margin and Shannon is just a bit below 0 dB, indeed you really aren't going to find a real-world system running that low.

Of course, but "collision" is not the best way to look at it. In DSSS (what is most commonly used for multi-user CDMA systems) there are no collisions; instead, each user adds what appears to the other users to be uncorrelated random noise to the channel.
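That "other users look like noise" behavior can be shown with a toy direct-sequence sketch (the spreading factor and random codes here are made up for illustration): despreading with one user's code recovers that user's bit coherently, while the other user's chips contribute only a small, uncorrelated residue.

```python
import random

random.seed(1)
N = 1024  # spreading factor (chips per bit); arbitrary for this demo

def spreading_code():
    """A random +/-1 chip sequence standing in for a real PN code."""
    return [random.choice((-1, 1)) for _ in range(N)]

code_a, code_b = spreading_code(), spreading_code()
bit_a, bit_b = +1, -1

# Channel: both users' spread signals simply add on top of each other.
rx = [bit_a * ca + bit_b * cb for ca, cb in zip(code_a, code_b)]

# Despread with user A's code: A's bit correlates coherently (~N = 1024),
# while user B's signal averages out like random noise (~sqrt(N) = 32).
corr_a = sum(r * ca for r, ca in zip(rx, code_a))
corr_b = sum(r * cb for r, cb in zip(rx, code_b))
print(corr_a > 0, corr_b < 0)  # True True: both bits recovered
```

The sign of each correlation gives each user's bit, even though neither signal is separable by frequency or time; in a real system the residue from other users just raises the effective noise floor.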

Yes, the AP (a priori) decoding “cheats”, of course.