Does FCC permit General+ to do "non-MORSE" CW?

Discussion in 'Working Different Modes' started by N4VDI, May 11, 2021.

  1. N4VDI

    N4VDI Ham Member QRZ Page

    I should also mention... there's no hard need for a USB "audio chip" per se... in a pinch, even an 8-bit AVR microcontroller would probably do. About 12 years ago (god, I feel old), Dean Camera figured out a way to literally bit-bang USB 1.1 using GPIO on all but the most limited-capability AVR microcontrollers (though I think most of their chips since ~2016 have had on-die USB).

    The main advantage of going the USB-audio route is that Windows & Linux have driver-free support for it and will try harder to handle it with low latency. I think by default, Windows & Linux use isochronous mode for audio.

    With a microcontroller and a USB serial port, getting low latency & high speeds can be a challenge and requires a lot more tweaking & configuration. With USB audio, it Just Works.
     
    Last edited: Jun 6, 2021
  2. N6YWU

    N6YWU Ham Member QRZ Page

    Minimalist?! Count the transistors in those monoliths and op amps.

    QST used to have articles on building a transmitter with a single transistor (maybe way up to 2!) and one crystal. Or a one-vacuum-tube transmitter (the dual-plate type, with one section used as the rectifier).

    If you want to use a lot of transistors (on the order of billions) for receive, use a computer (Raspberry Pi, Mac, or PC) with one of those $15 USB dongles. The cheapest RTL-SDRs require maybe a one-jumper mod to become direct-sampling HF receivers for everything from OOK CW to FST4W.
     
  3. N6YWU

    N6YWU Ham Member QRZ Page

    No new alphabet is needed for most of the benefits of digital modes (except bandwidth). You could even use a small subset of standard ITU International Morse Code characters to build a group coded message. Publish the encoding, add your callsign at the end, and it's legal human copyable CW.
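    For illustration, the "published subset" approach might be sketched like this — the 16-character subset, nibble-to-character mapping, and framing below are invented for the example, not any published standard:

```python
# Hypothetical sketch: map each 4-bit nibble of a payload onto one of 16
# standard ITU Morse characters, append the sender's callsign, and the
# result is sendable as ordinary human-copyable CW. The subset choice and
# framing are illustrative assumptions only.
SUBSET = "0123456789ABCDEF"  # 16 IMC characters, one per nibble value

def encode(payload: bytes, callsign: str) -> str:
    chars = []
    for byte in payload:
        chars.append(SUBSET[byte >> 4])   # high nibble
        chars.append(SUBSET[byte & 0xF])  # low nibble
    return "".join(chars) + " DE " + callsign

print(encode(b"\x4f\x4b", "N6YWU"))  # "4F4B DE N6YWU"
```

    Publishing the `SUBSET` table and the nibble ordering is what keeps this on the right side of the no-obfuscation rule.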
     
  4. N4VDI

    N4VDI Ham Member QRZ Page

    In theory, you could probably use an ESP32 as a really, really godawful CW transmitter for 80–17 meters. Its DAC (in LED-PWM mode) can go up to 40 MHz in 1-bit mode, up to 20 MHz in 2-bit mode, and 8 MHz in 5-bit mode.
    • 8 MHz is high enough to directly output ~3.5 MHz with 32 potential amplitude levels.
    • 20 MHz is high enough to output a 2-bit modified sine wave for 7 MHz.
      • I'm not 100% sure, but I think I read somewhere that if you're generating CW, the main consequence of sub-Nyquist aliasing is that it makes the output really, really "dirty". Essentially, you'd be generating output at half the desired frequency, filtering the fundamental away with a high-pass filter, then amplifying & transmitting its first harmonic (through a low-pass filter further downstream). I think this is actually how a NanoVNA does 1.5–3.0 GHz.
    • 40 MHz is theoretically high enough to output 1-bit PWM on 30, 20, and 17m. But I know this would be just about the most awful output you could possibly transmit. With a low-pass filter and at milliwatt levels, it might be OK for transmitting something across your desk to a receiver a few feet away, but it probably isn't something you'd ever want to transmit with enough power to affect anyone more than 10-20 miles away.
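    The "transmit the harmonic" idea above is easy to check numerically: a 1-bit square wave at f0 carries energy at f0, 3·f0, 5·f0, ..., so a filter chain can pick off an odd harmonic. The frequencies below are scaled way down for the demo; they are not the actual ESP32 clock figures.

```python
# Numerical check: a 1-bit square wave's spectrum is the fundamental plus
# odd harmonics (falling off roughly as 1/n), with no even harmonics.
import numpy as np

fs = 1000          # sample rate (arbitrary units)
f0 = 50            # fundamental of the 1-bit square wave
t = np.arange(fs) / fs
# tiny phase offset avoids sign(0) at the sample points
square = np.sign(np.sin(2 * np.pi * f0 * (t + 1e-4)))

spectrum = np.abs(np.fft.rfft(square))
peaks = np.argsort(spectrum)[-3:]   # three strongest bins
print(sorted(peaks))                # expect bins near 50, 150, 250
```

    The absence of even harmonics is also why the "half-frequency plus high-pass" trick lands you on 3× the generated frequency, not 2×.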
    I remember seeing a YouTube video made by a guy who built a CW transmitter using a 555 (as the oscillator) and an LM386 as the amplifier.

    I think I remember seeing somewhere that the single-chip UHF SSB-capable transmitter chips sold by someone (Analog? NatSemi?) can unofficially be pushed to do HF, too, but nobody does it because the chips themselves are too expensive to waste on a radio that would work poorly.
     
    Last edited: Jun 6, 2021
  5. AC0GT

    AC0GT Ham Member QRZ Page

    Agreed, that's one way to do it. Perhaps calling it a new alphabet isn't the most accurate but then what should it be called? It's not a new mode, it's still OOK. It's not exactly Morse code either, the individual characters may exist in the Morse code alphabet but they are not used in the same way. Calling it a "code" may be accurate but could imply an intent to obfuscate the meaning, and that's not kosher by FCC rules and international treaty. Then again "Morse code" doesn't upset people. Neither does American Standard Code for Information Interchange, but most people just know it as ASCII.

    This is an encoding, as you say, and publishing the code would certainly mean it is not obfuscating anything. Calling it Morse code though is a stretch. I would like to see this put to use to test the limits on what the FCC thinks on their current rules for Technician. If we don't get new rules on licensing soon then we need to force the point with the FCC through more "creative" means.

    If this upsets the Morse code "purist" curmudgeons then all the better. That's just icing on the cake for me.

    I expect that limiting the "alphabet" or "code" to existing characters in the IMC set could place severe limits on any new OOK system. If that speeds adoption by circumventing some legal and technical barriers then I'd consider that acceptable. Converting from plain text to the subset of IMC characters that fit the code, and back, would be a simple search-and-replace script. Sending and receiving can then use any current keyboard-to-IMC software with some scripts. Adding this into existing software becomes relatively trivial. Putting in FEC is then just some added bits to the script.
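    As one concrete example of "FEC is just some added bits", a classic Hamming(7,4) code turns each 4-bit group into 7 bits and can correct any single bit error. The bit ordering below (p1 p2 d1 p3 d2 d3 d4) is one common convention, chosen here purely for illustration:

```python
# Toy FEC layer: Hamming(7,4) encoder. Each parity bit covers three of the
# four data bits; a receiver can locate and flip any single corrupted bit.
def hamming74_encode(d):
    d1, d2, d3, d4 = d            # d: list of four bits (0 or 1)
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 1]
```

    The 7-bit codewords would then be mapped onto the IMC subset exactly like plain data bits.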

    There's nothing that requires one version of this new code. There can be a version that limits itself to a subset of the IMC alphabet, and one that does not. One version can use ASCII as the input and output characters; another can use Unicode for a larger set of characters.

    Since the UTF (Unicode Transformation Format) encodings are backward compatible with ASCII, this new code can easily "grow" as the details are developed. Every ASCII file is already valid UTF-8, and UTF-16 covers the same full Unicode character set with a different byte layout. This simplifies the "growth": it should be possible for the code that is limited to the IMC alphabet to be a subset of the code that is not, and for the ASCII-only code to be a subset of the full-Unicode code. The IMC-only and ASCII-only subsets may or may not be the same thing. Also, correlating the full Unicode set to some "extended IMC" alphabet may or may not be required.

    With the point made a few pages back about extending IMC with new "prosigns", there is an argument that any alphabet being used is still inside the IMC alphabet. Given there are a number of existing extensions to IMC for non-Latin characters, this is an even stronger argument.

    I may be repeating myself from previous posts, so forgive me if this all sounds too familiar. I do think it important, for ease of decoding, that the OOK sequences for each "letter" in the "alphabet" take the same time to send. This doesn't mean having the same number of "dits" and "dahs", though that would work, only that from start to finish each "letter" takes the same time to send. If the "alphabet" is built up from something like 16 or 32 "letters", correlating to 4 or 5 bits, then finding valid IMC characters to support the claim that this is still IMC may be trivial. Just construct the code without regard for any IMC characters and there's likely going to be an existing IMC character to match, though again a few may need to "borrow" from less commonly used characters outside the formally recognized IMC alphabet.
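    The "equal send time" idea can be checked against standard IMC timing (dit = 1 unit, dah = 3 units, 1-unit gap between elements). A short script grouping characters by on-air duration shows which subsets qualify — the character table below is a partial one, just enough for the demo:

```python
# Group IMC characters by on-air duration in dit units, to find
# equal-duration subsets usable as a fixed-length "alphabet".
MORSE = {"E": ".", "T": "-", "I": "..", "A": ".-", "N": "-.", "M": "--",
         "S": "...", "U": "..-", "R": ".-.", "D": "-..", "K": "-.-",
         "G": "--.", "O": "---", "H": "....", "5": "....."}

def duration(code):
    units = sum(1 if e == "." else 3 for e in code)
    return units + (len(code) - 1)        # 1-unit gap between elements

by_len = {}
for ch, code in MORSE.items():
    by_len.setdefault(duration(code), []).append(ch)
print(by_len)   # e.g. A, N, S all take 5 units; I and T both take 3
```

    Run over the full IMC table, the same grouping would reveal whether a 16- or 32-character equal-duration subset exists without borrowing from outside the formal alphabet.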

    I like this idea because it uses OOK in a new way, giving an old mode (and with it, old hardware) a new life. With such simple modulation there's a simple interface to the computer; it needs only one "bit", or wire, to key the transmitter. Getting audio into a computer is trivial, so receiving doesn't carry the same issues, and the sending and receiving computers don't need to be the same. The sending and receiving radios don't need to be the same either.

    One more thing...
    There was an idea posed to put what could be called "metadata" in a phase or frequency shift of the OOK. I don't know if that is wise as then it's complicating things and starting to look like other existing modes. Maybe it can add some error correction. Perhaps it can help in the ID of the user, location of the station, the mode being used, or something. I'll just say I need some details to be convinced of the utility.
     
  6. N6YWU

    N6YWU Ham Member QRZ Page

    Morse Code is Morse Code. It's already commonly used over the air for Q-codes, grid squares, oddball contest exchanges, abbreviations, and a whole bunch of other commonly encoded meanings (e.g. any non-English words in U.S. QSOs). Using ITU Morse Code characters to send a hexadecimal S-record (see last month's ARRL QEX article on doing just that) or other string containing sync markers or forward error correction codes would be no different. As long as the full message format is public and well published, there is no obfuscation or secret-key encryption.

    There are tons of university textbooks and academic papers on timing and symbol recovery. For synchronous CW decoding, start reading.
     
  7. AF7TS

    AF7TS Ham Member QRZ Page

    I'm not going to address the many good points in AC0GT's post, but I just wanted to focus on the above.

    Given that many popular computer digital modes use a variable length 'alphabet' I don't believe that keeping a constant timing alphabet is a huge benefit for decoding.

    If the goal is to send English text, then the variable length of the IMC alphabet is a reasonable 'compression' scheme.

    On the other hand, if the goal is to transmit arbitrary binary files, then I agree that IMC coding doesn't make much sense.

    I know that I've been arguing that IMC sent using on-off keying with something like FM or MFM channel coding meets the letter of the law (counts as 'CW' per FCC rules), but here I am sort of making the counter-argument.

    Imagine a simple arbitrary content digital mode as follows:
    Every 64 bits of data is combined with 16 bits of redundancy, interleaved, and expressed as 80 bits to be sent down the channel.
    These 80 bits get represented as I or T: 1 = di-dit, 0 = dah. In other words, the channel code is 1 -> 101000, 0 -> 111000.
    This list of 80 letters is packaged with call signs, e.g. N6YWU DE AF7TS ITTTIIITITTTIIIIITIITIT.....
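    The bit-to-letter mapping and framing of that hypothetical mode might look like this — the redundancy/interleaving step is deliberately stubbed out, since only the I/T channel code matters for the argument:

```python
# Sketch of the mode described above: each channel bit becomes an IMC
# letter (1 -> I = di-dit, 0 -> T = dah), framed with callsigns.
# Redundancy and interleaving are assumed to happen upstream.
def frame(bits, to_call, from_call):
    body = "".join("I" if b else "T" for b in bits)
    return f"{to_call} DE {from_call} {body}"

print(frame([1, 0, 0, 1, 1], "N6YWU", "AF7TS"))  # "N6YWU DE AF7TS ITTII"
```

    Every character sent is legal IMC, which is exactly what makes the letter-versus-spirit question interesting.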

    Not trying to claim that this is a great digital mode. I am simply presenting an example that probably also fits the 'letter of the law' (on-off keying using international morse code) that seems to strongly violate the entire spirit of the special place for 'CW' (a true digital mode made for direct human brain interpretation).

    I guess what this boils down to is that I really like the idea of tweaks to OOK morse code to make it more machine usable, and like the idea of exploring different modulation approaches that might make OOK morse more robust for human interpretation, but really dislike the idea of trying to 'sneak' a non-human usable digital mode into the rules for CW.

    73
    Jon
    AF7TS
    While I have been arguing that IMC sent using on-off keying with something like FM or MFM channel coding still counts as 'CW' per FCC rules, and IMHO meets the _spirit_ of the rules, the example above arguably does not.
     
    AC0GT likes this.
  8. NQ1B

    NQ1B Ham Member QRZ Page

    Okay, that's certainly not what I mean when I say minimalist.

    Minimalist means:
    - Minimum cost (fewest components to purchase)
    - Minimum work to build (fewest discrete components to solder or attach)

    Who cares how many components are etched into the monolith or op-amp? We don't buy them that way, and we don't assemble them that way.

    You might as well argue that a tuning capacitor is not one component, but that each rotor and stator plate counts as a separate component. "That tuning capacitor has 20 components!" And there may have been a time when it made sense to think that way, because the builder might well have had to assemble that tuning capacitor from plates!

    But in 2021? A monolithic amp IC is one component, period.
     
  9. NQ1B

    NQ1B Ham Member QRZ Page

    Assuming that we want the system we are discussing to decode as well as encode the transmission, it's going to take more than one switch. The audio has to come from the radio receiver through some kind of interface into the computing section for decoding, whether decoding is done on a phone, a Pi, desktop computer, or what have you.

    The receiving interface doesn't have to be complex, but it really should include things like an audio isolation transformer. It's more than just a wire from the radio to the computer.

    If the radio system only transmits Morse (or other OOK mode) automatically, and receiving is done only by ear, then you don't need a receive interface, but I don't think that is what you are proposing.
     
  10. N4VDI

    N4VDI Ham Member QRZ Page

    The major advantage of encoding FEC via subtle timing-jitter tweaks is that it would have zero impact on occupied bandwidth or transmission time, and a human who isn't interested in using it wouldn't have it "in his/her face" annoying them. It would be kind of like using invisible watermarks in images to embed metadata without defacing the image itself.

    In contrast, appending strings of characters to the end would increase transmission time (possibly substantially) and annoy humans, who'd still have to half pay attention to it even if they didn't try hard to accurately copy it.
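    The timing-jitter side channel could be sketched like this: each keyed element keeps its nominal Morse length, but a small plus/minus tweak to its duration carries one hidden bit per element. The 5% jitter figure is invented for the example — real jitter would have to stay below what a human copyist notices:

```python
# Illustrative sketch: embed one hidden bit per keyed element in a small
# duration tweak. Nominal lengths are in dit units (dit = 1, dah = 3).
NOMINAL = {".": 1.0, "-": 3.0}
JITTER = 0.05   # assumed side-channel depth, +/- 5% of nominal

def key_with_sidechannel(elements, hidden_bits):
    durations = []
    for el, bit in zip(elements, hidden_bits):
        tweak = JITTER if bit else -JITTER
        durations.append(NOMINAL[el] * (1 + tweak))
    return durations

print(key_with_sidechannel([".", "-", "."], [1, 0, 1]))
```

    A decoder that measures element lengths recovers the hidden bits; a human ear just copies slightly imperfect fist.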
     
