Saturday, April 1, 2017

President Trump's "America First Energy Plan" Secrets Leaked: Quake Field Generator

April 1st, 2017 Lexington, Massachusetts

As President Trump has stated publicly many times, a sound energy policy begins with the recognition that we have vast untapped domestic energy reserves right here in America. Unfortunately, the secret details behind the ambitious America First Energy Plan were leaked late last night.  

To pre-empt any fake news from the Liberal Media, I am making a full disclosure of the secret project I have been working on for the last 18 months in propinquity to MIT Lincoln Laboratory, a federally funded research and development center chartered to apply advanced technology to problems of national security. 

I am unveiling a breakthrough technology that will lower energy costs for hardworking Americans and maximize the use of American resources, freeing us from dependence on foreign oil. This technology allows harvesting clean energy from around the world and making other nations pay for it, according to President Trump's master plan.  

The technology is based on quake fields and provides virtually unlimited free energy, while protecting clean air and clean water, conserving our natural habitats, and preserving our natural reserves and resources. 

What is Quake Field?

Quake field theory is a relatively unknown part of seismology. Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. The field also includes studies of earthquake environmental effects such as tsunamis, as well as diverse seismic sources such as volcanic, tectonic, oceanic, atmospheric, and artificial processes such as explosions.  

Quake field theory was formulated by Dr. James von Hausen in 1945 as part of the Manhattan Project during World War II. It provides a mathematical model of how energy propagates through elastic waves. During the development of the first nuclear weapons, scientists faced a big problem: nobody was able to provide an accurate estimate of the energy yield of the first atom bomb. People were concerned about possible side effects, and there was speculation that the fission reaction could ignite the Earth's atmosphere. 

Quake field theory provides precise field formulas to calculate energy propagation in planet-like bodies. The theory has been proven in hundreds of nuclear weapon tests during the Cold War. However, most of the empirical research and scientific papers have been classified by the U.S. Government, so you cannot really find details on Wikipedia or in other public sources due to the sensitivity of the information.

In recent years U.S. seismologists have started to use quake field theory to calculate the amount of energy released in earthquakes. This work was enabled by the creation of a global network of seismic sensors that is now available. These sensors provide real-time information on earthquakes over the Internet. 

I have a Raspberry Shake at home. This is a Raspberry Pi powered device that monitors quake field activity and is part of a global seismic sensor network. Figure 1 shows quake field activity on March 25, 2017. As you can see, it was a very active day. This system gives me a prediction of when the quake field is activated. 

Figure 1. Quake Field activity in Lexington, MA

How much energy is available from Quake Field?

A single magnitude 9 earthquake releases approximately 3.9e+22 Joules of seismic moment energy (Mo). Much of this energy is dissipated at the epicenter, but approximately 1.99e+18 Joules is radiated as seismic waves through the planet. To put this in perspective, you could power the whole United States for 7.1 days with this radiated energy. The radiated energy is equivalent to 15,115 million gallons of gasoline - just from a single large earthquake. 
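These figures are easy to sanity-check with a few lines of Python. This is a back-of-the-envelope sketch: the average US power draw of ~3.3 TW and the ~132 MJ energy content per gallon of gasoline are my own assumed constants, so the results land near, not exactly on, the numbers quoted above.

```python
# Back-of-the-envelope check of the radiated-energy figures.
# Assumed constants (not from the article): average total US power
# consumption ~3.3e12 W and gasoline energy content ~1.32e8 J/gallon.
E_RADIATED = 1.99e18      # J, radiated seismic energy of a magnitude 9 quake

US_POWER_W = 3.3e12       # W, assumed average US energy consumption rate
J_PER_GALLON = 1.32e8     # J, assumed energy content of one gallon of gasoline

days = E_RADIATED / US_POWER_W / 86400.0            # seconds -> days
million_gallons = E_RADIATED / J_PER_GALLON / 1e6

print("%.1f days of US consumption" % days)
print("%.0f million gallons of gasoline" % million_gallons)
```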

The radiated energy is released as waves from the epicenter of a major earthquake and propagates outward as shear waves (S waves). In the case of compressional waves (P waves), the energy radiates from the focus under the epicenter and travels all the way through the globe. Figure 2 illustrates these two primary energy transfer mechanisms. Note that we don't need to build any transmission network to transfer this energy, so the capital cost would be very small.  

Figure 2. Energy Transfer by Radiated Waves

Magnitude 2 and smaller earthquakes occur several hundred times a day worldwide. Major earthquakes, greater than magnitude 7, happen more than once per month. "Great earthquakes", magnitude 8 and higher, occur about once a year.

The real challenge has been that we haven't had a technology to harvest this huge untapped energy - until today.  

Introducing Quake Field Generator

The following introduction explains the operating principles of quake field generator (QFG) technology.

Using quake field theory and the seismic sensor data it is now possible to predict accurately when the S and P waves will arrive at any location on Earth. The big problem has been finding an efficient method to convert the energy of these waves into electricity. 
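As an illustration of the prediction step, here is a tiny sketch (my own simplified model, not the QFG software) that estimates P and S wave arrival times at a site from the epicentral distance, using assumed round-number average velocities:

```python
# Simplified arrival-time estimate. The average velocities are assumed
# round numbers; real seismic velocities vary with depth and rock type.
VP_KM_S = 8.0   # assumed average P-wave (compressional) velocity, km/s
VS_KM_S = 4.5   # assumed average S-wave (shear) velocity, km/s

def arrival_times(distance_km):
    """Return (P-wave, S-wave) travel times in seconds for a given distance."""
    return distance_km / VP_KM_S, distance_km / VS_KM_S

tp, ts = arrival_times(8000.0)   # e.g. a quake 8000 km away
print("P wave arrives in %.0f s, S wave in %.0f s" % (tp, ts))
```

The P wave always arrives first, which is what gives the generator time to tune itself before the slower, more energetic S wave hits.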

A triboelectric nanogenerator (TENG) is an energy harvesting device that converts external mechanical energy into electricity through a combination of the triboelectric effect and electrostatic induction.

Ever since the first report of the TENG in January 2012, the output power density of TENGs has been improved by five orders of magnitude within 12 months. The area power density reaches 313 W/m2, the volume density reaches 490 kW/m3, and a conversion efficiency of ~60% has been demonstrated. Besides this unprecedented output performance, the technology also has a number of other advantages, such as low manufacturing and fabrication cost, excellent robustness and reliability, and environmental friendliness.

The Liberal Media outlets have totally misunderstood the "clean coal technology" that is the cornerstone of President Trump's master plan for energy independence. Graphene is coal, just in a different molecular configuration. Graphene is one of the materials exhibiting a strong triboelectric effect. With recent advances in 3D printing technology it is now feasible to mass-produce low-cost triboelectric nanogenerators. Graphene is now commercially available for most 3D printers.

The geometry of the Quake Field Generator is based on fractals, minimizing the size of the resonant transducer. My prototype consists of 10,000 TENG elements organized into a fractal shape. In this prototype version, which I have been working on for the last 18 months, I have also implemented an automated tuning circuit that uses flux capacitors to maximize the energy capture at the resonance frequency. This brings the efficiency of the QFG to 97.8% - I am quite pleased with this latest design.

Figure 3 shows my current Quake Field Generator prototype - this is a 10 kW version. It has four stacks of TENG elements. Due to the high efficiency of these elements the need for ventilation is quite minimal.

Figure 3. Quake Field Generator prototype - 10 kW version

So what does this news mean to an average American?

The Quake Field Generator will be fully open source technology that will create millions of new jobs in the U.S. energy market. It leverages our domestic coal resources to build TENG devices from graphene (aka "clean coal").  

A simple 10 kW generator can be 3D printed in one day and mounted next to the power distribution panel in your home. The only requirements are that the unit must have a connection to ground to harvest the quake field energy, and you need to use a professional electrician to make the connection to your home circuit. 

I have been running such a DIY 10 kW generator for over a year. So far I have been very happy with the performance of this Quake Field Generator. Once I finalize the design, my plan is to publish the software, circuit design, transducer STL files, etc. on GitHub.

Let me know if you are interested in QFG technology - happy April 1st.  



Sunday, January 29, 2017

Amazon Echo - Alexa skills for ham radio

Demo video showing a proof of concept Alexa DX Cluster skill with remote control of Elecraft KX3 radio. 


According to a Wikipedia article Amazon Echo is a smart speaker developed by Amazon. The device consists of a 9.25-inch (23.5 cm) tall cylinder speaker with a seven-piece microphone array. The device connects to the voice-controlled intelligent personal assistant service Alexa, which responds to the name "Alexa".  The device is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic and other real time information. It can also control several smart devices using itself as a home automation hub.

Echo also has access to skills built with the Alexa Skills Kit. These are 3rd-party developed voice experiences that add to the capabilities of any Alexa-enabled device (such as the Echo). Examples of skills include the ability to play music, answer general questions, set an alarm, order a pizza, get an Uber, and more. Skills are continuously being added to increase the capabilities available to the user.

The Alexa Skills Kit is a collection of self-service APIs, tools, documentation and code samples that make it fast and easy for any developer to add skills to Alexa. Developers can also use the "Smart Home Skill API", a new addition to the Alexa Skills Kit, to easily teach Alexa how to control cloud-controlled lighting and thermostat devices. A developer can follow tutorials to learn how to quickly build voice experiences for their new and existing applications.

Ham Radio Use Cases 

For ham radio purposes, Amazon Echo and the Alexa service create a whole new set of opportunities to automate your station and build new audio experiences.

Here is a list of ideas for what you could use Amazon Echo for:

- listen to ARRL podcasts
- practice Morse code or ham radio examination questions
- check space weather and radio propagation forecasts
- memorize Q codes (QSL, QTH, etc.)
- check call sign details
- use APRS to locate a mobile ham radio station

I started experimenting with the Alexa Skills APIs, mostly using Python to create the programs. One of the ideas I had was to get Alexa to control my Elecraft KX3 radio remotely. To make the skill more useful I built some software to pull the latest list of spots from a DX Cluster and use those to set the radio to a spotted frequency, to listen to some new station or country on my bucket list.

Alexa Skill Description

Imagine if you could listen to your radio station from anywhere just by saying the magic words "Alexa, ask DX Cluster to list spots."

Alexa would then go to a DX Cluster, find the latest spots on SSB (or CW) and allow you to select the spot you want to follow. By just saying "Select seven", Alexa would set your radio to that frequency and start playing the audio.

Figure 2.  Alexa DX Cluster Skill output 

System Architecture 

Figure 3 below shows all the main components of this solution. I have a Thinkpad X301 laptop connected to the Elecraft KX3 radio with a KXUSB serial cable, using the built-in audio interface. The X301 is running several processes: one for recording the audio into MP3 files, hamlib rigctld to control the radio, and a web server that allows the Alexa skill to control the frequency and retrieve the recorded MP3 files.

I implemented the Alexa skill "DX Cluster" using the Amazon Web Services cloud. The main services are AWS API Gateway and AWS Lambda.

The simplified sequence of events is shown in the figure below:

1. User says "Alexa, ask DX Cluster to list spots". The Amazon Echo device sends the voice recording to the Amazon Alexa service, which does the voice recognition.

2. Amazon Alexa determines that the skill is "DX Cluster" and sends a JSON formatted request to the configured endpoint in AWS API Gateway.

3. AWS API Gateway sends the request to AWS Lambda, which loads my Python software.

4. My "DX Cluster" software parses the JSON request and calls the "ListIntent" handler. If the data is not already loaded, it will make a web API request to pull the latest DX cluster spots. The software will then convert the text to SSML format for speech output and return the list of spots to the Amazon Echo device.

5. If the user says "Select One" (the top one on the list), the frequency of the selected spot is sent to the web server running on the X301 laptop. It changes the radio frequency using the rigctl command and then returns the URL of the latest recorded MP3. This URL is passed to the Amazon Echo device to start the playback.

6. The Amazon Echo device retrieves the MP3 file from the X301 web server and starts playing it.

Figure 3.  System Architecture
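To make step 4 of this flow concrete, here is a minimal sketch of parsing an Alexa-style JSON request, dispatching on the intent name, and building an SSML response. This is not the actual skill code: the intent name "ListIntent" comes from the description above, but the hard-coded spot list and helper names are hypothetical - the real skill pulls the spots from a DX cluster web API.

```python
import json

# Hypothetical spot list; the real skill fetches this from a DX cluster API.
SPOTS = [("JT1CO", 14205.0), ("VP8LP", 7188.0), ("ZL3X", 21022.5)]

def list_intent(slots):
    # Build an SSML answer enumerating the spots so the user can say "Select N".
    items = ", ".join("%d: %s on %.1f kilohertz" % (i + 1, call, freq)
                      for i, (call, freq) in enumerate(SPOTS))
    return "<speak>Latest spots: %s</speak>" % items

HANDLERS = {"ListIntent": list_intent}

def handle_request(event):
    # Parse the Alexa-style JSON request and dispatch on the intent name.
    intent = event["request"]["intent"]
    ssml = HANDLERS[intent["name"]](intent.get("slots", {}))
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "SSML", "ssml": ssml}}}

request = json.loads('{"request": {"intent": {"name": "ListIntent"}}}')
response = handle_request(request)
print(response["response"]["outputSpeech"]["ssml"])
```

In the real deployment this handler would run inside AWS Lambda, with API Gateway delivering the JSON request.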


As this is just a proof of concept, the software is still very fragile and not ready for publishing. The software is written in Python and makes heavy use of open source components, such as 

  • hamlib   - for controlling the Elecraft KX3 radio
  • rotter   - for recording MP3 files from the radio 
  • Flask    - Python web framework 
  • Boto3    - AWS Python libraries
  • Zappa    - serverless Python services

Once the software is a bit more mature I could post it on GitHub if there is any interest from the ham radio community.  

Mauri AG1LE 

Saturday, February 6, 2016

KX3 Remote Control and audio streaming with Raspberry Pi 2


I wanted to control my Elecraft KX3 transceiver remotely using my Android phone. A quick Internet search yielded this site by Andrea IU4APC. His KX3 Companion application for Android allows remote control using a Raspberry Pi 2, and he also has links to an audio streaming application called Mumble.

I did a quick ham shack inventory of hardware and software and realized that I already had everything required for this project.

A short video showing how this works is on YouTube:

KX3, Raspberry Pi2 and Android Phone connected together over Wifi.


Elecraft KX3
Elecraft KXUSB Serial Cable for KX3
Raspberry Pi 2 with Raspbian Linux. I have a 32 GB SD memory card; 8 GB should also work.
Behringer UCA202 USB Audio Interface  and audio cables
Android Phone  (I have OnePlus One)


Following the instructions, I plugged the KXUSB serial cable into the KX3 ACC1 port and into one of the two Raspberry Pi USB ports.

I installed ser2net with the following commands:

sudo apt-get update 
sudo apt-get install ser2net 

then I edited the /etc/ser2net.conf file:

sudo nano /etc/ser2net.conf 

and added the following line:

 7777:raw:0:/dev/ttyUSB0:38400 8DATABITS NONE 1STOPBIT

and saved the file by pressing CTRL+X and then Y

Then I restarted ser2net:

sudo /etc/init.d/ser2net restart 
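With ser2net running, the KX3's CAT interface is reachable as a raw TCP stream on port 7777. The KX3 uses a Kenwood-style CAT command set, and as a quick sanity check you can build and parse frequency commands in a few lines of Python. This is my own sketch, not part of the KX3 Companion setup; it only formats and parses the command strings - to actually talk to the radio you would send them over a socket to port 7777.

```python
# Sketch: format and parse Elecraft/Kenwood-style CAT frequency commands.
# "FA;" asks for the VFO A frequency; the radio answers e.g. "FA00014060000;".

def set_freq_cmd(hz):
    # 11-digit zero-padded frequency in Hz, terminated with ';'
    return "FA%011d;" % hz

def parse_freq_reply(reply):
    # "FA00014060000;" -> 14060000 (Hz)
    return int(reply[2:-1])

cmd = set_freq_cmd(14060000)
print(cmd)                            # the command you would send over TCP
print(parse_freq_reply("FA00014060000;"))
```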

Once done with the host I downloaded the KX3 Companion app (link here) on my Android phone and opened the app.

To enable the KX3 Remote functionality you have to edit 3 options (in the "Remote Settings" section). Check the "Use KX3Remote/Piglet/Pigremote" option.


Set your PC/Raspberry Pi IP address in the "KX3Remote/Piglet/Pigremote IP" option. The setup below assumes that your RPi and Android phone are connected to the same WiFi network.

In my case the RPi is using the WLAN0 interface connected to the WiFi router. The address depends on your local network configuration, and you can get the Raspberry Pi IP address using the command

ip addr show 

Set the chosen port number (7777) in the "KX3Remote/Piglet/Pigremote Port" option.

Now you can test the connection. By tapping the "ON" button in the top left corner you can see if the connection was successful. A message "Connected to Piglet/Pigremote" should show up at the bottom - see below:

If you are having problems with this, here are some troubleshooting ideas:

  • check the Raspberry Pi IP address again
  • check that the Raspberry Pi and Android phone are on the same WiFi network
  • check that your KX3 serial port is set to 38400 baud (this is the default in the KX3 Companion app) 

If everything works, you should be able to change the frequency and bands on the KX3 by tapping the Band+/Band- and Freq+/Freq- buttons in the app. The current KX3 frequency will be updated in the FREQUENCY field between the buttons as you turn the VFO on the KX3.


Plug the USB audio interface into a Raspberry Pi 2 USB port. In my case I used a Behringer UCA202, but there are many other alternatives available.

The audio server is called Mumble. This is a low latency Voice over IP (VoIP) server designed for the gaming community, but it works well for streaming audio from the KX3 to the Android phone and back. There is a great page that describes the installation in more detail.

I used the following commands to install the Mumble VoIP server:

   sudo apt-get install mumble-server
   sudo dpkg-reconfigure mumble-server

The last command will present you with a few options; set these however you would like Mumble to operate.

  • Autostart: I selected Yes 
  • High Priority: I selected Yes (This ensures Mumble will always be given top priority even when the Pi is under a lot of stress) 
  • SuperUser: Set the password here. This account will have full control over the server.

You need to know the IP address of the Raspberry Pi 2 when configuring the Mumble client. Write it down, as you will need it shortly. You can find it with:

ip addr show

You may want to edit the server configuration file. I didn't make any changes, but the installation page recommends changing the welcome text and server password. You can do it using this command:

sudo nano /etc/mumble-server.ini

Finally, you need to restart the server:

sudo /etc/init.d/mumble-server restart

Now that we have the mumble server running we need to install the Mumble client on Raspberry Pi 2. This can be done with this command:

sudo apt-get install mumble

Next you start the client application by typing:

mumble
This starts the mumble client. First you need to go through some configuration windows.

You need to have the USB audio interface input connected to the KX3 Phones output when going through the Mumble audio wizard. I turned the audio volume to approximately 30.

You need to select the USB audio device as the input device. The default device is "Default ALSA device", which is the onboard audio chip. Click the Device drop-down list and select "SysDefault card - USB Audio Codec" as shown in the picture below.

The drop-down list might look different depending on your hardware configuration. Select the SysDefault USB device.

Once the Input and Output devices have been selected you can move forward with Next.

Next comes device tuning. I selected the longest delay for best sound quality.

Next comes volume tuning. Make sure that the KX3 audio volume is at least 30. You should see a blue bar moving in sync with the KX3 audio. Follow the instructions.

Next comes voice activity detection setting. Follow instructions.

Next comes quality selection. I selected high, as I am testing this on my local LAN.

Audio settings are now completed.

Next comes the server connection. Choose "Add New..." and give the IP address that you wrote down earlier. I gave the server the label "raspberrypi" and the username "pi". You don't have to change the port.

When you connect to the server you should have a view like this below.

The next step is to download a Mumble client on the Android phone and configure it.


I downloaded a free Mumble client called Plumble on my Android phone. You need to configure the Mumble server running on the Raspberry Pi 2 in the software. Once you open the Plumble client, tap the "+" sign in the top right corner.

I gave it the label "KX3" and the IP address of the Mumble server running on the Raspberry Pi 2. For the username I selected my ham radio call sign.

Since I did not configure any passwords on my server I left that field empty. Once the server has been added, you can try to connect to it.


If everything has gone well you should be able to connect to the Mumble VoIP server and hear a sound from your mobile phone.

On Raspberry Pi 2 you should see that another client "AG1LE"  has connected to the server. See example below: 


If you want to go beyond just listening to the KX3 and actually operate remotely, you need to configure your WiFi router to enable connections remotely over the Internet. Also, the USB audio interface needs to be connected to the microphone (MIC) input of the KX3 radio, and the KX3 must have VOX turned on to enable audio transmit.

Documenting these steps will take a bit more time, so I leave it for the next session.

 Did you find these instructions useful?  Any comments or feedback? 

Mauri AG1LE

Sunday, December 27, 2015

TensorFlow: a new LSTM RNN based Morse decoder


In my previous post I created an experiment to train an LSTM recurrent neural network (RNN) to detect symbols in noisy Morse code. I continued the experiments, but this time I used the new TensorFlow open source library for machine intelligence. The flexible architecture of TensorFlow allows deploying computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.

TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.


I started with the TensorFlow MNIST example authored by Aymeric Damien. MNIST is a large database of handwritten digits that is commonly used for machine learning experiments and algorithm development. Instead of training an LSTM RNN model on handwritten characters, I created a Python script to generate a lot of Morse code training material. I downloaded ARRL Morse training text files and combined them into a large text file. From this text file the Python script generates properly formatted training vectors, over 155,000 of them. The software is available in IPython notebook format on GitHub.

The LSTM RNN model has the following parameters:

# Parameters
learning_rate = 0.001 
training_iters = 114000 
batch_size = 126

# Network Parameters
n_input = 1 # each Morse element is normalized to dit length 1 
n_steps = 32 # timesteps (training material padded to 32 dit length)
n_hidden = 128 # hidden layer num of features 
n_classes = 60 # Morse character set 
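To give an idea of how the training vectors relate to these parameters, here is a small sketch (my own illustration, not the actual generator script): each character is expanded into dit-length time steps and zero-padded to n_steps = 32, with n_input = 1 feature per step.

```python
import numpy as np

# Illustrative only: a tiny excerpt of the Morse code book.
MORSE = {'A': '.-', 'E': '.', 'S': '...', 'O': '---'}

def char_to_vector(ch, n_steps=32):
    # Key-down samples: a dit is 1 step, a dah is 3 steps, with a
    # 1-step element space after each symbol; zero-pad to n_steps.
    sig = []
    for sym in MORSE[ch]:
        sig += [1.0] * (1 if sym == '.' else 3)
        sig += [0.0]
    sig += [0.0] * (n_steps - len(sig))
    return np.array(sig).reshape(n_steps, 1)  # shape (n_steps, n_input)

v = char_to_vector('A')
print(v.shape)       # (32, 1)
print(int(v.sum()))  # 4 key-down steps: dit (1) + dah (3)
```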

The training takes approximately 15 minutes on my Thinkpad X301 laptop. The progress of the loss function and accuracy over the training is depicted in Figure 1 below. The final accuracy was 93.6% after 114,000 training samples.

Figure 1.  Training progress over time

I tested the model with generated data while gradually adding noise to the signals using the "sigma" parameter in the Python scripts. The results are below:

As can be seen above, at "sigma" level 0.2 the decoder starts to make a lot of errors.


The software learns the Morse code by going through the training vectors multiple times. By going through 114,000 characters in training, the model achieves 93.6% accuracy. I did not try to optimize anything; I just used the reference material that came with the TensorFlow library. This experiment shows that it is possible to build an intelligent Morse decoder that learns the patterns from the data, and also to scale up to more complex models with better accuracy and better tolerance for QSB and noisy signals.

TensorFlow proved to be a very powerful new machine learning library that was relatively easy to use. The biggest challenge was figuring out what data formats to use with the various API calls. Due to the complexity and richness of the TensorFlow library I am fairly sure that much can be done to improve the efficiency of this software. As TensorFlow has been designed to work on a desktop, server, tablet, or even a mobile phone, this opens new possibilities to build an intelligent, learning Morse decoder for different platforms.

 73 Mauri AG1LE

Tuesday, November 24, 2015

Experiment: Deep Learning algorithm for Morse decoder using LSTM RNN


In my previous post I created a Python script to generate training material for neural networks. The goal is to test how well modern deep learning algorithms work at decoding noisy Morse signals with heavy QSB fading.

I did some research on various frameworks and found this article from Daniel Hnyk. My requirements were quite similar - full Python support, LSTM RNNs built in, and a simple interface. He had selected Keras, which is available on GitHub. There is a mailing list for Keras users that is fairly active and quite useful for finding support from other users. I installed Keras on my Linux laptop, and using Jupyter interactive notebooks it was easy to start experimenting with various neural network configurations.


Using various sources and the above mailing list I came up with the following experiment. I have uploaded the Jupyter notebook file to GitHub in case the reader wants to replicate the experiment.

The source code and printed output text are shown below in courier font, and I have added some commentary as well as the graphs as pictures.

In [12]:
#!/usr/bin/env python
#  - Morse Encoder to generate training material for neural networks
# Generates raw signal waveforms with Gaussian noise and QSB (signal fading) effects
# Provides also the training target variables in separate columns. Example usage:
# WPM= 40 # speed 40 words per minute
# Tq = 4. # QSB cycle time in seconds (typically 5..10 secs)
# sigma = 0.02 # add some Gaussian noise
# from matplotlib.pyplot import  plot,show,figure,legend
# from numpy.random import normal
# figure(figsize=(12,3))
# lb1,=plot(P.t,P.sig,'b',label="sig")
# lb2,=plot(P.t,P.dit,'g',label="dit")
# lb3,=plot(P.t,P.dah,'g',label="dah")
# lb4,=plot(P.t,P.ele,'m',label="ele")
# lb5,=plot(P.t,P.chr,'c',label="chr")
# lb6,=plot(P.t,P.wrd,'r*',label="wrd")
# legend([lb1,lb2,lb3,lb4,lb5,lb6])
# show()
# P.to_csv("MorseTest.csv")
# Copyright (C) 2015   Mauri Niininen, AG1LE
# is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with  If not, see <>.

import numpy as np
import pandas as pd
from numpy import sin,pi
from numpy.random import normal
pd.options.mode.chained_assignment = None  #to prevent warning messages

Morsecode = {
 '!': '-.-.--',
 '$': '...-..-',
 "'": '.----.',
 '(': '-.--.',
 ')': '-.--.-',
 ',': '--..--',
 '-': '-....-',
 '.': '.-.-.-',
 '/': '-..-.',
 '0': '-----',
 '1': '.----',
 '2': '..---',
 '3': '...--',
 '4': '....-',
 '5': '.....',
 '6': '-....',
 '7': '--...',
 '8': '---..',
 '9': '----.',
 ':': '---...',
 ';': '-.-.-.',
 '<AR>': '.-.-.',
 '<AS>': '.-...',
 '<HM>': '....--',
 '<INT>': '..-.-',
 '<SK>': '...-.-',
 '<VE>': '...-.',
 '=': '-...-',
 '?': '..--..',
 '@': '.--.-.',
 'A': '.-',
 'B': '-...',
 'C': '-.-.',
 'D': '-..',
 'E': '.',
 'F': '..-.',
 'G': '--.',
 'H': '....',
 'I': '..',
 'J': '.---',
 'K': '-.-',
 'L': '.-..',
 'M': '--',
 'N': '-.',
 'O': '---',
 'P': '.--.',
 'Q': '--.-',
 'R': '.-.',
 'S': '...',
 'T': '-',
 'U': '..-',
 'V': '...-',
 'W': '.--',
 'X': '-..-',
 'Y': '-.--',
 'Z': '--..',
 '\\': '.-..-.',
 '_': '..--.-',
 '~': '.-.-'}

def encode_morse(cws):
    s = ''
    for chr in cws:
        try: # try to find CW sequence from Codebook
            s += Morsecode[chr]
            s += ' '
        except KeyError:
            if chr == ' ':
                s += '_'
            else:
                print "error: '%s' not in Codebook" % chr
    return ''.join(s)

def len_dits(cws):
    # length of string in dit units, include spaces
    val = 0
    for ch in cws:
        if ch == '.': # dit len + el space 
            val += 2
        if ch == '-': # dah len + el space
            val += 4
        if ch==' ':   #  el space
            val += 2
        if ch=='_':   #  el space
            val += 7
    return val

def signal(cw_str,WPM,Tq,sigma):
    # for given CW string i.e. 'ABC ' 
    # return a pandas dataframe with signals and  symbol probabilities
    # WPM = Morse speed in Words Per Minute (typically 5...50)
    # Tq  = QSB cycle time (typically 3...10 seconds) 
    # sigma = adds gaussian noise with standard deviation of sigma to signal
    cws = encode_morse(cw_str)
    #print cws
    # calculate how many milliseconds this string will take at speed WPM
    ditlen = 1200/WPM # dit length in msec, given WPM
    msec = ditlen*(len_dits(cws)+7)  # reserve +7 for the last pause
    t = np.arange(msec)/ 1000.       # time array in seconds
    ix = range(0,msec)               # index for arrays

    # Create a DataFrame and initialize
    col =["t","sig","dit","dah","ele","chr","wrd","spd"]
    P = pd.DataFrame(index=ix,columns=col)
    P.t = t              # keep time  
    P.sig=np.zeros(msec) # signal stored here
    P.dit=np.zeros(msec) # probability of 'dit' stored here
    P.dah=np.zeros(msec) # probability of 'dah' stored here
    P.ele=np.zeros(msec) # probability of 'element space' stored here
    P.chr=np.zeros(msec) # probability of 'character space' stored here
    P.wrd=np.zeros(msec) # probability of 'word space' stored here
    P.spd=np.ones(msec)*WPM #speed stored here 

    #pre-made arrays with multiple(s) of ditlen
    z = np.zeros(ditlen) 
    z2 = np.zeros(2*ditlen)
    z4 = np.zeros(4*ditlen)
    dit = np.ones(ditlen)
    dah = np.ones(3*ditlen)
    # For all dits/dahs in CW string generate the signal, update symbol probabilities
    i = 0
    for ch in cws:
        if ch == '.':
            dur = len(dit)
            P.sig[i:i+dur] = dit
            P.dit[i:i+dur] = dit
            i += dur
            P.sig[i:i+dur] = z
            P.ele[i:i+dur] = np.ones(dur)
            i += dur

        if ch == '-':
            dur = len(dah)
            P.sig[i:i+dur] = dah
            P.dah[i:i+dur]=  dah
            i += dur            
            P.sig[i:i+dur] = z
            P.ele[i:i+dur] = np.ones(dur)
            i += dur

        if ch == ' ':
            dur = len(z2)
            P.sig[i:i+dur] = z2
            P.chr[i:i+dur]=  np.ones(dur)
            i += dur
        if ch == '_':
            dur = len(z4)
            P.sig[i:i+dur] = z4
            P.wrd[i:i+dur]=  np.ones(dur)
            i += dur
    if Tq > 0.:  # QSB cycle time impacts signal amplitude
        qsb = 0.5 * sin((1./float(Tq))*t*2*pi) +0.55
        P.sig = qsb*P.sig
    if sigma >0.:
        P.sig += normal(0,sigma,len(P.sig))
    return P
In [13]:
print ('MorseEncoder started')
%matplotlib inline
from matplotlib.pyplot import  plot,show,figure,legend, title
from numpy.random import normal
WPM= 40
Tq = 1.8 # QSB cycle time in seconds (typically 5..10 secs)
sigma = 0.01 # add some Gaussian noise
P = signal('QUICK',WPM,Tq,sigma)
figure(figsize=(12,3))
plot(P.t, P.sig)
title("QUICK in Morse code - (c) 2015 AG1LE")
print ('MorseEncoder finished. %d datapoints created' % len(P.sig)) 

MorseEncoder started

The Jupyter notebook will plot this graph that basically shows the text 'QUICK' converted to noisy signal with strong QSB fading.  This signal goes down close to zero between letters C and K as you can see below.  

Figure 1.  The training signal containing noise and QSB fading
The next  section of the code imports some libraries (including Keras) that is used for Neural Network experimentation. I am also preparing the data to the proper format that Keras requires. 

MorseEncoder finished. 1950 datapoints created
In [14]:
# Time Series Testing - Morse case
import keras.callbacks
from keras.models import Sequential  
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM

import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Data preparation 
# use windows of 1000 input samples to predict the next 100 samples;
# this yields nb_samples (850) training windows
samples = 1950
examples = 1000
y_examples = 100

x = np.linspace(0,1950,samples)
nb_samples = samples - examples - y_examples
data = P.sig

# prepare input for RNN training  - 1 feature
input_list = [np.expand_dims(np.atleast_2d(data[i:examples+i]), axis=0) for i in xrange(nb_samples)]
input_mat = np.concatenate(input_list, axis=0)
plot(x, data)   # training input: the noisy, fading signal
plot(x, P.dit)  # target: the 'dit' label channel
title("training input and target data")

This graph shows the training data (the noisy, fading signal) and the target data (I selected 'dits' in this example). This is just to verify that I have selected the right datasets.

Figure 2.  Training and target data 
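The sliding-window construction above can be hard to visualize, so here is a toy version with small, hypothetical sizes (5 input samples predicting the next 2) chosen just for illustration. Each window of past samples becomes one row shaped (1, examples), i.e. one timestep with `examples` features, which matches what the model code below expects:

```python
import numpy as np

data = np.arange(10, dtype=float)  # stand-in for P.sig
examples, y_examples = 5, 2        # hypothetical small window sizes
nb_samples = len(data) - examples - y_examples  # 3 windows

# input: each window of `examples` past samples, shaped (1, examples),
# stacked into (nb_samples, 1, examples)
input_list = [np.expand_dims(np.atleast_2d(data[i:examples + i]), axis=0)
              for i in range(nb_samples)]
input_mat = np.concatenate(input_list, axis=0)

# target: the `y_examples` samples immediately following each window
target_list = [np.atleast_2d(data[i + examples:examples + i + y_examples])
               for i in range(nb_samples)]
target_mat = np.concatenate(target_list, axis=0)

print(input_mat.shape)   # (3, 1, 5)
print(target_mat.shape)  # (3, 2)
```

With the real sizes (1950 datapoints, 1000-sample windows, 100-sample targets), the same construction yields the 850 training windows used below.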

In the following sections I prepare the training target ('dits') in the proper format and set up the neural network model. I am using an LSTM model with 300 hidden neurons. I have also defined a callback function to capture the loss data during training, so that I can plot the loss curve and see the training progress.

In [15]:
# prepare target - the 'dit' label channel
ydata = P.dit
target_list = [np.atleast_2d(ydata[i+examples:examples+i+y_examples]) for i in range(nb_samples)]
target_mat = np.concatenate(target_list, axis=0)

# set up a model
trials = input_mat.shape[0]
features = input_mat.shape[2]
hidden = 300

model = Sequential()
model.add(LSTM(input_dim=features, output_dim=hidden, return_sequences=False))
model.add(Dense(input_dim=hidden, output_dim=y_examples))
model.compile(loss='mse', optimizer='rmsprop')

# Callback to capture losses
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

# Train the model
history = LossHistory()
model.fit(input_mat, target_mat, nb_epoch=100, callbacks=[history])

# Plot the loss curve
plt.plot(history.losses)
title("training loss")

Here I start the training. I selected 100 epochs, which means the software goes through the training material 100 times. As you can see this goes very quickly; with a larger model or larger datasets the training might take minutes to hours per epoch. We have a very small model and a small dataset here.

Epoch 1/100
850/850 [==============================] - 0s - loss: 0.1050     
Epoch 2/100
850/850 [==============================] - 0s - loss: 0.0927     
Epoch 3/100
850/850 [==============================] - 0s - loss: 0.0870     
...
Epoch 99/100
850/850 [==============================] - 0s - loss: 0.0163     
Epoch 100/100
850/850 [==============================] - 0s - loss: 0.0164     

The following graph shows the training loss during the training process. This gives you an idea of whether the training is progressing well or whether there is a problem with the model or the parameters.
Figure 3.  Training loss curve

In [16]:
# Use training data to check prediction
predicted = model.predict(input_mat)
In [17]:
# Plot original data (green) and predicted data (red)
lb2, = plot(x, data, 'g', label="original")
lb3, = plot(range(examples, examples+nb_samples), predicted[:,1], 'r', label="predicted")
legend(handles=[lb2, lb3])
title("training vs. predicted")

In this section I check the model prediction. Since I am using the training material, this should show a good result if the training was successful. As you can see from Figure 4 below, the predicted graph (red) is aligned with the 'dits' in the training signal (green), despite the QSB fading and noise in the signal.
Figure 4.  Training vs. predicted graph

In the following section I create another Morse signal, this time with the text 'KCIUQ' but using the same noise, QSB and speed parameters. I will use this signal to validate how well the model has generalized the 'dit' concept.

In [18]:
# Let's change the input signal: instead of QUICK we have KCIUQ in Morse code
P = signal('KCIUQ', WPM, Tq, sigma)
data = P.sig
plot(x, data)  # plot the validation signal

# prepare input - 1 timestep, `examples` features
input_list = [np.expand_dims(np.atleast_2d(data[i:examples+i]), axis=0) for i in range(nb_samples)]
input_mat = np.concatenate(input_list, axis=0)

Here is the generated validation Morse signal. It has the same letters as before, but in reverse order. Can you read the letters 'KCIUQ' from the graph below?

Figure 5.  Validation Morse signal
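For reference when reading the figure, these are the standard International Morse patterns for the letters in the experiment, shown here as a small lookup sketch (the `to_morse` helper is hypothetical, added just for illustration):

```python
# International Morse patterns for the letters used in the experiment
MORSE = {'Q': '--.-', 'U': '..-', 'I': '..', 'C': '-.-.', 'K': '-.-'}

def to_morse(text):
    """Return the Morse pattern for each letter, space-separated."""
    return ' '.join(MORSE[ch] for ch in text)

print(to_morse('KCIUQ'))  # -.- -.-. .. ..- --.-
```

Comparing these patterns against the envelope of the plotted signal makes the letter boundaries easier to spot.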

In this section I use the validation signal above to create a prediction and plot the results.

In [19]:
predicted = model.predict(input_mat)
plt.plot(x, data, 'g')  # the actual validation signal
plt.plot(range(examples, examples+nb_samples), predicted[:,1], 'r')

As you can see from the graph below, the predicted 'dit' symbols (red) don't really line up with the actual 'dits' in the signal (green). This is not a surprise to me. To build a good model that can generalize, you need a lot of training material (typically millions of datapoints), and the model needs enough neural nodes to capture the details of the underlying signals.
In this simple experiment I had only 1950 datapoints and 300 hidden nodes, and there are only 8 'dit' symbols in the training material. Learning CW skills well requires a lot more material and many repetitions, as anyone who has gone through the process can testify. The same principle applies to neural networks.
Figure 6.  Validation test 
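One standard way to catch this kind of overfitting earlier is to hold out part of the training windows for validation during training. A minimal sketch of the split, using random stand-in arrays with the same shapes as the `input_mat`/`target_mat` above (the arrays and the 80/20 ratio are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(850, 1, 1000))  # stand-in for input_mat
y = rng.normal(size=(850, 100))      # stand-in for target_mat

# hold out the last 20% of windows for validation
split = int(0.8 * len(X))
X_train, X_val = X[:split], X[split:]
y_train, y_val = y[:split], y[split:]

print(X_train.shape, X_val.shape)  # (680, 1, 1000) (170, 1, 1000)
```

In Keras the same effect can be had by passing `validation_split=0.2` to `model.fit(...)`, which reports `val_loss` alongside the training loss; a training loss that keeps falling while the validation loss stalls or rises is the classic overfitting signature.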


In this experiment I built a proof of concept to test whether Recurrent Neural Networks (especially the LSTM variant) could learn to detect symbols in noisy Morse code with deep QSB fading. This experiment may contain errors and misunderstandings on my part, as I have only had a few hours to play with the Keras Neural Network framework. The concept itself also still needs more validation, as I may have used the framework incorrectly.

I think the results look quite promising. In only 100 epochs the RNN model learned the 'dits' from the noisy signal and was able to separate them from the 'dah' symbols. As the validation test shows, I overfitted the model to the small sample of training material used in the experiment. It will take much more training data and a larger, more complicated neural network to learn to generalize the symbols in Morse code. The training process may also need more computing capacity; a graphics card with a GPU might be beneficial to speed up training going forward.

Any comments or feedback?

Mauri AG1LE
