Ham Radio | gpredict

One thing that’s cool about the world of hamateur radio is that radio waves are all around us. You could be standing in the middle of a transmission and not even know it!

There’s so much that goes on behind the scenes, including transmissions to and from satellites all around the world. To track these satellites, you can use a cool piece of software called gpredict.

To install, download the .zip file from the GitHub repository.

Make sure you have the dependencies installed:

  sudo apt install libtool intltool autoconf automake libcurl4-openssl-dev
  sudo apt install pkg-config libglib2.0-dev libgtk-3-dev libgoocanvas-2.0-dev

Then, in the main folder, run ./autogen.sh (the bootstrap script included in the gpredict repository), which generates and runs the configure script.

Once that finishes, run make, then sudo make install to finish the install (only the install step needs root).
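Put together, the whole build looks like this. The bootstrap script is named autogen.sh in the gpredict GitHub repo; if your copy ships a differently named script, use that instead:

```shell
# Install the build dependencies listed above
sudo apt install libtool intltool autoconf automake libcurl4-openssl-dev \
    pkg-config libglib2.0-dev libgtk-3-dev libgoocanvas-2.0-dev

cd gpredict          # the folder unpacked from the .zip
./autogen.sh         # generates and runs the configure script
make                 # build as a normal user
sudo make install    # only installation needs root
```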

What’s super cool about gpredict is you can use it to remote control the RTL-SDR through gqrx, which I set up in my last blog post.

To set up the remote control, you gotta go to Edit > Preferences in gpredict. In the Interfaces tab, you’ll want to add a new interface.


Next, we need to make sure gqrx is listening on the right port for gpredict’s control commands.

In gqrx, go to Tools > Remote control settings and make sure you add your local IP address:


I put both…since I wasn’t entirely sure at this moment in time (1AM after a long day of packing).

You’ll also want to set up a home base in gpredict, so it can give you accurate predictions of when satellites will pass overhead. You can set that up in Edit > Preferences, in the Ground Stations tab. From there, add a new one with your location, and it’ll start predicting when the satellites will be in range.

Finally, you can connect gqrx and gpredict by 1) turning on the remote control in gqrx, and 2) opening the drop-down menu in the top-right corner of gpredict and clicking on the Radio Control option. That will bring up a window like so:


In gpredict, there will be a little radar-like display in the bottom left, and right above it, it’ll tell you which satellite will be closest to you, along with a countdown. If you find that satellite in the target settings above and click Track, gpredict will steer gqrx to that satellite’s frequency and continuously adjust it for the Doppler effect.
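The Doppler correction gpredict applies is just the classic shift formula, shift = f·v/c. Here’s a toy sanity check in shell; the numbers are illustrative (AO-91’s FM downlink sits at 145.960 MHz, and a low-earth-orbit satellite’s range rate can peak around 7 km/s):

```shell
# Toy Doppler-shift calculator: shift_hz = f0 * v / c
# f0 = transmit frequency in Hz, v = range rate in m/s, c in m/s
doppler_shift() {
    awk -v f0="$1" -v v="$2" 'BEGIN { printf "%.0f\n", f0 * v / 299792458 }'
}

doppler_shift 145960000 7000   # prints 3408 (about 3.4 kHz of shift)
```

So as the satellite passes over, the received frequency slides by several kilohertz, which is why gpredict has to keep retuning gqrx.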

In the settings (bottom right of the little window above), make sure to click Engage so gpredict will connect to gqrx and start remote controlling it. As you can see, satellite AO-91 was going to pass over my house 12 minutes after I took the screenshot.

Hearing the satellites AO-91 and SO-50 and catching some CW from them was awesome. It was especially fun to see and hear the satellites moving away, both in the tone I heard and in the signals on the waterfall. Pretty neat! Now I have to focus enough to actually start recording the CW that I hear, and start decoding it.

Until next time!


Ham Radio | Technician’s License

A few months ago, I presented the idea to Gector to set UTW up with ham radio. We put it on our list of projects to do, and left it on the back burner.

Two weeks ago, Gector’s dad ordered all three of us ham radios as a spontaneous surprise! He bought us small starter radios, Baofeng UV-5Rs.


They got here after a week, and we wanted to get our licenses before I headed to college, so we had a week to prepare.

There are three licenses you can test for: Technician, General, and Extra. Technician is the easiest, and it gives you basic operating privileges on the 2 meter, 6 meter, 10 meter, and 1.25 meter bands, among a few others. You can see the chart here.

I studied out of this manual:


And with these very helpful practice tests:


You can easily pass the Technician’s test with around 6-10 hours of studying, and very easily if you have a solid background in basic circuit theory.

To play with the radios we had while learning about the subject, we got Nagoya NA-771 antennas, which extend the Baofeng’s range, and programming cables to program channels into the radio!


To program the UV-5Rs, you use software called CHIRP, which you can download here.

sudo apt-add-repository ppa:dansmith/chirp-snapshots
sudo apt-get update
sudo apt-get install chirp-daily

Once that finishes, you can start CHIRP up! You’ll see an empty screen, like this:


Next, you click on the drop-down menu that says Radio. Make sure your radio is off, then go ahead and plug it into a USB port on your computer. Turn it on, and make sure it’s on a frequency that has no activity.

Click on the option Download From Radio, choose the USB port your radio is on, the brand and model, and then it’ll download the currently programmed channels from your radio.

The Baofeng UV-5R has 128 programmable channels, but I’m only using a few for now, since I haven’t found many repeaters to talk to people on (because I can’t until my callsign + license appear in the FCC database). I found my local NOAA (weather) channel and programmed that in, as well as the first five walkie talkie channels (which you can find here).

When I took my licensing test, a nice lady gave me a list of repeaters I was eligible to talk on once I had my call sign, so I programmed one of those in as well. My final setup looked like this:


The listing for the repeater I looked up included the tone it needs in order to receive me correctly, so in the Tone Mode section I selected Tone (as opposed to DTCS, among other options), and then entered 94.8 in that column.

Once that was finished, I selected the Radio tab once more, and hit Upload to Radio. Boom, I officially programmed my radio channels into it 🙂
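For reference, a CHIRP CSV export (File > Export) of a setup like mine looks roughly like this. The exact column set varies by CHIRP version, and the repeater row below is a made-up example, but the NOAA weather channels really do sit between 162.400 and 162.550 MHz, and FRS channel 1 is 462.5625 MHz:

```
Location,Name,Frequency,Duplex,Offset,Tone,rToneFreq,cToneFreq,DtcsCode,DtcsPolarity,Mode,TStep,Skip,Comment
0,NOAA,162.550000,,0.000000,,88.5,88.5,023,NN,NFM,5.00,,local weather
1,FRS1,462.562500,,0.000000,,88.5,88.5,023,NN,NFM,5.00,,walkie talkie ch 1
2,RPTR,146.940000,-,0.600000,Tone,94.8,94.8,023,NN,FM,5.00,,example repeater
```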

Gector, his dad, and I all passed our tests! It takes a little less than a week for our callsigns to appear in the FCC Licensing database, so to pass the time while we can’t transmit, Gector and I got these nifty little USB Software Defined Radio receivers.


There are some tutorials I kinda followed here, but I’ll walk you through what I did as well.

sudo apt-get update
sudo apt-get install libusb-1.0-0-dev

Then install the rtl-sdr drivers:

sudo apt-get install rtl-sdr

Once that was done, I ran rtl_test in the terminal and got this result:

Found 1 device(s):
  0:  T, , SN:

Using device 0: Generic RTL2832U OEM
usb_open error -3
Please fix the device permissions, e.g. by installing the udev rules file rtl-sdr.rules
Failed to open rtlsdr device #0.

Which was weird, because the USB stick had been pretty much plug-and-play when I tested it out on my tower.
But! Then I realized the stick had been plugged in since before the drivers were installed, so it couldn’t be read. I unplugged it and plugged it back in, and got the correct output:

Found 1 device(s):
  0:  Realtek, RTL2838UHIDIR, SN: 00000001

Using device 0: Generic RTL2832U OEM
Detached kernel driver
Found Rafael Micro R820T tuner
Supported gain values (29): 0.0 0.9 1.4 2.7 3.7 7.7 8.7 12.5 14.4 15.7 16.6 19.7 20.7 22.9 25.4 28.0 29.7 32.8 33.8 36.4 37.2 38.6 40.2 42.1 43.4 43.9 44.5 48.0 49.6
[R82XX] PLL not locked!
Sampling at 2048000 S/s.

Info: This tool will continuously read from the device, and report if
samples get lost. If you observe no further output, everything is fine.

Reading samples in async mode...

Which is the correct output! You can hit Ctrl+C to get out of the test.

Now we need software that can interface with and read this device. GQRX was the first one recommended to me, and you can download it for your OS here: http://gqrx.dk/download

Or if you’re on Linux, you can just run:

sudo apt install gqrx-sdr

Once that’s complete, you can set up the antenna! Either outside or inside works, though I tend to find that outside works better because there are fewer physical obstructions for the signals, which means less static.

Start gqrx, and fill out the basic info of your device.


Once it looks like that, hit OK, and voila! In the top left corner there’s a pause/play button, and when you press it, it should start looking something like this:


As you can see, there’s a lot of plain static here. You’ll know when you find signals; they look similar to or larger than this (tuned into 98.7 FM radio):


Or here, where the smaller signal on the left is picture data (you can tell by the sound), and the signal on the right is the NOAA channel:


Pretty neat, huh?

Because there’s a lot of static, you’ve gotta adjust the squelch, the gain, and the mode every so often to fit what kind of signal you’re picking up. I typically have the gain close to 10dB, because that’s where I can hear voice transmissions the clearest. Other transmissions come in super clear with the dB all the way up, so you can adjust that based on what signal you’re receiving. As for squelch, what actually is squelch?

Squelch is typically a circuit in the receiver that blocks incoming static and waits for a signal to come in stronger than a threshold you set. When a signal’s strength rises above that threshold, the squelch un-blocks the audio and lets you hear the signal coming through. That way you aren’t trying to listen through static the whole time you’re waiting to pick up a signal, which can get annoying.

If you’re tracking a very weak signal, though, leaving the squelch as low as possible helps, because a weak signal often won’t rise above the threshold and trigger the squelch to let you hear it.
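In code terms, squelch is nothing fancier than a threshold comparison on signal strength. A toy sketch (the dB numbers below are made up; real receivers also add hysteresis so the audio doesn’t flap open and shut right at the threshold):

```shell
# Toy squelch: pass audio only when the signal beats the threshold.
# Usage: squelch <signal_dB> <threshold_dB>  (integer dB values)
squelch() {
    if [ "$1" -gt "$2" ]; then
        echo "open"    # signal louder than the squelch setting: hear it
    else
        echo "muted"   # at or below threshold: static stays blocked
    fi
}

squelch -40 -50   # strong signal over a -50 dB squelch: prints "open"
squelch -60 -50   # weak signal: prints "muted"
```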

As for modes:


AM deals with, obviously, AM signals; the AM broadcast band runs from about 535 kHz to 1600 kHz. Narrow FM is the mode you’re most likely to hear voice transmissions on across the VHF and UHF bands above the broadcast range. WFM (mono) and WFM (stereo) are great for what we know as FM broadcast radio transmissions, which run from 88 to 108 MHz.
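Those ranges boil down to a small lookup. Here’s a sketch, with frequencies in kHz, using the rough cutoffs from the paragraph above (real-world boundaries are fuzzier than this):

```shell
# Suggest a demod mode from a frequency in kHz.
# AM broadcast: 535-1600 kHz; FM broadcast: 88-108 MHz;
# narrow FM for VHF/UHF voice above the broadcast band.
pick_mode() {
    f=$1
    if [ "$f" -ge 535 ] && [ "$f" -le 1600 ]; then
        echo "AM"
    elif [ "$f" -ge 88000 ] && [ "$f" -le 108000 ]; then
        echo "WFM"
    elif [ "$f" -gt 108000 ]; then
        echo "NFM"
    else
        echo "unknown"
    fi
}

pick_mode 1000     # AM broadcast band: prints "AM"
pick_mode 98700    # 98.7 FM: prints "WFM"
pick_mode 162550   # NOAA weather: prints "NFM"
```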

LSB and USB have to do with SSB, Single Sideband, one of the best-known modes of voice transmission on the High Frequency bands. LSB stands for Lower Sideband, while USB stands for Upper Sideband. Wikipedia has a great picture demonstrating the difference between the two:


The carrier is the main signal, with the LSB just below it in frequency and the USB just above. Wherever you’re listening, you can switch between the two modes to see which one sounds better/clearer on your receiver.

CW stands for “continuous wave”, which in practice means International Morse Code. Again, the -L and -U stand for the Lower and Upper sidebands of CW. I’ve come across more CW in the narrow FM mode than in the CW modes, so I’m not sure what’s up with that. I’ll have to play around with it some more to see if there’s a huge difference.

And there you have it! Changing up the frequency, the gain, the modes, and the squelch can help you tune your antennas to hear the best stuff. If you’re ever curious about what certain signals sound like, Gector found this great resource to start identifying the different signals and what they sound like: https://www.nonstopsystems.com/radio/radio-sounds-orig.html

I’m hoping to get these antennas somewhere higher up so I can receive better signals, but with moving into college this week it’s going to have to wait a little bit. I should have my call sign in these next few days as well, so stay tuned! (no pun intended…maybe.)

Here is a great tutorial on the rest of what you can do with the RTL-SDR USB.

happy hamming!


When Deja-Dup Failed Me

Since I started using Linux 2-ish years ago, I have messed up the partitions on my laptop multiple times trying to fiddle with stuff. Through that, I very quickly learned the lesson of computer backups.

Due to messing up the partitions on my disk (I think there were 12 random partitions), I’ve been wanting to do a clean wipe and reinstall *only* ubuntu as my main operating system, deleting all other partitions.

I rebooted my laptop yesterday, and realized I had moved my system configuration file folder, which ruined all the symlinks I had set up. I moved that folder back to where the symlinks were pointing to, but everything derped and started malfunctioning. I had just backed up my laptop a few days prior, so I figured this was the defining moment to wipe everything and restore from backup.

Good hecking heck.

I clean installed, wiped everything, got my backups all ready to go, launched deja-dup, and attempted to restore from backup. I kept getting error after error saying it could not find the backup to restore from. This was because, instead of backing up to the same destination folder every time, I had started a new backup folder named with the date of each backup. Deja-dup expects all its backups in a single folder, and it couldn’t make sense of my many separate backup folders. Apparently that’s not how you’re supposed to use deja-dup, haha.

I consulted the trusty UTW systemadmin channel we have and, fortunately, uelen was still up, and with his help we figured out how to retrieve the files from the apparently un-readable backup.

Deja-dup stores its backups as incremental .tar files, so as long as we could unpack those, we could get my files back. But doing that by hand was a pain... until we found this: https://wiki.gnome.org/Apps/DejaDup/Help/Restore/WorstCase

Deja-dup uses a package called duplicity, which you can conveniently use on the command line as well!

I made a directory to restore to:

mkdir -p /tmp/restore

Then ran this command:

duplicity --gio --no-encryption file:///media/thallia/name-of-backup-drive /tmp/restore

and boom. After a few minutes of waiting, all my files were restored from backup to /tmp/restore. 🙂

I moved them all over to my current home directory, re-symlinked, installed some new programs, and now my laptop is working almost like a charm again!

There are a few things I still need to install and reconfigure, but I’m really happy I got this done and out of the way before I move into college this coming week.

Lesson reinforced: ALWAYS have a backup of your computer before doing *anything* to the hard disk.


Ubuntu 18.04 | Upgrades, install scripts, and more

Today I ran lsb_release -a on my laptop and tower, and to my astonishment I was very behind on the upgrades.

My laptop was running Ubuntu 17.10, and my tower was running 16.04. Jeez.

Since 17.10 was the first Ubuntu using GNOME instead of Unity, I was able to upgrade from the command line straight to 18.04. Before that, I backed up both of my computers to be safe.

I ran all these:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt autoremove
sudo do-release-upgrade

Ubuntu will ask you a few questions and then download everything. After a reboot, my laptop worked like a charm!

My tower was another story, however. I ran that same list of commands, but it didn’t recognize that 18.04 or 17.10 were available to upgrade to, so I ended up just doing a clean install of Ubuntu 18.04, then running Gector’s cool new install script for new systems. I plan on writing a script similar to it, with my configurations and most-used software of course.

Once I clean installed, I had to use xrandr to tell the computer that my monitors are stacked on top of each other. When I tried xrandr --output DVI-D-1 --above HDMI-1, my entire screen froze, and the bottom monitor shut off.

Super weird.

I realized that my GTX 1050 Ti drivers weren’t installed on this clean install, so I figured that was the next place to go.

Running ubuntu-drivers devices listed my graphics card and the recommended driver that went with it. Then I executed sudo ubuntu-drivers autoinstall and let it install all the drivers I needed.

After a reboot, I went to arandr this time and created my screen configuration, and boom! It worked 😀

The next steps are to go through the main configurations I have for tmux, vim, i3, polybar, and fish, then make copies of those configurations and edit them for my tower computer. You can see my collection of configurations here: https://gitlab.com/thallia/dotconfig

After figuring out all the configurations, I wrote a bash new_install.sh script, which you can find in my dotconfig repository linked above. This installs all the general programs I use and configures the new system to my keybindings, vim config, and tmux keybindings so everything can be used like normal. That way every time I have a new system, I don’t have to spend an hour re-configuring it to my preferences.
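The bones of a script like that are pretty simple: a package list, and a helper that symlinks configs into place. This is a sketch with placeholder names, not my actual dotconfig layout:

```shell
#!/bin/sh
# Sketch of a new-machine setup script: install packages, then symlink
# dotfiles from a repo checkout into place. Names are placeholders.

PACKAGES="vim tmux i3 fish"
DOTFILES=${DOTFILES:-$HOME/dotconfig}

install_packages() {
    # shellcheck disable=SC2086  # we want word splitting here
    sudo apt install -y $PACKAGES
}

# Symlink a config into place, backing up anything already there.
link_config() {
    src=$1 dest=$2
    [ -e "$dest" ] && mv "$dest" "$dest.bak"
    mkdir -p "$(dirname "$dest")"
    ln -s "$src" "$dest"
}

# e.g.: link_config "$DOTFILES/vimrc" "$HOME/.vimrc"
```

The backup step in link_config is what saves you when a fresh system already shipped a default config in that spot.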

To test the script, I installed VirtualBox, a virtual machine program for Linux, Mac, and Windows.

After installing the .deb package with sudo dpkg -i /path/to/virtualbox/file.deb, I launched the software and answered the basic questions for making a virtual machine. Then it asked for an optical drive, and I had no clue what to give it.

Apparently the virtual optical drive takes either a bootable USB with the operating system on it, or a .iso image file of the operating system. So I went and downloaded Ubuntu 18.04 Bionic Beaver.

When I tried loading the vm again, it gave me this error:

this kernel requires an x86-64 CPU, but only detects an i686 CPU, unable to boot

That made me grumpy.
But I sucked it up, went to Stack Overflow, and found that if you set your virtual machine to 2GB of disk space (which I did), you can’t support the 64-bit versions. So clearly the way to go was to download the 32-bit version of Ubuntu 16.04. If that didn’t work, I might as well try changing the settings.

While waiting for the ubuntu torrent to download, I searched around and attempted this solution, but Stack Overflow said that if that still doesn’t work, then my hardware doesn’t support it. *tears*

Wow. So I tried the new 32-bit version, and a critical system failure occurred. Great. Then I remembered my Mac laptop, which gloriously happens to work with everything!

I set up VirtualBox there with all the same settings, loaded the .iso, and boom: Ubuntu 18.04 (64-bit) worked wonderfully. Aside from the fact that it seemed to freeze. That can happen if you don’t give the VM enough space, so I edited some of the storage settings and rebooted the virtual machine.

That fixed the freezing 🙂 I went to my gitlab, downloaded my new install script, and ran it.

So many errors 😦

But that’s cool! I did pretty well for scripting the whole thing. I just got some package names wrong, so I went and looked them up, fixed them, pushed to gitlab, and tried again.

After debugging for a few, it finally worked! There are two things still left to debug: installing rofi manually, and figuring out why the script won’t cd into a YCM directory.

Other than that, I’d say this project was successful.

Until next time! 😀


Filesharing with Rsync

For the past month or so, I’ve been attempting to find a way to file share between my main laptop and tower computer, with my server as the middle-man between the two that holds the main file repo. I wanted it to be the kind of file-tracking system where if one file is changed on one system, it’ll sense it and update the others.

At first I considered a private Git repository, but was unsure about the security and also the fact that I’d have to pay more money for private repositories. Ugh.

Then I considered Samba, but it seemed a little over the top for what I was thinking.
Gector said I should consider making my own, but I don’t feel experienced enough to take that on as a full project right before I leave for college, when my own design could fail.

To backtrack a little: I’ve been listening to a lot of cybersecurity and hacking podcasts lately during my long drives between work and home (around an hour, which is a great chunk of time to download information into my head from things like podcasts). I recently finished an awesome series called Darknet Diaries, in which a dude interviews people and tells stories about hacking and exploits. From the website:

Darknet Diaries is a podcast covering true stories from the dark side of the Internet. Stories about hackers, defenders, threats, malware, botnets, breaches, and privacy.

I highly recommend the podcast, it’s super entertaining and very educational in certain security aspects. There is language in some of the episodes, so be aware of that before taking a listen. But don’t let it stop you, the listen is worth it!

So how does this relate to my rsync server? I’ve been listening to another networking and security podcast called Section 9. I’ve only gone through a few of their episodes, but they are definitely interesting and fairly educational for me. In their DHCP Failover #71 episode, they mentioned a tool called rsync, which I’d heard about but never had a use for before. Looking it up again, it’s exactly what I’ve been searching for to build my file-tracking system.

Rsync is a file transfer and synchronization tool that you can run over SSH (or over its own unencrypted transport, which isn’t a very good idea if you’re transferring information you don’t want meddled with). By itself it is not encrypted at all, but if it’s run through an SSH connection, everything is encrypted by SSH, which is exactly what I want.

First I set up a pair of SSH keys between my server and laptop (I’m away from home, so I can’t access my tower until I get back).

To set that up, I followed DigitalOcean’s great guide. They have some awesome stuff.


Running ssh-keygen should do the trick for ya.
It’ll display something along the lines of:

Generating public/private rsa key pair.
Enter file in which to save the key (/thallia/.ssh/id_rsa):

The default is typically the best path to go with, unless you have another one in mind.

Enter passphrase (empty for no passphrase):

If you want an extra layer of security, you can enter a passphrase here. It’s just like a password, and you’ll be prompted for it every time you use the key to SSH into the server. It’s a good setting to have, but not required.

Finally it’ll display a success message, and you can copy the public key to the remote server or other file-sharing device you want! To do that:

ssh-copy-id username@remote_host

And with that, the keys should be all set up.

Next, I had to organize all the files I wanted to sync into a folder. There’s a project/life/config/all things folder that I have set up, but not entirely organized, so I went through and figured that all out so configs wouldn’t mess up the systems I have going on (like fish, tmux, i3, etc).

Right now, I only want to push my laptop’s directory with all the goodies to my remote server. To do that, this command:

rsync -a ~/dir1 username@remote_host:destination_directory

Should do the trick. The -a flag “is a combination flag. It stands for ‘archive’ and syncs recursively and preserves symbolic links, device files, modification times, group, owner, and permissions. It is more commonly used than -r and is usually what you want to use” (from DigitalOcean’s rsync tutorial).

I’ll replace ~/dir1 with the directory I want to copy, and username@remote_host:destination_directory with my server’s login@ip-address:and/the/destination/directory/for/the/copied/files.

I had a lot of stuff in that copied directory, so it took a while, but it displayed every file there was during the syncing process.

Next step was to do it on my tower computer at home! This was an easy change, I generated the keys between the tower and my server, then ran this rsync command to sync from the server to the tower:

rsync -av username@remote_host:source_directory ~/destination/on/tower

and soon enough I had my tower synced up with my laptop 🙂

But to keep it this way, I don’t want to keep having to run these commands back and forth between computers. So I wrote a little bash script to fix that.

The script goes through and asks whether I’m at home, college, or somewhere else. Depending on which answer I give, different IP addresses will be used for rsync to be able to access my traveling laptop. Once that’s figured out, it runs the push and pull rsync commands and syncs all my computers up to my server.
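Here’s a minimal sketch of that script. The locations, IP addresses, and paths are placeholders, not my real setup, and the rsync lines are echoed rather than run:

```shell
#!/bin/sh
# Pick the laptop's address based on where I am, then push/pull with rsync.

pick_host() {
    case "$1" in
        home)    echo "192.168.1.50" ;;        # home LAN address (placeholder)
        college) echo "10.0.0.23" ;;           # campus address (placeholder)
        *)       echo "laptop.example.com" ;;  # anywhere else
    esac
}

location=${1:-home}
laptop=$(pick_host "$location")

# Drop the 'echo' on these two lines to actually sync.
echo rsync -a "$HOME/sync/" "thallia@$laptop:sync/"
echo rsync -a "thallia@$laptop:sync/" "$HOME/sync/"
```

With the SSH keys from earlier in place, the real rsync calls run without a password prompt, which is what makes this cron-able.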

I set the script up on my server with a cronjob. Cron is a scheduler that runs commands at whatever time interval you want. Crontab syntax was weird looking and confusing, until I found this:

*     *     *     *     *        command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- day of week (0 - 6) (Sunday=0)
|     |     |     +----------- month (1 - 12)
|     |     +----------------- day of month (1 - 31)
|     +----------------------- hour (0 - 23)
+----------------------------- min (0 - 59)
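A few example schedules using that layout (the script name is just a placeholder):

```
0 23 * * *     ./rsync.sh    # every day at 23:00
0 * * * *      ./rsync.sh    # at the top of every hour
*/15 * * * *   ./rsync.sh    # every 15 minutes
0 23 * * 0     ./rsync.sh    # 23:00 on Sundays only
```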

I wanted my script to run once every 24 hours, so the line in my crontab -e file looked like this (minute 0, hour 23, meaning 11 PM every day; leaving a * in the minute field would instead run it every minute of that hour):

0 23 * * * ./rsync.sh

To see whether the job ran, you can check out /var/log/syslog and scroll to the very bottom to see if the cronjob executed. 🙂

And now I have a solid filesharing system between my computers!

The only thing missing is a few public IP addresses that I haven’t grabbed yet. Once I get the other IPs my laptop connects from, the script will truly be finished. But the skeleton is certainly working, and I’m very happy about that, especially since it’s my first script.

happy scripting & syncing!