Make Your Own Private Bitcoin Node to Anonymize Transactions over Tor

This article has been updated with instructions to update Tor to the latest version, as I discovered the version in the default repositories was really old. I also added a directive to the systemd service so it shuts down Bitcoin safely when stopped, and amended the bitcoin.conf file. You can find the amendments in the Tor, Installing Bitcoin Core, and Autostart sections.

Chances are you’ve heard of Bitcoin, the anonymous and secure cryptocurrency which has made waves over the years. One of the main issues I see is that people trust others to handle their transactions. So I set about purchasing a tiny Dell netbook with a measly Intel Atom CPU, 2GB of RAM, and a 240GB SSD to act as my primary cryptocurrency wallet; it is more or less my bank. The laptop has a fully encrypted drive, and I keep backups of my wallet keys in three different places.

However, when you run a full core wallet, you have to store a whole copy of the entire blockchain on the device. The Bitcoin blockchain is currently around 200GB if I recall, and that’s a lot of data to hold onto just to transact. Not to mention the whole idea of my netbook was for it to be on only when I needed to transact, since it’s most secure when powered off. So running the blockchain on the laptop was not ideal, as it would always have to be on.

I also want to further anonymize the connections coming in and out, so I wanted to tunnel all of the node’s traffic over a VPN such as Private Internet Access, with a VPN killswitch so it won’t connect if the VPN is down, and to bolt The Onion Router (Tor) on top to further anonymize the transactions. The advantage of all this is that any device on my LAN can transact with the blockchain network directly, using my node to send and receive my transactions rather than trusting other people. And since I am running a full copy of the blockchain, I am also supporting the Bitcoin network by providing another peer with a full copy. This guide will not cover the VPN aspect, but we will cover how to bolt on Tor as well as build your own node entirely.

I originally got the idea from pinode.co.uk, which hosts a lot of these projects, and I am already running a Monero node on a Pi 3B+ with a 128GB flash drive. For this tutorial I am merging the ideas from PiNode with the Thundroid tutorial, and adding some twists and spins of my own. I chose an Odroid Home Cloud 2 as it takes a native SATA hard drive; the HC2 variant fits 3.5″ disks, whereas the HC1 fits 2.5″ disks. Either version is fine, and if you really wanted you could go for the straight XU4, or even a Raspberry Pi 3B+ if you use a 512GB flash drive or larger, or a kit that allows additional drives. For the sake of this tutorial we will be discussing the Odroid platform, but you can use whatever platform you like. Technically you could even use a full dedicated PC, but that seems like a waste of hardware and will be far less power efficient. I prefer the Odroid over the Raspberry Pi as it’s the more powerful hardware platform.

Hardware (all links are to AmeriDroid as I am in the USA):

Odroid Home Cloud 2 with RTC battery, 1TB Seagate IronWolf HDD, and Model 3 Wifi NIC.

The reason I specified using a NAS drive is that this drive will be on 24/7 and always writing as well as reading data. NAS drives are specifically optimized for this kind of behavior, and will therefore be more reliable. You can use a non-NAS drive just fine, but in the long term a NAS drive is best.

Odroid HC2 pictured in the clear acrylic case option with the wireless NIC inserted, and 16GB flash card before it was flashed with the Ubuntu 18.04.1 OS

Optional Hardware:

Odroid HC2 being worked on at my desk with the UART connector kit pictured.

First things first, we have to connect it to the internet. If you are planning on using wifi, follow the Odroid wiki here for nmcli. If using the UART console connection, follow this tutorial here. You will need to download the Ubuntu 18.04 minimal image here and use Etcher to flash it to a MicroSD. Once that is done, put it in the Odroid, boot it up, and either SSH in or connect via the console. Either way, the credentials on first boot are:
username: root
password: odroid

For Raspberry Pi users you will have to look up the credentials for the image you are using.

Prep-Work

We’ll need to take care of some things before we actually make it a Bitcoin node. First let’s create a new user with a secure password and superuser rights, and change the root password. Don’t forget to change “USER” to what you want.

root@btcdroid:~# passwd
root@btcdroid:~# adduser USER
root@btcdroid:~# usermod -aG sudo USER
root@btcdroid:~# adduser bitcoin

Now we need to update the system and change the timezone and locale data, as well as change the hostname in both /etc/hosts and /etc/hostname to match. I named mine “btcdroid” but you can make it whatever you want:

root@btcdroid:~# apt update
root@btcdroid:~# apt dist-upgrade -y
root@btcdroid:~# apt install htop git curl bash-completion jq
root@btcdroid:~# dpkg-reconfigure tzdata
root@btcdroid:~# dpkg-reconfigure locales
root@btcdroid:~# nano /etc/hosts
root@btcdroid:~# nano /etc/hostname

Mount the Hard Drive

Now we need to mount the hard drive. In my case the drive was brand new and unformatted, so I had to format it first; you can follow the instructions here at Digital Ocean if you are in the same situation. Once you have a formatted drive compatible with Linux, we can proceed.

We will need the UUID of the partition that has been created. That’s very simple: run the lsblk command and it will print the names and UUIDs of all drives.

root@btcdroid:~# lsblk --fs

After running that command you should see something like this. Note the UUID it gives you; we will need it for the next steps.

Now we need to edit the fstab with nano and add a whole new line. Replace 123456 with the UUID given by the command above.

root@btcdroid:~# nano /etc/fstab
# New line in /etc/fstab
UUID=123456 /mnt/hdd ext4 noexec,defaults 0 0
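If you prefer, the same line can be built non-interactively instead of via nano. A small sketch; "123456" is the same placeholder UUID as above, so substitute the value lsblk reported:

```shell
# Build the fstab entry from the UUID lsblk reported ("123456" is a placeholder)
uuid="123456"
fstab_line="UUID=${uuid} /mnt/hdd ext4 noexec,defaults 0 0"
echo "$fstab_line"
# on the node, append it with: echo "$fstab_line" | sudo tee -a /etc/fstab
```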

Awesome, now the fstab has been modified and we need to create the mount point, mount it, check it, and set the owner.

root@btcdroid:~# mkdir /mnt/hdd
root@btcdroid:~# mount -a
root@btcdroid:~# df /mnt/hdd

At this point if everything was done correctly you should see something similar to this.


Now let’s give the bitcoin user we made earlier ownership of the entire hard drive:

root@btcdroid:~# chown -R bitcoin:bitcoin /mnt/hdd/

Moving Swap to the HDD

Now we need to move the swap file to the HDD. So we need to install a package and then do some configuration changes.

root@btcdroid:~# apt install dphys-swapfile
root@btcdroid:~# nano /etc/dphys-swapfile
#Add the following lines
CONF_SWAPFILE=/mnt/hdd/swapfile
CONF_SWAPSIZE=2048
root@btcdroid:~# dphys-swapfile setup
root@btcdroid:~# dphys-swapfile swapon
root@btcdroid:~# shutdown -r now

Hardening The Security

The device should now be rebooting, reconfigured with a 2GB swap file on the hard drive. At this point, log back in as the regular user and not as root, because we are about to disable root login over SSH. (If you are using the optional UART serial connection kit, you can still log in as root that way.) Now let’s continue on and remove the old swap file.
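The old swap file’s location varies by image, so this is only a hedged sketch: check "swapon --show" first, as the /swapfile path below is just an example, not necessarily where your image keeps it.

```shell
# Remove a leftover swap file if one exists (/swapfile is an example path;
# verify the real location with `swapon --show` before deleting anything)
old_swap="/swapfile"
if [ -f "$old_swap" ]; then
    sudo swapoff "$old_swap" 2>/dev/null
    sudo rm "$old_swap"
    removed="removed $old_swap"
else
    removed="nothing to do at $old_swap"
fi
echo "$removed"
```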

SSH Hardening

We need to lock down remote access over SSH, and Digital Ocean has a great guide on SSH security. I highly recommend disabling password logins and requiring an SSH key pair; you can read the tutorial here. We will definitely be disabling root access as well. Allowing root login is a major security risk: everyone knows Linux has a root user, so attackers get a valid username for free.

Type the following command to edit the sshd_config file.

user@btcdroid:~$ sudo nano /etc/ssh/sshd_config

#Find the following line
PermitRootLogin yes
#Change it to no so it looks like below
PermitRootLogin no
#Save and quit
user@btcdroid:~$ sudo service sshd restart

That disables root login, but again, I highly recommend allowing only SSH key pair logins, as they are far more secure than passwords.
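A quick sketch of the key pair setup, done on your client machine rather than the node; the file name "btcdroid" is just an example:

```shell
# On the CLIENT machine: generate an Ed25519 key pair for the node
# ("btcdroid" is an example file name; -N "" means no passphrase,
# though a passphrase is more secure if your client supports an agent)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/btcdroid -N "" -C "btcdroid access" -q
# then copy it over: ssh-copy-id -i ~/.ssh/btcdroid.pub USER@btcdroid
ls ~/.ssh/btcdroid.pub
```

After the public key is installed on the node, set PasswordAuthentication no in sshd_config to require keys.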

Firewall

One of my favorite tools, which I have written about before, is Uncomplicated Firewall (ufw). We are going to open only pinholes in the firewall for the traffic we need, and restrict SSH to the local LAN so it is less likely to be brute forced. We will also add some defenses against brute forcing in a bit.

The line "ufw allow from 192.168.0.0/24" below assumes that the IP address of your btcdroid is something like 192.168.0.xxx, with xxx being any number from 0 to 255. If your IP address is 12.34.56.78, you must adapt the line to "ufw allow from 12.34.56.0/24". Otherwise you will lock yourself out for good, unless you connect the UART serial connection kit.
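If you’re unsure which /24 to use, here is a small sketch that derives it from your node’s address (the IP value below is an example; substitute your BTCDroid’s actual LAN IP, and note this assumes a /24 netmask):

```shell
# Derive the /24 subnet for the ufw rule from the node's LAN IP
ip="192.168.0.42"            # example: replace with your BTCDroid's actual IP
subnet="${ip%.*}.0/24"       # drop the last octet, append .0/24
echo "sudo ufw allow from ${subnet} to any port 22 comment 'allow SSH from local LAN'"
```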

user@btcdroid:~$ sudo apt install ufw
user@btcdroid:~$ sudo ufw default deny incoming
user@btcdroid:~$ sudo ufw default allow outgoing

# make sure to use the correct subnet mask and IP ranges. (see warning above)
user@btcdroid:~$ sudo ufw allow from 192.168.0.0/24 to any port 22 comment 'allow SSH from local LAN'
user@btcdroid:~$ sudo ufw allow 9735 comment 'allow Lightning'
user@btcdroid:~$ sudo ufw allow 8333 comment 'allow Bitcoin mainnet'
user@btcdroid:~$ sudo ufw allow 18333 comment 'allow Bitcoin testnet'
user@btcdroid:~$ sudo ufw enable
user@btcdroid:~$ sudo systemctl enable ufw
user@btcdroid:~$ sudo ufw status

Now we should install Fail2Ban, which I have talked about often. After five unsuccessful SSH attempts it blocks the offending IP for ten minutes, making a brute-force attack almost impossible to conduct.

user@btcdroid:~$ sudo apt install fail2ban
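Fail2Ban ships with those limits as defaults, but if you want them explicit and easy to tune, a minimal sketch of a local override file (the jail.local path is the standard place for overrides; the values mirror the behavior described above):

```ini
# /etc/fail2ban/jail.local - minimal sketch; these match the defaults
# described above, written out so they are easy to adjust later
[sshd]
enabled  = true
maxretry = 5
bantime  = 600
findtime = 600
```

Restart the service with "sudo systemctl restart fail2ban" after editing.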

Increase open file limit

In case your BTCDroid is swamped with internet requests (honest, or malicious due to a DDoS attack), you will quickly encounter the "can't accept connection: too many open files" error. This is due to the limit on open files (each representing an individual TCP connection) being set too low.

Edit the following three files, add the additional line(s) right before the end comment, save and exit.

user@btcdroid:~$ sudo nano /etc/security/limits.conf
#add/change the following lines
*    soft nofile 128000
*    hard nofile 128000
root soft nofile 128000
root hard nofile 128000

user@btcdroid:~$ sudo nano /etc/pam.d/common-session
#add the following line
session required pam_limits.so

user@btcdroid:~$ sudo nano /etc/pam.d/common-session-noninteractive
#add the following line
session required pam_limits.so

Installing Bitcoin Core

We’re finally ready to start with the fun parts. These parts were mostly derived from pinode.co.uk, but seem to work perfectly fine for the Odroid HC2, albeit with some tweaks we have already performed specific to the Odroid platform.

First we need to install our dependencies:

user@btcdroid:~$ sudo apt install autoconf libevent-dev libtool libssl-dev libboost-all-dev libminiupnpc-dev -y

Now we need to make a directory to download the source into, and then download it using git:

user@btcdroid:~$ mkdir ~/bin
user@btcdroid:~$ cd ~/bin
user@btcdroid:~$ git clone -b 0.17 https://github.com/bitcoin/bitcoin.git

After it’s downloaded, we are going to configure, compile, and install. In the final commands I tell make to run six jobs at the same time, since the Odroid has eight cores and can build faster that way. You may want to reduce that number to two on a Raspberry Pi. You can also drop the “-jX” switch to run a single job, although that may take a couple of hours. Once you run the make command, go make dinner or something, because this will take an hour or two even on the Odroid XU4’s eight-core Samsung Exynos 5422 CPU.

user@btcdroid:~$ cd bitcoin
user@btcdroid:~$ ./autogen.sh
user@btcdroid:~$ ./configure --enable-upnp-default --disable-wallet
user@btcdroid:~$ make -j6
user@btcdroid:~$ sudo make install
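The "-j6" above suits the Odroid’s eight cores; a portable way to pick the job count on any platform is to derive it from the core count, leaving a couple of cores free (a sketch; nproc is part of coreutils):

```shell
# Pick a parallel build job count: total cores minus two, but at least one
jobs=$(( $(nproc) - 2 ))
[ "$jobs" -lt 1 ] && jobs=1
echo "suggested build command: make -j${jobs}"
```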

Now we need to prepare the Bitcoin directory. We’re going to switch to the non-superuser we created earlier, which we named bitcoin, although you can name it whatever you want. The most important thing is that this user can only administrate the bitcoin node itself and cannot make any system changes. This is one of the great things about Linux’s security and permission model versus Windows: in theory it isolates an attack, so at worst an attacker can mess with the bitcoin services but not the operating system itself.

We use the Bitcoin daemon, called “bitcoind”, that runs in the background without user interface and stores all data in the directory /home/bitcoin/.bitcoin. Instead of creating a real directory, we create a link that points to a directory on the external hard disk.

user@btcdroid:~$ sudo su bitcoin

# add symbolic link that points to the external hard drive
bitcoin@btcdroid:~$ mkdir /mnt/hdd/bitcoin
bitcoin@btcdroid:~$ ln -s /mnt/hdd/bitcoin /home/bitcoin/.bitcoin

# Navigate to home directory and check the symbolic link (the target must not be red).
bitcoin@btcdroid:~$ cd ~
bitcoin@btcdroid:~$ ls -la

Now we need to configure the Bitcoin daemon. Make sure to set an extremely secure RPC username and password, separate from your username and password on the system. Then we will log out of the bitcoin user to set up Tor.

bitcoin@btcdroid:~$ nano /home/bitcoin/.bitcoin/bitcoin.conf

# BTCDroid: bitcoind configuration
# /home/bitcoin/.bitcoin/bitcoin.conf

# Bitcoind options
server=1
daemon=1
txindex=1
disablewallet=1

# Connection settings
rpcuser=SECURE_USERNAME
rpcpassword=SECURE_PASSWORD

# Optimizations for Odroid Hardware
dbcache=192
maxorphantx=60
maxmempool=192
maxconnections=80
maxuploadtarget=5000



# Optimizations for Raspberry Pi 3B
# These are the values I recommend for a Raspberry Pi 3B; uncomment them
# and comment out the Odroid ones above if that is your platform
#dbcache=96
#maxorphantx=30
#maxmempool=96
#maxconnections=40
#maxuploadtarget=5000

bitcoin@btcdroid:~$ exit

Tor IT Up

Now we get to install Tor to encapsulate, encrypt, and anonymize all of the node’s traffic. We are going to install Tor from an added repository that gives us the most up-to-date version, as the one in the default repositories is really old.

First we will add a couple of entries to /etc/apt/sources.list.d/, import the GPG key so apt accepts the repository, update our repositories, and finally install Tor.

user@btcdroid:~$ sudo nano /etc/apt/sources.list.d/tor.list
#Add the following lines and then save and close
deb https://deb.torproject.org/torproject.org bionic main
deb-src https://deb.torproject.org/torproject.org bionic main
#save and exit
user@btcdroid:~$ curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import
user@btcdroid:~$ gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
user@btcdroid:~$ sudo apt update
user@btcdroid:~$ sudo apt install tor deb.torproject.org-keyring tor-arm nyx

Now we need to configure Tor

user@btcdroid:~$ sudo nano /etc/tor/torrc
#add these settings to the bottom of the file
ControlPort 9051
CookieAuthentication 1
CookieAuthFileGroupReadable 1
HiddenServiceDir /var/lib/tor/bitcoin-service/
HiddenServicePort 8333 127.0.0.1:8333
#save and exit
user@btcdroid:~$ sudo systemctl restart tor.service
#Get your Tor hostname
user@btcdroid:~$ sudo cat /var/lib/tor/bitcoin-service/hostname

The last command prints a hostname ending in “.onion”; note it down, as we will need it shortly.

Configure Everything to Autostart

Now we need everything to start on boot, so we will make a systemd service that starts our Bitcoin node the way we want: running as the bitcoin user and passing traffic through Tor. At this point you have a choice: run only over Tor, or run over Tor, IPv4, and IPv6. The Tor-only option is more anonymous; the other is a dual mode that can still sync if Tor is down, and will sync faster. The choice is yours. Just uncomment the ExecStart line you want, and don’t forget to replace HOSTNAME.onion in that line with the hostname we pulled earlier. After that we will reboot and see if everything works. Make sure to put the username we created at the beginning where it says USER_NAME.

user@btcdroid:~$ sudo nano /etc/systemd/system/bitcoind.service

# BTCdroid systemd unit for bitcoind
# /etc/systemd/system/bitcoind.service

[Unit]
Description=Bitcoin daemon
After=network.target

[Service]
#Uncomment the ExecStart string below to force the node to only run over Tor
#ExecStart= /usr/local/bin/bitcoind -datadir=/home/bitcoin/.bitcoin/data -daemon -proxy=127.0.0.1:9050 -externalip=HOSTNAME.onion -conf=/home/bitcoin/.bitcoin/bitcoin.conf -listen -bind=127.0.0.1 -pid=/run/bitcoind/bitcoind.pid

#Uncomment the ExecStart string below to allow Tor, IPv4, and IPv6 connections
#ExecStart= /usr/local/bin/bitcoind -datadir=/home/bitcoin/.bitcoin/data -daemon -proxy=127.0.0.1:9050 -externalip=HOSTNAME.onion -conf=/home/bitcoin/.bitcoin/bitcoin.conf -listen -discover -pid=/run/bitcoind/bitcoind.pid

#Tells bitcoind to shut down safely when stopped.
ExecStop= /usr/local/bin/bitcoin-cli stop


# Creates /run/bitcoind owned by bitcoin
RuntimeDirectory=bitcoind
User=bitcoin
Group=bitcoin
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
Restart=on-failure

# Hardening measures
####################

# Provide a private /tmp and /var/tmp.
PrivateTmp=true

# Mount /usr, /boot/ and /etc read-only for the process.
ProtectSystem=full

# Disallow the process and all of its children to gain
# new privileges through execve().
NoNewPrivileges=true

# Use a new /dev namespace only populated with API pseudo devices
# such as /dev/null, /dev/zero and /dev/random.
PrivateDevices=true

# Deny the creation of writable and executable memory mappings.
MemoryDenyWriteExecute=true

[Install]
WantedBy=multi-user.target
#save and exit
user@btcdroid:~$ sudo systemctl enable bitcoind.service
user@btcdroid:~$ sudo shutdown -r now
user@btcdroid:~$ mkdir /home/USER_NAME/.bitcoin
user@btcdroid:~$ sudo cp /home/bitcoin/.bitcoin/bitcoin.conf /home/USER_NAME/.bitcoin/
user@btcdroid:~$ sudo chown USER_NAME:USER_NAME /home/USER_NAME/.bitcoin/bitcoin.conf

Now it should be restarting, so give it a minute and reconnect as the user we created in the beginning. It may take a few minutes for the node to get its first connections, and then it will start pulling in blocks. You can check the status with the bitcoin-cli command.

user@btcdroid:~$ bitcoin-cli getblockchaininfo

It should display something like this, and as long as the number of blocks is increasing every few minutes, it is running fine. Bear in mind this could take a few days, as we need to download at least 200GB (at the time of writing) to be up to date with the blockchain.

Output of bitcoin-cli
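For a rough sense of progress, you can compare the "blocks" and "headers" counters in that output. A sketch of the arithmetic with sample stand-in numbers; on the node you would extract the real values with jq, which we installed earlier (e.g. bitcoin-cli getblockchaininfo | jq .blocks):

```shell
# Rough sync progress from getblockchaininfo's counters
# (sample values below; substitute the numbers your node reports)
blocks=250000
headers=560000
pct=$((100 * blocks / headers))
echo "synced ${blocks} of ${headers} blocks (~${pct}%)"
```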

In addition to checking the status of the blockchain download, you can monitor the traffic over Tor with Nyx.

user@btcdroid:~$ sudo nyx
Seeing the traffic via Tor on Nyx

Auto Update Security Patches

Since this is a device we are most likely going to leave on and unattended, it’s best to have it automatically apply security patches so it can maintain itself. So let’s enable and configure the unattended-upgrades package. The first step brings up an interactive prompt, and then we proceed to editing the files.

user@btcdroid:~$ sudo dpkg-reconfigure --priority=low unattended-upgrades
user@btcdroid:~$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
#Modify these lines in the file to look like the following, although you can
#make it reboot whenever you want. Make sure there is a semicolon at the end
#of each line. You can uncomment the "${distro_id}:${distro_codename}-updates";
#line if you want it to update non-security packages too.

#near the top of the file
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        // Extended Security Maintenance; doesn't necessarily exist for
        // every release and this system may not have it installed, but if
        // available, the policy for updates is such that unattended-upgrades
        // should also install from here by default.
        "${distro_id}ESM:${distro_codename}";
//      "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};

#below are spread out in the same file
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:30";
#save and exit 

That’s it, you’re all finished. Let me know what you think or if you have any improvements to the project. I may eventually be hosting these on a Supermicro server in my rack with a ZFS array next year.

How to make a superfast and lightweight WordPress server hosting multiple sites

So after writing the long tutorial about how to make a superfast and lightweight stack on Ubuntu 18.04 (read it here if you haven’t already), I am going to show you how to modify this stack a bit to host multiple websites, which means redoing some of the work we did before. I used these same steps, with tweaks for my own security setup, when consolidating beinglibertarian.com, rationalstandard.com, and think-liberty.com, although when I moved those sites they were already established with content. So before doing anything, I first made a backup using Updraft Plus; I have mine set to back up to Digital Ocean S3 buckets, so I waited for the archives to upload. If you don’t have your sites backing up remotely, I highly recommend that you do, but if you don’t, you can download the archives through Updraft Plus. Once you have a backup, let’s proceed.

Thoughts and Considerations:

Since this server will be hosting more than one website, you will most likely need more resources than the base Digital Ocean VPS, so plan accordingly.

Make the Databases:

For the sake of this article I will refer to three separate domains: example1.com, example2.com, and example3.com. First make sure you have done everything in the prior article I linked above on this server. If you’ve already made a database for one of the sites, skip creating the DB for that one and wait until we get to modifying Nginx. For this article we will assume you have a blank server with resources for three sites, ready for databases.

sudo mysql -u root -p

Type in your password when prompted. This will open a MariaDB shell session. Once in the MariaDB console it’s time to make the three databases; add more or fewer depending on your requirements. Everything you type here is treated as a SQL query, so make sure you end every line with a semicolon! This is very easy to forget. Here are the commands to create each new database and user, and assign privileges to that user:

MariaDB [(none)]> CREATE DATABASE ex1wpdb DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci; 
MariaDB [(none)]> GRANT ALL ON ex1wpdb.* TO 'ex1wpdbuser'@'localhost' IDENTIFIED BY 'securepassword1';
MariaDB [(none)]> CREATE DATABASE ex2wpdb DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci; 
MariaDB [(none)]> GRANT ALL ON ex2wpdb.* TO 'ex2wpdbuser'@'localhost' IDENTIFIED BY 'securepassword2';
MariaDB [(none)]> CREATE DATABASE ex3wpdb DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
MariaDB [(none)]> GRANT ALL ON ex3wpdb.* TO 'ex3wpdbuser'@'localhost' IDENTIFIED BY 'securepassword3';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> quit

Note that although it’s customary to write SQL statements in ALL CAPS like this, it is not strictly necessary. Also, where I’ve used ex1wpdb, ex1wpdbuser, and securepassword1, make sure to substitute your own choices, and make the username, password, and database name different for each site. The last thing you want is an easy-to-guess database name and password.
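One quick way to generate a distinct strong password for each database user is with openssl, which ships with Ubuntu (a sketch; 24 random bytes encode to 32 base64 characters):

```shell
# Generate a strong random password for a database user
pw=$(openssl rand -base64 24)
echo "example generated password: ${pw}"
```

Run it once per site so every database gets its own credentials.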

Create the web folders:

Each site will need its own folder for its files. If you followed my prior article and placed the files in /var/www/html, you’ll want to create a subfolder and move the files there, with your own domain name of course:

sudo mkdir /var/www/html/example1.com
cd /var/www/html
sudo find . -mindepth 1 -maxdepth 1 ! -name example1.com -exec mv -t example1.com {} +

This will have moved all the files into the subfolder. If you are starting fresh and the server is not currently hosting any files, just create the subdirectories yourself.

sudo mkdir /var/www/html/example1.com
sudo mkdir /var/www/html/example2.com
sudo mkdir /var/www/html/example3.com

Modify Nginx:

First, since we are going to have multiple sites, let’s rename the “default” site config and redo the symlinks:

sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/example1.com
sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/example1.com /etc/nginx/sites-enabled/example1.com

In the last article we had you edit /etc/nginx/sites-available/default, which you can see in the original post. Well, we are going to remove some of the top lines and modify some other settings. I will mark the changes with comments where relevant.

#Modify /var/run/nginx-cache to be different for each site, e.g. /var/run/nginx-cache, /var/run/nginx-cache2, and so on.
#The keys_zone name must also be different for each site and match the PHP variables set further down.
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=EXAMPLECOM1:100m inactive=60m;
#Three lines were removed, we are placing these somewhere else. So copy the commented out lines for later
#fastcgi_cache_key "$scheme$request_method$host$request_uri";
#fastcgi_cache_use_stale error timeout invalid_header http_500;
#fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

server {
#remove any mention of default_server
  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  #Comment out the lines about an ssl_certificate unless you've already provisioned it.
  #ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem; # managed by Certbot
  #ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
 
  # force redirect to HTTPS from HTTP. COMMENT OUT IF YOU HAVEN'T DONE LETSENCRYPT
  #if ($scheme != "https") {
  #  return 301 https://$host$request_uri;
  #}
 
  client_max_body_size 256M;
  #Must make a different folder for each site. I like to give them each their own folder in /var/www/html but do whatever feels most comfortable to you
  root /var/www/html/example1.com;
  index index.php index.html;
 
  server_name example1.com www.example1.com;
 
  set $skip_cache 0;
 
  if ($request_method = POST) {
    set $skip_cache 1;
  }
 
  if ($query_string != "") {
    set $skip_cache 1;
  }
 
  if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $skip_cache 1;
  }
 
  if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
  }
 
  location ~ /purge(/.*) {
    fastcgi_cache_purge EXAMPLECOM1 "$scheme$request_method$host$1";
  }
 
  location / {
    try_files $uri $uri/ /index.php?$args;
    #The zone name must match what will be set in the next step. Set a different zone name for each and remember what you set
    limit_req zone=ex1 burst=50;
  }
 
  # Turn off directory indexing
  autoindex off;
 
  # Deny access to htaccess and other hidden files
  location ~ /\. {
    deny  all;
  }
 
  # Deny access to wp-config.php file
  location = /wp-config.php {
    deny all;
  }
 
  # Deny access to revealing or potentially dangerous files in the /wp-content/ directory (including sub-folders)
  location ~* ^/wp-content/.*\.(txt|md|exe|sh|bak|inc|pot|po|mo|log|sql)$ {
    deny all;
  }
 
  # Stop php access except to needed files in wp-includes
  location ~* ^/wp-includes/.*(?<!(js/tinymce/wp-tinymce))\.php$ {
    internal; #internal allows ms-files.php rewrite in multisite to work
  }
 
  # Specifically locks down upload directories in case full wp-content rule below is skipped
  location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
  }
 
  # Deny direct access to .php files in the /wp-content/ directory (including sub-folders).
  # Note this can break some poorly coded plugins/themes, replace the plugin or remove this block if it causes trouble
  location ~* ^/wp-content/.*\.php$ {
    deny all;
  }
 
  location = /favicon.ico {
    log_not_found off;
    access_log off;
  }
 
  location = /robots.txt {
    access_log off;
    log_not_found off;
  }
 
  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
    fastcgi_cache EXAMPLECOM1;
    fastcgi_cache_valid 60m;
    include fastcgi_params;
  }
  ## Block file injections
    set $block_file_injections 0;
    if ($query_string ~ "[a-zA-Z0-9_]=http://") {
        set $block_file_injections 1;
    }
    if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") {
        set $block_file_injections 1;
    }
    if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") {
        set $block_file_injections 1;
    }
    if ($block_file_injections = 1) {
        return 403;
  }
  ## Block SQL injections
    set $block_sql_injections 0;
    if ($query_string ~ "union.*select.*\(") {
        set $block_sql_injections 1;
    }
    if ($query_string ~ "union.*all.*select.*") {
        set $block_sql_injections 1;
    }
    if ($query_string ~ "concat.*\(") {
        set $block_sql_injections 1;
    }
    if ($block_sql_injections = 1) {
        return 403;
  }
  ## Block common exploits
    set $block_common_exploits 0;
    if ($query_string ~ "(<|%3C).*script.*(>|%3E)") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "proc/self/environ") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "base64_(en|de)code\(.*\)") {
        set $block_common_exploits 1;
    }
    if ($block_common_exploits = 1) {
        return 403;
  }
}

You will need to make a file in /etc/nginx/sites-available for each site you plan to host on this server, with the appropriate modifications from above. Make sure all relevant Let’s Encrypt parts are commented out for now; we will uncomment those later.

Now remember those three lines at the top I said to copy? Open up /etc/nginx/nginx.conf; we’re pasting them there, inside the http block of the config file, along with a couple of extra settings. This is how the beginning of the file looks for me, as we also added the rate-limiting zones.

 http {
        ##
        # EasyEngine Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 30;
        types_hash_max_size 2048;

        server_tokens off;
        reset_timedout_connection on;
        # add_header X-Powered-By "EasyEngine";
        add_header rt-Fastcgi-Cache $upstream_cache_status;

        # Limit Request
        limit_req_status 403;
        limit_req_zone $binary_remote_addr zone=ex1:10m rate=2r/s;
        limit_req_zone $binary_remote_addr zone=ex2:10m rate=2r/s;
        limit_req_zone $binary_remote_addr zone=ex3:10m rate=2r/s;

        # Proxy Settings
        # set_real_ip_from      proxy-server-ip;
        # real_ip_header        X-Forwarded-For;

        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        fastcgi_cache_use_stale error timeout invalid_header http_500;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_read_timeout 300;
        client_max_body_size 100m;

Now we need to make symlinks from sites-available to sites-enabled. This lets Nginx know these sites are going to be served.

sudo ln -s /etc/nginx/sites-available/example1.com /etc/nginx/sites-enabled/example1.com
sudo ln -s /etc/nginx/sites-available/example2.com /etc/nginx/sites-enabled/example2.com
sudo ln -s /etc/nginx/sites-available/example3.com /etc/nginx/sites-enabled/example3.com

Now test your configs and check for errors

sudo nginx -t

Once the site and nginx config files are all done, point your domains at the server IP address via your registrar, Cloudflare, etc. Let’s place a test file in each of the web folders, restart nginx, and then get on to provisioning Let’s Encrypt for each site.

echo "<?php phpinfo();" | sudo tee /var/www/html/example1.com/index.php > /dev/null
echo "<?php phpinfo();" | sudo tee /var/www/html/example2.com/index.php > /dev/null
echo "<?php phpinfo();" | sudo tee /var/www/html/example3.com/index.php > /dev/null
sudo service nginx restart

If everything went well all domains should pull up PHP info.

Let’s Encrypt your transport:

Now just follow the instructions in the prior article to do the initial steps, and provided you entered your domains correctly into the Nginx config file, certbot should find and install certificates for all of the domains. Make sure to pick a reliable email address for alerts from Let’s Encrypt.

If for some reason certbot can’t find them, or you want an SSL certificate for another domain that is pointed at your server, you can generate a certificate by using “-d domain.tld” for each domain you want, like so. Bear in mind that www.example.tld and example.tld are considered two different domains, so you need to include both in the certificate you generate, along with any other subdomains.

 sudo certbot -d example.tld -d www.example.tld

Now all of the other Let’s Encrypt settings should be fine provided you followed the original guide. Also remember all those lines we commented out that were related to HTTPS, Let’s Encrypt, and TLS? Go back to your site configs, uncomment those, and then restart the nginx server.

sudo service nginx restart

Now provided everything went well with Nginx and Let’s Encrypt, all of your sites should be showing as encrypted. Now it’s time to set up WordPress. Simply go back to the original guide, cd into each directory, run the wp-cli commands and such for each site, and you should be golden!

 

Make a super fast and lightweight WordPress on Ubuntu 18.04 with PHP 7.2, Nginx, and MariaDB

I’ve been building servers for a long while based on the ideas I learned a few years ago from morphatic.com, but I wanted to move on to PHP 7.2, and I also wanted to begin a server migration project to have Beinglibertarian.com, of which I am the CTO, also host our newest members think-liberty.com and rationalstandard.com, since they really liked the speed of our WordPress stack. This is a WordPress stack we will build based on Ubuntu 18.04, Nginx, MariaDB, and PHP 7.2. We will even cover setting up Let’s Encrypt. Just a note that I use Mailgun to deliver the emails; it’s free for up to 10,000 emails per month, and they have an easy-to-use WordPress plugin that makes it super easy to configure. There is of course far more that you can do to secure your server, and while we aren’t going to cover hosting multiple sites in this tutorial, you can understand how I made such a robust server stack on a Digital Ocean Virtual Private Server. You can use this referral code (https://m.do.co/c/0c6bfeff20b7) to get a few dollars free with Digital Ocean when you sign up, and it also helps support the costs of my own hosting.

So first things first, go to Digital Ocean, which we use, or any other VPS or dedicated server provider, get the OS set up, do the basics so we can log in to the box, and point your domain at your server. Once that is done I like to start setting up security, disabling root, and allowing a username to have sudo rights.

For those new to Linux administration, you can use these tutorials on how to add new sudo users and set up SSH keys for even more security, but once that is done let’s move on to the basic security I use.

We definitely want a firewall on our box, but iptables can be a pain to manage. So let’s begin installing things on the box for security, including Uncomplicated Firewall (UFW), which can easily manage firewall rules for us.

Install UFW, Fail2Ban, Nginx, and MariaDB:

In order to use a WordPress plugin for purging the NGINX cache that I talk about below, you have to install a custom version of NGINX. MariaDB is a drop-in replacement for MySQL. You can read about why people think it’s better, but from what I have mostly noticed is that it is incredibly fast compared to MySQL. The MariaDB website has a convenient tool for configuring the correct repositories in your Ubuntu distro. From the command line:

sudo apt update 
sudo apt dist-upgrade -y 
sudo apt install ufw fail2ban
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3050AC3CD2AE6F03
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/rtCamp:/EasyEngine/xUbuntu_18.04/ /' >> /etc/apt/sources.list.d/nginx.list"
sudo apt update
sudo apt install nginx-custom
sudo ufw limit ssh
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el] https://mirrors.evowise.com/mariadb/repo/10.3/ubuntu bionic main'
sudo apt update
sudo apt install mariadb-server

When the following screen comes up, make sure you provide a good secure password that is different from the password you used for your user account.

Next, lock down your MariaDB instance by running:

sudo mysql_secure_installation

Since you’ve already set up a secure password for your root user, you can safely answer “no” to the question asking you to create a new root password. Answer “Yes” to all of the other questions. Now we can set up a separate MariaDB account and database for our WordPress instance. At the command prompt type the following:

sudo mysql -u root -p

Type in your password when prompted. This will open up a MariaDB shell session. Everything you type here is treated as a SQL query, so make sure you end every line with a semicolon! This is very easy to forget. Here are the commands you need to type in to create a new database, user, and assign privileges to that user:

MariaDB [(none)]> CREATE DATABASE mywpdb DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
MariaDB [(none)]> GRANT ALL ON mywpdb.* TO 'mywpdbuser'@'localhost' IDENTIFIED BY 'securepassword';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> quit

Note that although it’s customary to use ALL CAPS to write SQL statements like this, it is not strictly necessary. Also, where I’ve used mywpdb, mywpdbuser, and securepassword, make sure to substitute your own choices. The last thing you want is an easy-to-guess database name and password.
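If you need help inventing that password, the openssl binary already present on virtually every Ubuntu server can generate a strong random one for you. A minimal sketch (the 24 here is the number of random bytes, not output characters):

```shell
# Print a random 24-byte password, base64-encoded (comes out as 32 characters)
openssl rand -base64 24
```

Paste the output into the IDENTIFIED BY clause above, and keep a copy somewhere safe such as a password manager.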

Fail2Ban Installation and Setup:

Fail2Ban is an intrusion prevention software framework that protects servers from brute-force attacks. It’s probably one of my all time favorite security tools as it’s very robust and flexible. In order to make modifications to Fail2Ban we need to make a local copy that we can modify so we can preserve changes.

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now open the newly made file so we can edit it

sudo nano /etc/fail2ban/jail.local

I recommend reading this guide from Digital Ocean on Fail2Ban with Nginx and follow the tutorial to setup and activate the following jails

  1. Change Defaults per tutorial
  2. nginx-http-auth
  3. nginx-badbots
  4. nginx-nohome
  5. nginx-noproxy

Also make sure the SSH and SSH-DDoS jails are enabled, and consider enabling the recidive filter. I also recommend adding a jail for WordPress via the WP Fail2Ban plugin for WordPress, which can be easily installed and activated by following their instructions.
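As an illustration of the shape these jails take in jail.local, an enabled nginx-http-auth jail looks roughly like this (the log path assumes the stock Ubuntu Nginx layout; verify the filter name against your own /etc/fail2ban/filter.d):

```ini
[nginx-http-auth]
enabled  = true
filter   = nginx-http-auth
port     = http,https
logpath  = /var/log/nginx/error.log
```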

Installing and Configuring PHP 7.2:
Since we are using Ubuntu 18.04, PHP 7.2 is the default version of PHP, so simply run this in the terminal:

sudo apt install -y zip unzip php-fpm php-mysql php-xml php-gd php-mbstring php-zip php-curl 

Just an FYI that this also installs the MySQL, XML, cURL, and GD packages so that WordPress can interact with the database, support XML-RPC (required for Jetpack), and automatically crop and resize images. It also installs zip/unzip because I use them in some of my own backup plugins and tools.

I also like to tweak the php.ini settings to allow for more memory and larger file sizes. So let’s open /etc/php/7.2/fpm/php.ini.

sudo nano /etc/php/7.2/fpm/php.ini

You can make this faster by using the search function with CTRL + W and then typing what you’re looking for. I usually recommend increasing post_max_size from the default 8MB, upload_max_filesize from the default 2MB, and memory_limit from its default. I generally set post_max_size and upload_max_filesize to 128M and memory_limit to 256M.
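For reference, after those edits the relevant php.ini lines look like this (these are my values; size them to your own server’s RAM):

```ini
post_max_size = 128M
upload_max_filesize = 128M
memory_limit = 256M
```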

Now let’s restart PHP

sudo service php7.2-fpm restart

Now we need to tell Nginx to use PHP7.2-fpm, so let’s open up our configuration file for our default site.

sudo nano /etc/nginx/sites-available/default

We need to edit the file so that it looks like below, but change example.com and www.example.com to your TLD that you are using with your server.

server {
  listen 80 default_server;
  listen [::]:80 default_server;
 
  root /var/www/html;
  index index.php index.html;
 
  server_name example.com www.example.com;
 
  location / {
    try_files $uri $uri/ =404;
  }
 
  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
  }
 
  location ~ /\.ht {
    deny all;
  }
}

Save and exit, then restart Nginx to apply changes

 sudo service nginx restart

Now it’s time to test it all out and make sure this is all working properly. So let’s make a sample PHP file in your /var/www/html folder called index.php

echo "<?php phpinfo();" | sudo tee /var/www/html/index.php > /dev/null

Now open up your web browser and go to http://SERVER.IP.ADDRESS.HERE (e.g. http://192.168.1.1), and you should see the PHP info page.

Awesome sauce, we’re starting to see it finally coming together! You officially have made a Linux, Nginx, MariaDB, and PHP stack aka a LEMP stack. Honestly at this point you can serve up just about any LEMP needs you have for any software such as NextCloud or more. Let’s move on, the goal line is within sight!

Encrypt! Encrypt! Encrypt! Let’s Encrypt, with TLS/SSL Certificates from letsencrypt.org

This is pretty straightforward, but I recommend reading Digital Ocean’s tutorial on setting up and securing Nginx to fully grasp what we are doing here. So let’s install Let’s Encrypt. You used to have to add a PPA, update, and install certbot, but it’s in the main Ubuntu repos these days, so it’s one command to install the client, and another to install the certificates for the domains defined in the /etc/nginx/sites-available config file as we have done earlier.

sudo apt install -y certbot python-certbot-nginx
sudo certbot --nginx

Now just follow the instructions, and provided you entered your domains correctly into the Nginx config file, certbot should find and install certificates for all of the domains. Make sure to pick a reliable email address for alerts from Let’s Encrypt.

If for some reason certbot can’t find them, or you want an SSL certificate for another domain that is pointed at your server, you can generate a certificate by using “-d domain.tld” for each domain you want, like so. Bear in mind that www.example.tld and example.tld are considered two different domains, so you need to include both in the certificate you generate, along with any other subdomains.

 sudo certbot -d example.tld -d www.example.tld

Now we need to edit the Nginx snippet created by certbot.

 sudo nano /etc/letsencrypt/options-ssl-nginx.conf

Edit it so it looks like below; the top few lines are created by certbot, so add the ones underneath to enhance our security profile.

 
# automatically added by Certbot
ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384";
 
# MANUALLY ADD THESE
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

Now save and exit.

It’s extremely important to renew your Let’s Encrypt certificates at least every couple of months, as they expire every 90 days. So we need to set up a cron job to check for renewals often and renew the certificates automatically. So let’s edit the crontab as root.

sudo crontab -e

Add the following lines so we can have it check and autorenew certificates every Monday.

30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log
35 2 * * 1 /bin/systemctl reload nginx
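If you don’t read crontab syntax often, the five leading fields are minute, hour, day-of-month, month, and day-of-week, so those two entries decode as:

```
# min hour dom mon dow
  30   2    *   *   1    -> 02:30 every Monday: attempt certificate renewal
  35   2    *   *   1    -> 02:35 every Monday: reload nginx to pick up renewed certs
```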

Now let’s save that and run certbot in a dry run to see if renewals will work.

sudo certbot renew --dry-run

Now it’s time to install WordPress
Personally I like to install WP-CLI and then finish up in the web UI. I love WP-CLI as it is a command line interface for administering WordPress. So if a worst case happens, say you lock yourself out and can’t reset the password, or you need to install or deactivate a plugin that is preventing WordPress from working, it can do it. It’s extremely powerful and handy to have on a system regardless. So let’s install it, then have it download the WordPress files.

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
cd /var/www/html
sudo wp core download --allow-root

Now go to your domain and run the WordPress quick install; it’s straightforward, just enter all the information it asks for. Once that is done, hop back into the terminal and let’s use WP-CLI to easily install some plugins I recommend with this setup to integrate with the caching on the OS, Fail2Ban, and more. If you’re not planning to use Mailgun, I recommend gmail-smtp if you use Gmail, but do not install both Mailgun and Gmail SMTP; pick one. I also added Cloudflare because I use that, and it’s a free CDN as well as a proxy to help avoid DDoS attacks; they have a great free plan. I also added WP-Sweep, a great database cleaner tool, and UpdraftPlus, one of the best WordPress backup plugins, plus iThemes Security, which I really like for its many free security features.

sudo wp plugin delete hello --allow-root
sudo wp plugin install nginx-helper --allow-root
sudo wp plugin activate nginx-helper --allow-root
sudo wp plugin install mailgun --allow-root
sudo wp plugin activate mailgun --allow-root
sudo wp plugin install jetpack --allow-root
sudo wp plugin activate jetpack --allow-root
sudo wp plugin install gmail-smtp --allow-root
sudo wp plugin activate gmail-smtp --allow-root
sudo wp plugin install cloudflare --allow-root
sudo wp plugin activate cloudflare --allow-root
sudo wp plugin install wp-sweep --allow-root
sudo wp plugin activate wp-sweep --allow-root
sudo wp plugin install updraftplus --allow-root
sudo wp plugin activate updraftplus --allow-root
sudo wp plugin install better-wp-security --allow-root
sudo wp plugin activate better-wp-security --allow-root

Mailgun Setup
You’ll need to set up an account. Despite their recommendation, I suggest making your Mailgun domain the same as your regular domain; do not use a subdomain. The reason why is you can set up a forwarding rule so if, say, someone emails you at [email protected], it looks professional and forwards to a Gmail account. After you’ve set up your domain at Mailgun, go to Settings > Mailgun from the WP dashboard, copy and paste in your Mailgun domain name and API key, and then click “Save Changes” to get it set up. Click “Test Configuration” to make sure it is working. You may also want to use the Check Email plugin just to make sure that emails are being sent correctly.

GMail SMTP Setup
If you set up the GMail SMTP servers in your DNS according to this guide, you’ll want to have installed the GMail SMTP plugin for WP. The setup for this plugin is somewhat involved, so I strongly urge you to follow the instructions on their documentation site.

Time to Optimize and Secure the WordPress

Here are some tips for securing and optimizing your WordPress install. Simply replace the contents of /etc/nginx/sites-available/default with the following, and make sure any reference to “example.com” reflects your actual domain and TLD.

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
 
server {
  listen 80 default_server;
  listen [::]:80 default_server;
  listen 443 ssl http2 default_server;
  listen [::]:443 ssl http2 default_server;
  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
 
  # force redirect to HTTPS from HTTP
  if ($scheme != "https") {
    return 301 https://$host$request_uri;
  }
 
  client_max_body_size 256M;
  root /var/www/html;
  index index.php index.html;
 
  server_name example.com www.example.com;
 
  set $skip_cache 0;
 
  if ($request_method = POST) {
    set $skip_cache 1;
  }
 
  if ($query_string != "") {
    set $skip_cache 1;
  }
 
  if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $skip_cache 1;
  }
 
  if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
  }
 
  location ~ /purge(/.*) {
    fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
  }
 
  location / {
    try_files $uri $uri/ /index.php?$args;
    limit_req zone=one burst=50;
  }
 
  # Turn off directory indexing
  autoindex off;
 
  # Deny access to htaccess and other hidden files
  location ~ /\. {
    deny  all;
  }
 
  # Deny access to wp-config.php file
  location = /wp-config.php {
    deny all;
  }
 
  # Deny access to revealing or potentially dangerous files in the /wp-content/ directory (including sub-folders)
  location ~* ^/wp-content/.*\.(txt|md|exe|sh|bak|inc|pot|po|mo|log|sql)$ {
    deny all;
  }
 
  # Stop php access except to needed files in wp-includes
  location ~* ^/wp-includes/.*(?<!(js/tinymce/wp-tinymce))\.php$ {
    internal; #internal allows ms-files.php rewrite in multisite to work
  }
 
  # Specifically locks down upload directories in case full wp-content rule below is skipped
  location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
  }
 
  # Deny direct access to .php files in the /wp-content/ directory (including sub-folders).
  # Note this can break some poorly coded plugins/themes, replace the plugin or remove this block if it causes trouble
  location ~* ^/wp-content/.*\.php$ {
    deny all;
  }
 
  location = /favicon.ico {
    log_not_found off;
    access_log off;
  }
 
  location = /robots.txt {
    access_log off;
    log_not_found off;
  }
 
  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
    fastcgi_cache WORDPRESS;
    fastcgi_cache_valid 60m;
    include fastcgi_params;
  }
  ## Block file injections
    set $block_file_injections 0;
    if ($query_string ~ "[a-zA-Z0-9_]=http://") {
        set $block_file_injections 1;
    }
    if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") {
        set $block_file_injections 1;
    }
    if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") {
        set $block_file_injections 1;
    }
    if ($block_file_injections = 1) {
        return 403;
    }
  ## Block SQL injections
    set $block_sql_injections 0;
    if ($query_string ~ "union.*select.*\(") {
        set $block_sql_injections 1;
    }
    if ($query_string ~ "union.*all.*select.*") {
        set $block_sql_injections 1;
    }
    if ($query_string ~ "concat.*\(") {
        set $block_sql_injections 1;
    }
    if ($block_sql_injections = 1) {
        return 403;
    }
  ## Block common exploits
    set $block_common_exploits 0;
    if ($query_string ~ "(<|%3C).*script.*(>|%3E)") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "proc/self/environ") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") {
        set $block_common_exploits 1;
    }
    if ($query_string ~ "base64_(en|de)code\(.*\)") {
        set $block_common_exploits 1;
    }
    if ($block_common_exploits = 1) {
        return 403;
    }
}

Then while we are at it, let’s make sure /etc/nginx/nginx.conf has an additional parameter we set. So open it up with nano.

sudo nano /etc/nginx/nginx.conf

Then look to see if the following block is there under the http section, and make sure it refers to zone one.

http {
    limit_req_zone  $binary_remote_addr  zone=one:10m   rate=2r/s; 

This config file will take advantage of the advanced caching capabilities of our custom version of NGINX. It will also prevent visitors from accessing files that they shouldn’t be able to access. It also adds some configuration to block SQL and file injection attacks, as well as blocking common exploits. Plus we added some rate limiting to help prevent a Denial of Service attack. The combined effect will be to make your site faster and more secure.
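If you want to sanity-check what those query-string filters will catch before deploying them, you can approximate them with grep, since Nginx’s ~ operator is a case-sensitive regex match. This is only a rough smoke test, not a substitute for testing against the live server:

```shell
#!/bin/sh
# Rough approximation of the SQL-injection filter above using grep -E
check() {
  if printf '%s' "$1" | grep -Eq 'union.*select.*\(|union.*all.*select.*|concat.*\('; then
    echo "BLOCKED: $1"
  else
    echo "allowed: $1"
  fi
}
check "id=1 union all select user,pass"   # caught by union.*all.*select
check "page=about&lang=en"                # ordinary query string passes
```

Note the patterns are case-sensitive as written, so “UNION ALL SELECT” in upper case would slip through; switch ~ to ~* in the Nginx config if you want case-insensitive matching.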

Admins Have to Get Alerts. Set Those Admin Emails Up!

Sometimes things happen and you need to know when they happen. So we need to set up email alerts, and while there are a number of ways to do this, this is the way I recommend for less advanced Linux users. Whether you route through Mailgun or Gmail depends on which one you chose earlier. This is based on this tutorial from the EasyEngine folks. First, install the necessary packages. When prompted about your server type, select “Internet Site”, and for your FQDN the default should be acceptable. Then open the config file for editing:

sudo apt install -y postfix mailutils libsasl2-2 ca-certificates libsasl2-modules
sudo nano /etc/postfix/main.cf

We’ll need to edit the “mydestination” property and add a few more, but we can leave the rest as their defaults.

mydestination = localhost.$myhostname, localhost
relayhost = [smtp.mailgun.org]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes

If you’re using Gmail as your SMTP server, edit it slightly to look like the following

mydestination = localhost.$myhostname, localhost
relayhost = [smtp.gmail.com]:465
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes
smtp_tls_wrappermode = yes
smtp_tls_security_level = encrypt

Now save that file and it’s time to make a file to store our SMTP credentials

 sudo nano /etc/postfix/sasl_passwd

Now add one of the following single lines; only use the one you need for Mailgun or Gmail. Where “PASSWORD” is, of course, put your actual password.

[smtp.mailgun.org]:587 [email protected]:PASSWORD
OR
[smtp.gmail.com]:465 [email protected]:PASSWORD

You’ll have to get the password for the postmaster account from your Mailgun dashboard. The password for the GMail example should be the password for the email address used. Next we need to lock down this file and tell postfix to use it by running the following:

sudo chmod 400 /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
cat /etc/ssl/certs/thawte_Primary_Root_CA.pem | sudo tee -a /etc/postfix/cacert.pem

Now it’s testing time

  sudo /etc/init.d/postfix reload
echo "Test mail from postfix" | mail -s "Test Postfix" [email protected]

If everything went perfectly, you’ll receive an email from the server at the address in the last line. You can also check the Mailgun logs to see if it routed through their servers.

FINISH HIM! Auto Updating the server
So we need to make sure our server is automatically applying security updates, for obvious reasons. So now we need to enable auto updates for apt.

sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

After editing the file it should look like this

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
        "${distro_id}ESM:${distro_codename}";
};
Unattended-Upgrade::Mail "[email protected]";
//Unattended-Upgrade::MailOnlyOnError "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";

This will tell the server to automatically apply security and regular updates, email the admin when updates are done, automatically remove unused dependencies, and automatically reboot at 2AM if necessary. But now we need to edit 10periodic to enable some options.

sudo nano /etc/apt/apt.conf.d/10periodic

Once done it should look like this

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

What that tells it to do is run “apt update” to pull new package lists, download packages that are available for upgrade, automatically clean out package installers weekly, and enable the unattended upgrades we configured prior.

Finally let’s do one last update and clean the server up for its maiden voyage.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoclean
sudo reboot

Another thing that may be useful is adding a swap file. I prefer to instead go to a larger server with more RAM, as a swap file isn’t as ideal as more RAM, but it’s better than nothing if you absolutely need it. Digital Ocean has a great tutorial here.

Conclusion:
This was a bit of a longer tutorial, and there is a whole lot more you can do, from additional WordPress plugins, to a CDN like Cloudflare which can really speed up the site, to additional security from Port Scan Attack Detector (PSAD), additional blocklists, and more. I hope to cover an addition to this tutorial in the future detailing how I got multiple websites on the same box using a slightly modified version of this stack.

Announcing My New Project for Cryptocurrency: Liberty Wallet

So I am working on a project which I think will benefit all cryptocurrency users. I saw one of my friends trust one of those third-party wallets and basically have his life savings taken away, because ultimately your money depends on how trustworthy they are. That got me thinking.
Ok so here is the long and short of my idea:
I am looking to form a project I call “Liberty Wallet.” I’m going to create a software package for a very secure, easy-to-deploy setup for a laptop which will act as a dedicated cryptocurrency wallet, to be used optionally in conjunction with a hardware wallet or merely by itself. It will be an easy set of tools already out there, plus a way to lock down the whole OS for security.
All code will be free, Libre, and open source; it will not cost any money, and it will be funded by my own money and donations (you are free to donate if you like).
The concept is simple:
Not all cryptocurrency can be stored on a hardware wallet, as support has to be coded into the hardware wallet, and new cryptocurrencies come out rather fast, so new wallets are needed all the time.
Unfortunately most people put a wallet on a third-party service you have to trust, or on a general purpose machine, which is rather insecure as it has a wide attack surface. One wrong website visited with malware looking for crypto and it’s gone.
So my solution is to create a set of tools, and eventually a dedicated distribution of Linux based on Arch and/or Ubuntu, which can run on a dedicated PC with an x86 CPU (AMD or Intel), which is encrypted, and which has tools to create an encrypted backup via VeraCrypt and a cloud provider or external drive (HDD or thumb drive).
So imagine you have a small, low power machine dedicated to managing your cryptocurrencies, which can be backed up, that is encrypted, and that has a very small attack surface because it would be hardened.
Furthermore I will use the Ledger Nano S in ways to help lock my device and add additional security as an option.

The project will also include features I implemented in some prior mining projects, which created Bitcoin paper wallets and other things. So I will look at ways to create paper wallets within the system, both online and offline, so you can offload crypto onto QR codes.

The project will utilize already existing wallets for many of the cryptocurrencies and allow users to add their own wallets for new cryptocurrencies as they come out.

This way you have ultimate control over your own cryptocurrency.

Eventually I would like to have a team of developers freely contributing code to improve the project, and when I get it to a stable, or at least decently feature-filled, state I will announce it to the world via the proper channels through Being Libertarian (of which I am the CTO).

All I would really need for development is two refurbished low power laptops (I found a Dell 2120 on Newegg for $80) and a couple of SSD upgrades (also $80 each): one laptop for Ubuntu development and the other for Arch development. I’m going to save up money here and there over the next month towards this, but if anyone wants to chip in, message me. I will have ALL of the code released under GPL v3 to ensure it will be Free, Libre, and Open Source even if forked, where applicable, as well as have my progress be completely transparent on GitHub.

I want to do this because the current solutions that are ideal are usually hand built by people like me who keep them to ourselves. I want to simplify the process so someone can grab a device off the shelf, follow a walkthrough and guide, and easily set it up themselves on their own hardware, whether it be a tiny netbook as I use, or a whole separate laptop or desktop.

The first step is to create scripts that deploy Ubuntu, or a derivative of it, locked down and with some wallets installed, along with a walkthrough and guide to deploy an encrypted Ubuntu setup.

The second step is to create a full distribution in Arch, and possibly an Ubuntu version as well, which installs with LVM and LUKS for full-disk encryption to prevent unauthorized access in person.

The third step is to create a full setup for a Raspberry Pi to operate on something like Noodle Pi or Pi Laptop for an ultra portable and secure setup.

Thoughts? Comments? Concerns? Want to find out how to help out?

If anyone wants to help me get the hardware for the Liberty Wallet project, you can send Ethereum and Litecoin to the addresses below. I’m only trying to raise a total of $320-400, including shipping, for all the hardware required to develop the Ubuntu and Arch versions. Eventually I will develop a Raspberry Pi version as well, once the x86 Arch and Ubuntu versions are done, and I am looking to build it on the Noodle Pi.
Bitcoin:
1zuecfedVxrmzWrEZAuBu8HeTd1MzDpme

Litecoin: LSXuvtb2qexMBWkf7GJXC5qdypLCsEKeC9

Ethereum: 0x8589aAa4A016402780Da6E7e5c958418e2e2b2f5

Apologies for being away so long

I had some pressing matters with my team at Being Libertarian LLC. I have since published some new articles there, which you can read through this link: https://beinglibertarian.com/author/ganon/. This weekend I plan to be creating a brand new WordPress stack, as this server is on the older Ubuntu 14.04 LTS with a GNU/Linux, Apache, MySQL, and PHP (5.7) LAMP stack. The new WordPress stack I will be building will utilize Ubuntu 16.04 GNU/Linux, Nginx (Easy Engine), MariaDB, and PHP (7), making it a LEMP stack. Over the past few months I have grown to love Nginx more than Apache, so I am ditching LAMP stacks for LEMP. It will incorporate SQL injection filters, Cloudflare DNS proxying and DDoS mitigation, and Nginx caching, amongst many other things. Yes, I will be putting forth a tutorial on how to do what I will be making. I also hate phpMyAdmin and prefer working directly in a shell, so no, the tutorial will not include phpMyAdmin. This is in part because I see it as one more way for a website to be compromised.
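As a rough illustration of the kind of server block such a LEMP stack ends up with (the domain, paths, and PHP-FPM socket here are placeholders of my own, not Easy Engine’s actual output):

```nginx
# Hypothetical minimal Nginx server block for a WordPress site on a
# LEMP stack (Nginx + MariaDB + PHP 7). Domain, paths, and socket
# are illustrative examples only.
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/htdocs;
    index index.php;

    location / {
        # Hand pretty permalinks to WordPress's front controller
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # PHP-FPM socket path varies by distribution and PHP version
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
```

Easy Engine generates a much more elaborate configuration (caching, security filters, and so on); this is only the skeleton the rest hangs off of.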

I am not sure whether or not I will cover the SQL database migration, as those tutorials are widely available across the web. Hopefully I should have the entire process documented by late next week.

Moving to Xubuntu 16.04 for my personal items

Well, my Home Theater PC is just about finished… I just installed Samba and enabled OpenSSH via my SSH key, along with my other goodies. Now I’m just doing the final tweaks to smooth out video playback via xorg.conf; hopefully all the settings will carry over perfectly, as this is the same exact GPU and such.
But my Nvidia driver is 361 instead of the 358 I was using on 14.04.
I am just hoping I don’t have to recompile a kernel for the “Intel Core 2 or Newer” CPU optimization, along with changing the system timer from 250Hz to at least 300Hz, which divides evenly into both 30FPS (US NTSC video) and 25FPS (European PAL). If I do have to recompile, I will bump it to a 1000Hz timer for faster responsiveness, as it’s a desktop, though 1000Hz only divides evenly into 25FPS, not 30. If I have to recompile for the Home Theater, I am merely going to recompile from the Ubuntu sources rather than directly from kernel.org. My laptop will be getting a custom kernel from kernel.org though, as it doesn’t have a proprietary GPU driver for smoother video playback.
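The frame-rate arithmetic above is easy to sanity-check in a shell: a timer frequency divides evenly into a frame rate exactly when the remainder is zero.

```shell
#!/bin/sh
# Check which kernel timer frequencies (CONFIG_HZ values) divide
# evenly into the common frame rates: 30 fps (NTSC) and 25 fps (PAL).
for hz in 250 300 1000; do
    echo "HZ=$hz -> 30fps remainder: $((hz % 30)), 25fps remainder: $((hz % 25))"
done
# HZ=250 -> 30fps remainder: 10, 25fps remainder: 0
# HZ=300 -> 30fps remainder: 0, 25fps remainder: 0
# HZ=1000 -> 30fps remainder: 10, 25fps remainder: 0
```

Of the three, only 300Hz divides evenly into both frame rates; 1000Hz trades that for a finer 1ms tick and snappier desktop response.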

I opted for fresh reinstalls for my laptop and HTPC as I ran into issues with the upgrade and then merely reinstalled all my applications and such through my scripts, as I always keep my /home on a separate partition just in case I have to reinstall things.

I am working on upgrading my servers from 14.04 LTS to 16.04 LTS as well, by Q2 of 2017, after lots of testing. Some servers will probably be rebuilt from scratch. However, I am in no rush, as we all have until 2019, when Ubuntu 14.04 LTS reaches end of life.

Internet Naming System to be Privatized

This article was originally published on Being Libertarian reposted here with permission

The Internet… It’s amazing, isn’t it? How one small innovation from the Defense Advanced Research Projects Agency (DARPA), which led to the Internet Protocol (IP) system we use today, was taken by the private sector, thrown into warp drive, and brought us into a whole new cyber world. There is no denying that the Internet as we know it today is almost entirely a product of private sector innovation, with roughly 99% of it built on top of the underlying IP model.

So, how exactly does the Internet naming system work? When you enter http://facebook.com in a browser, you get the Facebook homepage. In order for that to happen, the address facebook.com has to be translated into a format that’s understood by the computers around the world that deliver the page to you. This format is known as an IP address, and for facebook.com, one of these addresses is 66.220.146.36. This is essential to how the Internet operates, and it is also why one US agency or another has been in charge of the Internet naming system pretty much since its founding, with the role currently falling to the National Telecommunications and Information Administration (NTIA), part of the Department of Commerce.

It is amazing how most of the Internet today is governed by standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C), the XMPP Standards Foundation (XSF), and others, which are made up of engineers and companies all voluntarily working together to set forth new industry standards so everything is compatible with one another. Since the creation of the Internet – aside from some bonehead moves by the FCC – we have slowly been seeing the government release control of the Internet to the private sector. Now the government has finally decided it is time for the Internet naming system to be free from all direct US government control, with all of the control being delegated to a non-profit entity known as the Internet Corporation for Assigned Names and Numbers (ICANN), based in California. The deal was finalized on August 16th, with the NTIA taking the final steps toward not renewing its contract with ICANN, a zero-cost contract the two have had since 1998.

This new era is set to officially begin on October 1st. The most important thing is the handover will not affect the estimated 3.5 billion Internet users. This is because the US role was mostly administrative, rather than hands on, leaving ICANN to do all the actual day-to-day work on behalf of the government. This has not come as a surprise to anyone, as the NTIA voluntarily triggered this course of events back in March of 2014. ICANN has since set up their own various bodies and committees to finalize the transition plan following 33,000 emails and 600 meetings.

This has become a very important point due to Edward Snowden’s revelations about the scope of the US government’s invasion of privacy, which raised concerns about the US government having control over key Internet infrastructure and prompted calls for the Internet to be more globalized for the sake of freedom online. China and Russia have both called for the system to be overseen by an even bigger government body that might have been worse for us all, the United Nations International Telecommunication Union, which would not be afraid to curb the rights of some to acquiesce to the desires of a few countries with oppressive regimes; recall that the UN allowed Saudi Arabia, a country with many human rights violations, to head a UN Human Rights Panel.

ICANN being selected is a much better outcome for us all, as private organizations have consistently shown themselves to be more nimble and flexible than a government body run by bureaucrats. Once the handover is completed, ICANN, a “multi-stakeholder” non-profit organization whose roster includes tech giants, individuals, governments, and other parties with an interest in the Internet naming system, will take over the reins. The US government itself has even performed a study showing the chances of ICANN being steered by a government with its own agenda to be “extremely remote”.

In conclusion, the beginning of October marks the start of a new era of more freedom on the Internet. We can rest easier knowing the Internet naming system is out of the hands of a single government, or even worse, the highly politicized and polarized United Nations, and instead in the hands of the private sector.

Perspectives: DNC Email Leak

This article was originally published on Being Libertarian reposted here with permission

Being Libertarian Perspectives will serve as a weekly, multi-perspective opinion and analysis piece by members of Being Libertarian’s writing team. Every week the panel, comprised of randomly selected writers, will answer a question based on current events or libertarian philosophy. Managing Editor Dillon Eliassen will moderate and facilitate the discussion.

Perspectives 1

Dillon Eliassen: What do you think is the most shocking or profound tidbit found in the Democratic National Committee email leak?

Alon Ganon: Where to begin? I personally have about 90+ emails uncovered on my blog. We have DNC members shooting a horse for insurance, a lollipop reference as in lolicon, racism, some homophobic comments, and some anti-Semitism sprinkled in when they were annoyed at mentioning Yom HaShoah, the day of Holocaust remembrance. Then there’s the collusion with the media… I could go on, there is just so much. So what I would say is most shocking is the sheer size and scope of how bad it all actually is.

It was a horribly set-up network. It appears to be all Windows-based, using Microsoft Exchange, which Snowden revealed Microsoft sits on the exploits of and hands to the NSA on a silver platter, leaving millions vulnerable. So Big Government, in a way, had a hand in this leak. If they had been using a proper UNIX/UNIX-like system, as the majority of the IT world does for network-connected services, this could have been avoided. It’s why all of the servers I set up use GNU/Linux. For example, Windows most often uses password authentication. We use RSA keys, which would take even the NSA some time to crack for administrative access, unless they have physical access to my laptop or the encrypted backup. The funny part is both Clinton and the DNC used Microsoft Exchange, and that was the Achilles’ heel in both attacks.
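As a sketch of that key-only setup (assuming stock OpenSSH; these are the standard sshd_config directives, not the exact settings on our servers):

```
# /etc/ssh/sshd_config (excerpt): refuse passwords, allow only public keys
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

On the client side you generate a keypair with something like `ssh-keygen -t rsa -b 4096`, install the public half on the server with `ssh-copy-id user@server`, and restart sshd after editing the config.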

However, I found the most interesting thing about #DNCLeak was actually the after-effect. See, they immediately pointed the finger at the Russians. I have asked dozens of friends of mine in the IT world across the political spectrum, and no one is convinced the Russian government is behind it. However, it’s interesting to note that Clinton is going after Russia, saying they are working with Trump, when we have confirmation she has received money in exchange for some deals with them. I would also like to note the FBI was so sure of themselves when Sony was breached that it was North Korea. However, it was later revealed that this may not have been the case at all, as it appeared to be an inside job. So how do we know it wasn’t a disgruntled intern or someone like that?

Dillon: I also think it was a disgruntled employee. And again, they are shooting themselves in the foot by blaming Russia, because it gives credence to the assertion that Russia went after Hillary’s private server.

Alon: Apparently the only “evidence” they have of it being Russia, to my knowledge, is an IP address (which, we should note, the Supreme Court says is not enough for a warrant) and some metadata in a document in Russian. That’s hardly a smoking gun.

If anything, this situation has revealed the IT department of the Democrats to be as incompetent as the politicians they support.

If I were to sum up this whole situation in one single word as an IT person, it would be “incompetence.”

Dillon: I enjoy the emails sent to Chuck Todd to get him to intercede on behalf of Debbie Wasserman Schultz, Hillary Clinton and the DNC to get MSNBC Morning Joe host Mika Brzezinski to stop criticizing them for being unfair to Bernie Sanders. I don’t believe Todd actually confirmed to someone in the Hillary campaign that he reached out to Brzezinski, so he might be in the clear as far as journalistic ethics go. And I don’t think it’s that terrible that DWS and her minions approached Todd to act on their behalf. What I wonder is why wouldn’t they ask to respond to Brzezinski’s allegations themselves by appearing on Morning Joe, or going on Meet The Press? Also, I think it’s foolish and risky on the campaign/DWS’s part, because what if Brzezinski got all bent out of shape and did a segment on Morning Joe saying she was approached by DWS and Hillary to not be so critical of them? Journalists hate being told what they can and can’t say, and they have a platform to antagonize their antagonizers. It would be like kicking a hornet’s nest!

Brandon Kirby: I was concerned about the philosophical implications for the way people think; the media’s involvement with the Democratic Party was alarming. I’ve seen media stories about situations I was close to that were false narratives perpetuating biases rather than reality. I watched a 6-minute story that did this, then I multiplied that by 10 to imagine (I’ll admit my thought experiment was imagination rich and empirical data poor) how many false narratives were being consumed by the viewer in an hour-long program, and then again by 365, and it’s a horrifying prospect to think of people walking around in society guided by these falsehoods. It’s similar to Plato’s cave, where they’re seeing a shadowy blur of reality constructed by a bias. As horrifying as that was, it became more horrifying knowing the politicians are the ones creating the narrative. It’s nothing short of an Orwellian nightmare.

John Engle: I think the main revelation will be to wake progressives up to the bad faith in which the DNC operates. It’s a process that has been starting, and the hard core of the Sandinista movement seems to have seen it pretty clearly at the convention. The news media, film, TV, etc. all contribute to the notion that the Right operates in bad faith, more interested in the dollars from rich corporate interests than in actually serving the people. They portray the Democrats and the Left, on the other hand, as good faith actors. When something goes wrong policy-wise, it is chalked up to unintended consequences rather than malice. What these emails reveal clearly is what anyone who follows politics understands: both sides are entrenched interests that are largely interested in perpetuating themselves and their privileges. The act of public service is a secondary value at best.

Ni Ma: Charles Krauthammer suggested that Trump’s statement asking Russia to find Hillary’s emails may have been a trap, since Clinton claims those were all private. So there would be no implication to national security if they were all private. Yet Democrats complain about Trump jeopardizing national security. Not sure if there’s validity to it, but I found it to be an interesting hypothesis.

John: I’ve seen that as well. Even if he didn’t plan it that way, it will have that impact for him. Can’t be a better result from Trump’s perspective, because he will be able to turn it on them so easily. She freaks out over his one off the cuff remark and thinks we plebs should shut up about the hundreds of deleted emails.

Alon: I will say this, as an IT person: this has been the best comedy show for me. I have actually been using the DNC leak as an example for my clients of the weaknesses of Microsoft software. Unfortunately, as was pointed out to me, the US government seems to have a crony deal with Microsoft, requiring Microsoft software on their computers and their contractors’ computers. To me this is a blatant example of how crony capitalism damages everyone.

I would like to see the US government actually read Eric S. Raymond’s The Cathedral and the Bazaar, because they need to implement its lessons properly; relying on a corporation with a dedicated team of a few hundred to fix all issues is clearly showing its strain. Linus’s Law, named after Linus Torvalds, the founder of the Linux kernel, states “given enough eyeballs, all bugs are shallow”; or more formally: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.” It is a well-cited law in the tech community pointing to the flaw of Microsoft software, or really any proprietary software.
I cite it further as an example of cronyism damaging government via proprietary contracts for public, non-defense systems. Defense software is at least shielded by obscurity, but public services usually run on a common platform, so a voluntarist structure is more beneficial, as we can see in real-world practice with Free, Libre, and Open Source Software (FLOSS), e.g. GNU/Linux, Firefox, Bitcoin, WordPress, email, and most fundamental services we rely on but don’t think about in our day-to-day cyber lives.


Dear Journalists of #DNCLeak

Dear Journalists of #DNCLeak,
Stop lying that it was the Russians who cracked the security. First, let me give you an IT 101 lesson in spoofing your location. We have what is called a Virtual Private Network (VPN), which allows us to have a virtual presence in a remote network. It creates an encrypted tunnel to a location and proxies your traffic through there, so geographically it appears as if you are somewhere else. That’s why some of my friends have noticed my location shows as New York City instead of Ohio, where I actually live. So what exactly am I getting at here? Cyber security is tricky business, and we can’t just trust that because an attack appears to come from Russia, it was in fact Russia. So the mere fact they immediately began pointing the finger at Russia suggests this is spin rather than fact. This is borne out by the constant touting of “suspicion” as fact in many of the articles.

If we recall, Sony was cracked and even the FBI suspected North Korea of doing it (1), as the attack appeared to trace back through China, which is where North Korea proxies its internet. However, as Time magazine wrote, it turns out the evidence after deep investigation suggests they were completely wrong, and it seemed to be a disgruntled employee (2).

To be completely honest, the most disgusting fact about this entire situation is that it shows how corrupt the media still is. We are seeing this as they cover up and spin the story, rather than using the emails to show how bad the DNC and Hillary Clinton are. But then again, the media has historically always thrown the left a soft pitch compared to the right. I say this as someone who is neither a Republican nor a Democrat. Please, for once in your careers, be honest with the people and report the story as it happened, not as you want them to hear it.

Sincerely,
Alon Ganon
An Honest Journalist and CTO of Being Libertarian LLC

(1) http://www.usatoday.com/story/news/nation-now/2014/12/18/sony-hack-timeline-interview-north-korea/20601645/
(2) http://time.com/3649394/sony-hack-inside-job-north-korea/