Moving to Xubuntu 16.04 for my personal items

Well, my Home Theater PC is just about finished. I just installed Samba, enabled OpenSSH with key-based login, along with my other goodies. Now I'm doing the final tweaks to smooth out video playback via xorg.conf; hopefully all the settings will carry over perfectly, since this is the exact same GPU as before.
But my Nvidia driver is now 361 instead of the 358 I was using on 14.04.
I am just hoping I don't have to recompile a kernel for the "Intel Core 2 or newer" CPU option and to bump the system timer from 250Hz to at least 300Hz, which divides evenly into both 30fps (US NTSC video) and 25fps (European PAL). If it comes to that, I will go straight to a 1000Hz timer for faster responsiveness, since it's a desktop, and it still divides evenly into 25fps in case I take the machine overseas. If I do have to recompile for the Home Theater PC, I am merely going to rebuild from the Ubuntu sources rather than directly from upstream. My laptop will be getting a custom kernel either way, though, as it doesn't have the proprietary GPU for smoother video playback.
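For reference, if a rebuild does prove necessary, these choices live in the kernel .config. A sketch of the symbols involved (names as they appear in stock Ubuntu kernel sources; worth double-checking against your own tree):

```
# Processor family: Core 2/newer Xeon
CONFIG_MCORE2=y

# Timer frequency: 300 Hz divides evenly into both 25 fps and 30 fps
# CONFIG_HZ_250 is not set
CONFIG_HZ_300=y
CONFIG_HZ=300
```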

I opted for fresh reinstalls on my laptop and HTPC, as I ran into issues with the in-place upgrade, and then simply reinstalled all my applications through my scripts. I always keep /home on a separate partition just in case I have to reinstall.

I am also working on upgrading my servers from 14.04 LTS to 16.04 LTS by Q2 of 2017, after lots of testing. Some servers will probably be rebuilt from scratch. However, I am in no rush, as we all have until April 2019, when Ubuntu 14.04 LTS reaches end of life.

Internet Naming System to be Privatized

This article was originally published on Being Libertarian; reposted here with permission.

The Internet… it's amazing, isn't it? One small innovation from the Defense Advanced Research Projects Agency (DARPA), which led to the Internet Protocol (IP) system we use today, was taken by the private sector, thrown into warp drive, and brought us into a whole new cyber world. There is no denying that the Internet as we know it today is almost entirely a product of private sector innovation, which built about 99% of it on top of the underlying IP model.

So, how exactly does the Internet naming system work? When you enter facebook.com in a browser, you get the Facebook homepage. In order for that to happen, the address has to be translated into a format that's understood by the computers around the world which deliver the page to you. This format is known as an IP address, and facebook.com maps to one of these addresses. This translation is essential to how the Internet operates, and it is also why one US agency or another has been in charge of the Internet naming system pretty much since its founding, with the role currently falling to the National Telecommunications and Information Administration (NTIA), part of the Department of Commerce.

It is amazing how most of the Internet today is governed by standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C), the XMPP Standards Foundation (XSF), and others, all made up of engineers and companies voluntarily working together to set new industry standards so everything stays compatible. Since the creation of the Internet – aside from some bonehead moves by the FCC – we have slowly seen the government release control of the Internet to the private sector. Now the government has finally decided it is time for the Internet naming system to be free from all direct US government control, with responsibility delegated to a non-profit entity known as the Internet Corporation for Assigned Names and Numbers (ICANN), based in California. The deal was finalized on August 16th by the NTIA, which took its final steps by simply choosing not to renew its contract with ICANN, a contract it had held since 1998 (the contract between the US government and ICANN was a zero-cost one).

This new era is set to officially begin on October 1st. Most importantly, the handover will not affect the estimated 3.5 billion Internet users, because the US role was mostly administrative rather than hands-on, leaving ICANN to do all the actual day-to-day work on behalf of the government. This has not come as a surprise to anyone, as the NTIA voluntarily triggered this course of events back in March of 2014. ICANN has since set up its own bodies and committees to finalize the transition plan, following some 33,000 emails and 600 meetings.

This has become a very important topic due to Edward Snowden's revelations about the scope of the US government's invasion of privacy, which raised concerns about the US government controlling key Internet infrastructure and prompted calls for the Internet to be more globalized for the sake of online freedom. China and Russia have both called for the system to be overseen by an even bigger government body that might have been worse for us all: the United Nations International Telecommunication Union, which would not be afraid to curb the rights of some to acquiesce to the desires of a few countries with oppressive regimes – as when the UN allowed Saudi Arabia, a country with many human rights violations, to head a UN Human Rights Council panel.

ICANN being selected is a much better outcome for us all, as private organizations have consistently shown themselves to be more nimble and flexible than government bureaucracies. Once the handover is complete, ICANN – a "multi-stakeholder" non-profit whose roster includes tech giants, individuals, governments, and other parties with an interest in the Internet naming system – will take over the reins. The US government itself has even performed a study showing the chances of ICANN being steered by a government with its own agenda to be "extremely remote."

In conclusion, the beginning of October marks the start of a new era of greater freedom on the Internet. We can rest easier knowing the Internet naming system is out of the hands of a single government – or worse, the highly politicized and polarized United Nations – and instead in the hands of the private sector.

Perspectives: DNC Email Leak

This article was originally published on Being Libertarian; reposted here with permission.

Being Libertarian Perspectives will serve as a weekly, multi-perspective opinion and analysis piece by members of Being Libertarian's writing team. Every week the panel, composed of randomly selected writers, will answer a question based on current events or libertarian philosophy. Managing Editor Dillon Eliassen will moderate and facilitate the discussion.


Dillon Eliassen: What do you think is the most shocking or profound tidbit found in the Democratic National Committee email leak?

Alon Ganon: Where to begin? I personally have 90+ emails uncovered on my blog. We have DNC members shooting a horse for insurance, a lollipop reference (as in lolicon), racism, some homophobic comments, and some anti-Semitism sprinkled in when they were annoyed at having to mention Yom HaShoah, the day of Holocaust remembrance. The collusion with the media… I could go on; there is just so much. So what I would say is most shocking is the size and scope of how bad it actually all is.

It was a horribly set up network. It appears to be all Windows-based, using Microsoft Exchange, which Snowden revealed Microsoft sits on the exploits of and hands to the NSA on a silver platter, leaving millions vulnerable. So Big Government, in a way, had a hand in this leak. If they had been using a proper UNIX or UNIX-like system, as the majority of the IT world does for network-connected services, this could have been avoided. It's why all of the servers I set up use GNU/Linux. For example, Windows most often uses password authentication; we use RSA keys, which would take even the NSA quite some time to crack for administrative access, unless they had physical access to my laptop or the encrypted backup. The funny part is that both Clinton and the DNC used Microsoft Exchange, and it was the Achilles' heel in both attacks.
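For anyone wanting to replicate that key-only setup, here is a minimal sketch of the relevant /etc/ssh/sshd_config directives (these are standard OpenSSH options; generate your key pair with ssh-keygen first, and confirm you can log in with the key before disabling passwords):

```
# Disable password logins entirely; only public-key auth is accepted
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes

# Don't allow direct root logins over SSH
PermitRootLogin no
```

Reload sshd after editing, and keep a session open while you test so you can't lock yourself out.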

However, I found the most interesting thing about #DNCLeak was actually the aftermath. They immediately pointed the finger at the Russians. I have asked dozens of friends in the IT world, across the political spectrum, and no one is convinced the Russian government is behind it. It's also interesting that Clinton is going after Russia, saying they are working with Trump, when we have confirmation she has received money in exchange for deals with them. I would also note that the FBI was just as sure of itself when Sony was breached that it was North Korea; it was later revealed that may not have been the case at all, as it appeared to be an inside job. So how do we know it wasn't a disgruntled intern or someone similar?

Dillon: I also think it was a disgruntled employee. And again, they are shooting themselves in the foot by blaming Russia, because it gives credence to the assertion that Russia went after Hillary’s private server.

Alon: Apparently the only "evidence" they have of it being Russia, to my knowledge, is an IP address – which, we should note, the Supreme Court says is not enough for a warrant – and some metadata in a document in Russian. That's hardly a smoking gun.

If anything, this situation has revealed the Democrats' IT department to be as incompetent as the politicians they support.

If I were to sum up this whole situation in one single word as an IT person, it would be “incompetence.”

Dillon: I enjoy the emails sent to Chuck Todd to get him to intercede on behalf of Debbie Wasserman Schultz, Hillary Clinton and the DNC, to get MSNBC Morning Joe host Mika Brzezinski to stop criticizing them for being unfair to Bernie Sanders. I don't believe Todd actually confirmed to anyone in the Hillary campaign that he reached out to Brzezinski, so he might be in the clear as far as journalistic ethics go. And I don't think it's that terrible that DWS and her minions approached Todd to act on their behalf. What I wonder is why they wouldn't respond to Brzezinski's allegations themselves by appearing on Morning Joe or Meet The Press. I also think it's foolish and risky on the campaign's and DWS's part, because what if Brzezinski got all bent out of shape and did a segment on Morning Joe saying she was approached by DWS and Hillary to not be so critical of them? Journalists hate being told what they can and can't say, and they have a platform to antagonize their antagonizers. It would be like kicking a hornet's nest!

Brandon Kirby: I was concerned about the philosophical implications for the way people think; the media's involvement with the Democratic Party was alarming. I've seen media stories about situations I was close to that were false narratives, perpetuating biases rather than reality. I watched a six-minute story that did this, then multiplied that by 10 to imagine (I'll admit my thought experiment was imagination-rich and empirical-data-poor) how many false narratives were being consumed by the viewer in an hour-long program, and then again by 365, and it's a horrifying prospect to think of people walking around in society guided by these falsehoods. It's similar to Plato's cave, where they're seeing a shadowy blur of reality constructed by a bias. As horrifying as that was, it became more horrifying knowing the politicians are the ones creating the narrative. It's nothing short of an Orwellian nightmare.

John Engle: I think the main revelation will be to wake progressives up to the bad faith in which the DNC operates. It's a process that has already started, and the hard core of the Sanders movement seems to have seen it pretty clearly at the convention. The news media, film, TV, etc. all contribute to the notion that the Right operates in bad faith, more interested in dollars from rich corporate interests than in actually serving the people, while portraying the Democrats and the Left as good-faith actors. When something goes wrong policy-wise, it is chalked up to unintended consequences rather than malice. What these emails reveal clearly is what anyone who follows politics understands: both sides are entrenched interests largely concerned with perpetuating themselves and their privileges. Public service is a secondary value at best.

Ni Ma: Charles Krauthammer suggested that Trump’s statement asking Russia to find Hillary’s emails may have been a trap, since Clinton claims those were all private. So there would be no implication to national security if they were all private. Yet Democrats complain about Trump jeopardizing national security. Not sure if there’s validity to it, but I found it to be an interesting hypothesis.

John: I've seen that as well. Even if he didn't plan it that way, it will have that effect for him. It couldn't be a better result from Trump's perspective, because he will be able to turn it on them so easily. She freaks out over his one off-the-cuff remark and thinks we plebs should shut up about the hundreds of deleted emails.

Alon: I will say this as an IT person: this has been the best comedy show for me. I have actually been using the DNC leak as an example for my clients of the weaknesses of Microsoft software. Unfortunately, as was pointed out to me, the US government seems to have a crony deal with Microsoft that requires Microsoft software on its computers and its contractors' computers. To me this is a blatant example of how crony capitalism damages everyone.

I would like to see the US government actually read Eric S. Raymond's The Cathedral and the Bazaar, because they need to implement its lessons properly; relying on a corporation with a dedicated team of a few hundred to fix all issues is clearly showing its strain. Linus's law, named after Linus Torvalds, the founder of the Linux kernel, states that "given enough eyeballs, all bugs are shallow" – or, more formally: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone." It is a well-cited law in the tech community pointing to the flaw in Microsoft software, or really any proprietary software.
I cite it further as an example of cronyism damaging government via proprietary contracts for public, non-defense systems. Defense software at least gains some protection from obscurity, but public services usually run on common platforms, so a voluntarist structure is more beneficial, as we can see in real-world practice with Free, Libre, and Open Source Software (FLOSS), e.g. GNU/Linux, Firefox, Bitcoin, WordPress, email, and most of the fundamental services we rely on but don't think about in our day-to-day cyber lives.

Nginx Filters for Fail2Ban

While building my other servers I was double-checking fail2ban configurations and noticed there are no fail2ban settings for nginx, even though my webmail runs on it. I'm not sure if it's an issue, but I was hoping someone could tell me if I am on the right track, or if it's not even necessary.

I did this for my email server, which runs nginx as the web server.

In /etc/fail2ban/jail.local, add two jail sections:

[nginx-http-auth]
enabled  = true
filter   = nginx-http-auth
port     = http,https
logpath  = /var/log/nginx/error.log

[nginx-badbots]
enabled  = true
port     = http,https
filter   = nginx-badbots
logpath  = /var/log/nginx/access.log
maxretry = 2

Then change to the filter directory and open the http-auth filter:
cd /etc/fail2ban/filter.d
sudo nano nginx-http-auth.conf

Make sure it looks like the below:


[Definition]

failregex = ^ \[error\] \d+#\d+: \*\d+ user "\S+":? (password mismatch|was not found in ".*"), client: <HOST>, server: \S+, request: "\S+ \S+ HTTP/\d+\.\d+", host: "\S+"\s*$
            ^ \[error\] \d+#\d+: \*\d+ no user/password was provided for basic authentication, client: <HOST>, server: \S+, request: "\S+ \S+ HTTP/\d+\.\d+", host: "\S+"\s*$

ignoreregex =

Then copy the badbots config from the apache version:
sudo cp apache-badbots.conf nginx-badbots.conf

Automatic Filters for IPTables Firewall

I have been building servers for quite some time, and if you have been operating servers for a while, you know about attempted intrusions. I have been using Fail2Ban and UFW on my Ubuntu servers for quite some time, and they work rather well: I have them automate the job of managing iptables, which can be rather cumbersome, especially for IT people whose specialty may not be firewalls. So I have been looking for ways to automate more of my job. My favorite tools thus far include:

  1. Fail2Ban – scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs: too many password failures, probing for exploits, etc. Fail2Ban is generally then used to update firewall rules to reject those IP addresses for a specified amount of time, although any other arbitrary action (e.g. sending an email) can also be configured. Out of the box, Fail2Ban comes with filters for various services (apache, courier, ssh, etc.).
  2. UFW – Uncomplicated Firewall, the default firewall configuration tool for Ubuntu. Developed to ease iptables firewall configuration, UFW provides a user-friendly way to create an IPv4 or IPv6 host-based firewall. By default UFW is disabled.
    Gufw is available as a GUI frontend.
  3. The blacklist service – a free and voluntary service provided by a fraud/abuse specialist, whose servers are often attacked via SSH, mail login, FTP, web server, and other services.
    Its mission is to report all attacks to the abuse departments of the infected PCs/servers, to ensure that the responsible provider can inform the customer about the infection and disable it.

It's rather easy to set these up to update iptables via a simple daily crontab entry, which will sync with the blacklist service.

First become root
sudo -i

Then download the script to cron.daily and make it executable:
curl -s > /etc/cron.daily/sync-fail2ban

chmod a+x /etc/cron.daily/sync-fail2ban

Optional but Recommended, Initial run manually:
time /etc/cron.daily/sync-fail2ban

Tomorrow, check your /tmp/iptables.fail2ban.log file to see who's been blocked.
The lists you download are stored locally at /etc/fail2ban/blacklist.*.
Your server should now be a little more secure, with a few thousand new IP addresses added to your iptables rules.
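Under the hood, scripts like this essentially expand a downloaded blacklist (one IP address per line) into iptables rules. A simplified, hypothetical sketch of that step – the file path, chain name, and IPs below are made up for illustration, and the real script's details may differ:

```shell
# Fake blacklist file standing in for the downloaded /etc/fail2ban/blacklist.* lists
printf '198.51.100.7\n203.0.113.42\n' > /tmp/blacklist.example

# Generate one DROP rule per blacklisted IP (printed here, not applied)
awk '{ print "iptables -A fail2ban-blacklist -s " $1 " -j DROP" }' /tmp/blacklist.example
```

In the real script the generated rules are loaded into their own chain, so they can be flushed and refreshed on each daily run without touching your other rules.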

Install the latest Mozilla Thunderbird or Firefox in Ubuntu GNU/Linux

I ran into an issue with Mozilla Thunderbird today after finishing the setup of my new email, contact, and calendar server with Mail-in-a-Box. I went to add the Lightning extension for calendars and, lo and behold, found my Thunderbird (the one that came in the default Xubuntu repos for 14.04 LTS) was out of date and not supported by Lightning. The Ubuntu repos had version 38.8, but what version was Mozilla itself at? 45.1 as of this post. So I quickly installed the latest binary, but I tend to be forgetful about updates, so I wanted to tie it into the apt package manager, and I found a repository that works.

First, if Thunderbird is installed, remove it, and maybe back up your .thunderbird folder just in case. You shouldn't have to worry about losing any data, though.

sudo apt-get remove -y thunderbird

Next we need to add a new repository called Ubuntuzilla, so edit your sources.list. I used nano for this, but feel free to use whatever you like.

sudo nano /etc/apt/sources.list
add to the end
deb all main

or you can do that all with one command
echo -e "\ndeb all main" | sudo tee -a /etc/apt/sources.list > /dev/null

Then grab the signing key and update:
sudo apt-key adv --recv-keys --keyserver C1289A29
sudo apt-get update

Install your desired package, with one of the following commands:
sudo apt-get install firefox-mozilla-build
sudo apt-get install thunderbird-mozilla-build
sudo apt-get install seamonkey-mozilla-build


How to make “WHOIS” work with new TLD’s e.g. *.xyz, *.online

I have been building a lot of servers, and I generally like to segment them across different domains, but by default whois only works with *.com, *.info, *.net – the usual TLDs you think of. Now there are so many new ones I like to scoop up, and I still want to test my server settings with whois. Have no fear: on the Xubuntu 14.04 LTS machine I use every day, you simply create the file whois.conf in /etc/. Use your favorite text editor and paste in this file to get any new TLD resolved.
Open nano (or whatever text editor you prefer):
sudo nano /etc/whois.conf

Once inside your text editor, paste in this list (it is very long, so I added a read-more section you will need to open to see the entire thing):

# WHOIS servers for new TLDs (
# Current as of 2015-09-12
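For anyone curious about the file's shape before opening the full list: each line pairs a pattern with the WHOIS server to use for domains matching it. A couple of illustrative entries (the server names here follow the common whois.nic.<tld> convention for new gTLDs; check them against the full list rather than trusting my examples):

```
\.xyz$     whois.nic.xyz
\.online$  whois.nic.online
```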

Continue reading “How to make “WHOIS” work with new TLD’s e.g. *.xyz, *.online”

How to Make Super Secure Passwords Easily with One Command

We all know that when it comes to security, a strong password is the most important thing. However, remembering a complex password is always the toughest part, especially for system administrators, whose passwords are usually the most vital of anyone's in the company. When it comes to telling people they need complex passwords, what always comes to mind is the famous xkcd comic about passwords.


As the comic's bottom text suggests, we have reached the point where passwords are hard for humans to remember but easy for computers to guess. So what's the solution? What I do as a GNU/Linux person is use the checksum commands already built in – sha1sum, sha224sum, sha256sum, sha384sum, and sha512sum – to generate very strong passwords.

First, pick a random word or phrase. Remember that capitalization, spaces, and so on will always affect the resulting sum. Let's start with sha1sum, which produces the shortest output, using the word "password" as our example throughout this tutorial:

echo "password" | sha1sum

Using the word "password", it spits out the SHA-1 sum of the word, and we now have a very complex password. Now let's try it with SHA-256:

echo "password" | sha256sum

As we move up to SHA-256, the output sum gets longer, and a longer password brings even more security. Now let's try SHA-512:

echo "password" | sha512sum

So now we see the output is incredibly long and complex. This is a great way to make incredibly secure passwords.
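One practical tweak, since many sites cap password length: pipe the sum through cut to keep a fixed number of characters. A quick sketch (the passphrase and the 32-character cutoff are arbitrary choices of mine, not part of the method above):

```shell
# Hash a memorable phrase, then keep only the first 32 hex characters
echo "correct horse battery staple" | sha256sum | cut -c1-32
```

The same phrase always yields the same output, so you only need to remember the phrase and the length you chose.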



Redis Caching with OwnCloud

While setting up an OwnCloud server for my company, I couldn't find a good way to cache: the Ubuntu repos carry an old version of Redis, which meant it couldn't be used for best performance and stability. I tried installing it manually from some guides I found, consulted OwnCloud's documentation, and had been running a combination of APCu and an older Redis, until I stumbled upon a guide that actually resolved my issue with the old Redis and dramatically sped up my server.

This guide is also scripted for an automated install, you can download the script here.

    $~: sudo php5dismod apcu && sudo apt-get purge php5-apcu -y
    $~: sudo rm /etc/php5/mods-available/apcu-cli.ini
    $~: sudo apt-get purge --auto-remove memcached -y && sudo php5dismod memcached
    $~: sudo apt-get update && sudo apt-get install build-essential -y

Continue reading “Redis Caching with OwnCloud”

Tip for OwnCloud

I was building my OwnCloud file storage on Ubuntu 14.04 LTS (upgrading to 16.04.1 LTS this summer). If you haven't heard of OwnCloud, definitely check it out: it is an amazing cloud storage program that you control yourself. It even offers server-side encryption and tons of options to make it work the way you want for you or your company.

But I kept running into an .htaccess issue; I modified Apache over and over and it still appeared. I finally stumbled across my fix: move the OwnCloud data directory out of the default location. Here are the steps I took.

Stop apache2

sudo service apache2 stop

Edit config file in default location

sudo nano /var/www/html/owncloud/config/config.php

Change default location to new location

(pick one; I chose /mnt/owncloud_data, but you can put it anywhere you like)
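Concretely, that means changing the 'datadirectory' entry in config.php. A sketch of the relevant fragment (the key name is OwnCloud's standard config option; the path is just my choice):

```php
<?php
// /var/www/html/owncloud/config/config.php (excerpt)
$CONFIG = array (
  // ...leave your existing settings as they are...
  'datadirectory' => '/mnt/owncloud_data',
);
```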

Move the data folder to new location

sudo mv /var/www/html/owncloud/data /new/data/directory/here

If required, change permissions

sudo chown -R www-data:www-data /new/data/directory/here

Restart apache2

sudo service apache2 start

Voila, the .htaccess issue is GONE!