Thursday, December 25, 2014

CLI Mail with Postfix and Fetchmail

After a few attempts at getting this to work across a couple of different platforms, both on a VPS and on a residential account, I felt it prudent to write a wrap-up of the hurdles I went through, in the hope of sparing others the 20 web articles that each only cover part of the overall process. For this article I'm just going to cover getting this done at home, even though that's the more complicated route since you have to skate around all the blockages and BS some ISPs put on residential accounts. If you aren't behind a controlling, micromanaging, BS ISP then your settings might end up being a little different, but the details here will probably either work or get you really close.


First and foremost, everything that is getting done here is completely legit from a security perspective. If you're reading this tutorial with the mindset that you're going to create your own mail server so you can hide on the internet from responsibility, for the purpose of cyberstalking or bullying, then:

DON'T 


Many people will wonder why bother with learning email from the command line when there are web clients and desktop clients to handle this work for them. Well, because not every environment offers web access to necessary resources, nor a graphical interface, yet messaging still needs to occur. People who spend a lot of time working in server environments that do not have graphical engines need to learn how to do email this way. It's also how email started out, for those who are nostalgic about tech and want to play with how things were done "back in the day". Everyone has their own reason for doing things; it's not my place to judge as long as it's not causing harm to others.


It's expected that Postfix is at least installed but not necessarily configured yet. This tutorial covers the necessary settings to use GMail as our transport. It is also recommended that a person have their own private domain to work with as an address, since certain aspects of Postfix will require a "personal" touch to fly. After all, this is an actual MTA/MDA service we are setting up, even if it has to use a third party as a crutch to work. Many aspects of this configuration are interdependent but, when tied together, it all works.


First off, a legit email address is necessary for authentication purposes. Because of the strict security policies now in place for email authentication, including SSL and TLS, this needs to be something that can be verified. The easiest way to do this is just create a GMail account that will be used for authentication. Other accounts will work, as long as it's a legit account. When I say legit, I mean an email account tied to a real domain or company, not some fabricated address intended for spamming or BS activities on the web. So for this tutorial we'll use:


transport@gmail.com (replace transport with whatever you picked out).


Remember that this account is just for authentication and pushing the message around. The account will also need to have POP enabled, as that's the protocol we will be working with to fetch our mail. The reason we're going to use POP and not IMAP is that we simply want to download the mails to the machine; we don't need the advanced message synchronization that IMAP provides.


Second, we are going to need another email account based on our domain. This username and domain are going to have to cross-reference identification on our computer and Postfix installation. I'll explain how all this works as we progress so it makes more sense, rather than just throwing a bunch of "do this and do that" out there. For this example we're going to use:


user@customdomain.com


Our "user" should be set to the same value as the user account on the computer where Fetchmail is going to dump emails to. If the computer user account and email account don't jive together, Google will throw a nice error when pop tries to download the emails that basically says "The username is BS and the account is bogus so your request to download emails is prohibited". They'll phrase it in a nicer way but, essentially that's what it means. Think of it as protection against some a-hole trying to download your emails without proper creds.


So for example, your account on your computer is:


john@computer:~$


Then the email account should be:


john@customdomain.com


Once these two email addresses are created through whatever services one uses, we can get to the meat of things which is the Postfix configuration settings. Besides explaining what I did to make this work, I'll also throw down the links I used for guidance. They're actually really good articles and I hope they come in useful.


Using Postfix in the way we are here only relies on a couple of files which really aren't that bad to set up. For starters on this part, we want to make sure we have our hosts file set up. Edit this file and add a line like this:


127.0.0.1 localhost
127.0.1.1 user  <------- this should correlate to the user name of the domain account.

We also need to make sure Postfix can verify the user account from the aliases file which essentially cross references to mailboxes. Edit the aliases file to look similar to this:


# See man 5 aliases for format
postmaster:    root
user:    user


** Make sure to run the newaliases command after adding a new mapping


Now that those two things are out of the way, the SSL portion of the configuration needs to be done, as GMail won't do anything for us without SSL set up. Remember, everything is under strict security, so it's important to follow all the rules if anything is going to work.


Inside the Postfix directory, create a new file to hold the credentials of the email address GMail will use to verify that the email being sent from your computer is legit and uses the proper protocols for their mail networks. Call the file whatever you want but, the very first line of the file should read like this:


[smtp.gmail.com]:587     transport@gmail.com:password

** "password" should be the password to the account we created to verify ourselves to Google **


Save this file and cut back the permissions on it since it holds private information.


$ chown root:root <path/to/file>
$ chmod 600 <path/to/file>


Once this is done, remember to run postmap on the file so that Postfix knows to read it as part of its configuration:


$ postmap <path/to/file>


Next thing to do is verify that SSL is installed and working (more than likely it is but, we'll check anyway). Run this next line from a prompt and there should be a return of a lot of certificate checks on Google's end, then a ready prompt. When it gets to that point, just [Ctrl]+[C] to get out.


$ openssl s_client -connect pop.gmail.com:995


If the last line from the output doesn't look like below, you're going to have to chase down that issue before anything else will fly.


+OK Gpop ready for requests from ....


Once certs are ready to go and all the above files and prep have been established, we can proceed to configuring the Postfix main.cf file. Out of habit and best practice, first make a backup of the original file you can fall back on in case all your settings get out of whack and you need to start over. Trust me, it happens to us all, and it sucks to chase down an original file to find the baseline, so just do it.


$ cp main.cf main_original.cf


Now let's take a look at this file and plug in the values that we need.


smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache


smtp_host_lookup = native  <-- this tells postfix to get the data from hosts
mydomain = customdomain.com  <-- your domain name
myorigin = $mydomain
myhostname = <whatever>  <-- this is just the name of your computer
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = localhost.localdomain, localhost, customdomain.com  <-- last item is necessary
relayhost = [smtp.gmail.com]:587  <-- routes all outgoing mail through GMail's SMTP servers
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/<password_file>
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtp_use_tls = yes


After all this has been set, make sure to reload Postfix by running:


$ postfix reload


Test everything by sending some emails between various accounts or have some friends help you out. At this point, you should be able to send anywhere, then receive via a web interface to the transport@gmail.com account or any desktop mail client. If any errors are thrown, Google search them, and also be sure to run tail -f /var/log/mail.log to see what kinds of errors show up in there. Use keywords from the logs to do various Google searches.
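One quick way to run that test from the shell is to hand a message to the sendmail compatibility wrapper that Postfix installs. This is a sketch with a placeholder recipient address; the message is staged in a file first so you can inspect it, and the line that actually sends it is commented out:

```shell
# Build a minimal test message (recipient is a placeholder).
printf 'Subject: CLI test\nTo: friend@example.com\n\nHello from the command line.\n' > /tmp/cli_mail_test.txt

# Show what will be sent.
cat /tmp/cli_mail_test.txt

# Hand it to Postfix; -t reads the recipient from the To: header.
# sendmail -t < /tmp/cli_mail_test.txt
```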


Fetching mail can be a little fussy but fairly straightforward as long as everything matches and Google doesn't detect anything that seems like BS activity. After installing Fetchmail, create the .fetchmailrc file in your home folder:


$ vim .fetchmailrc


Once this file is created, open, and ready for editing, add the following lines to it. These instruct Google's POP engine to check the transport account for any mail that might be in the inbox and fetch it down to the specified user account for reading from your CLI.


set postmaster "user"  <-- specifies the user account to look for on your computer
set daemon 600  <-- the polling interval in seconds; don't poll more often than every 5 minutes
poll pop.gmail.com proto POP3
user 'transport@gmail.com'  <-- this specifies the address we set as our transporting address
there with password '<password>'  <-- the password to the transport account
is "user" here  <-- looks for the username on the computer
options  <-- security options
ssl
sslcertck
sslcertpath /etc/ssl/certs


Save the file after these settings and change the permissions (Fetchmail insists on chmod 600 or stricter). Run the following command as a test and it should download whatever is in the inbox on the email account, including messages that have already been read.


$ fetchmail -d0 -vk pop.gmail.com


Check your mail log file for any errors, and if any settings are wrong, Google will send a reply email with hints about what needs to get fixed for Fetchmail to work. If it works, it will instantly dump everything into Postfix, and depending on what reader you have installed, you will see all the emails listed there. Once you have it working, I suggest two things:


Make a copy of main.cf like: $ cp main.cf main_it_works.cf
Make a copy of .fetchmailrc like: $ cp .fetchmailrc .fetchmail_working_config


This way, if anything ever gets hosed you can simply copy the backup into a working condition.


The final stage is to start Fetchmail as a background daemon so it checks your account every X seconds you specified (that's what the "set daemon" line is for), so run:


$ fetchmail


That is pretty much it. I tried to keep things as generic as possible since I have no idea what kind of system this might be executed on, and some configuration or commands can be system-specific. Getting Postfix running on a VPS is actually easier, as Fetchmail isn't required and the third-party transport can be eliminated since there's no dancing around BS ISP restrictions. The only catch can be making sure reverse DNS is established and working for the domain, as well as changing the IP addresses in the registrar DNS to point to the new mail server. For the type of install we did here though, it's a lot of work for something trivial like email but the experience is worth it. If you enjoy the nostalgia of CLI mail as well as learning how to do configuration, give it a whirl, and when it works just sit back and smile at your accomplishment.


References:


http://www.stevejenkins.com/blog/2013/06/howto-get-around-comcast-port-25-block-with-a-postfix-server/


http://www.axllent.org/docs/view/gmail-pop3-with-fetchmail/

http://www.postfix.org/BASIC_CONFIGURATION_README.html#myorigin

https://support.google.com/mail/troubleshooter/1668960?hl=en

Monday, May 26, 2014

Multi-Move At The Command Line

For those who spend any time in the Linux command line, it doesn't take long for an individual to build their own little "personal arsenal" of commands that are extremely handy or get used on a regular basis. The "mmv" command is one of those handy little tools that people may not think about, as in many cases it does not come as a standard tool in many distros. Most repositories will have it available though, so it's not that hard to get, and it's well worth the small amount of effort it takes to add it to one's toolbox.

What this command does is allow a user to execute multi-file operations, making it easy to deal with large numbers of files in just a few keystrokes without having to concern themselves with long strings of compounded commands or the extended process of dealing with files on an individual basis. Let's take a look at just a few of the basic things this handy command makes easy for us.

As always, once installed, take a browse through the man page to get an idea of all the different variations that can be used with this command tool. There is actually quite a bit that can be done with this less common command.


One of the more prominent uses for mmv is to modify extensions in bulk so that we can maintain unification throughout our system. In my opinion, having both upper and lower case extensions floating around is absolutely annoying and, as we all know, this can happen from our phones, cameras and all sorts of other devices we may interface with. Keeping extensions all in lowercase also helps identify file types more easily when working from the command line. The screenshot below shows how to fix this problem very easily with a simple command.
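As a sketch of the idea (the file names are made up), with mmv installed the rename is a one-liner: the `*` wildcard in the from-pattern is referenced as `#1` in the to-pattern. The demo below runs in a scratch directory and falls back to a plain-shell loop where mmv isn't available:

```shell
# Lowercase every .JPG extension in a scratch directory.
demo=/tmp/mmv_demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"
touch photo1.JPG photo2.JPG notes.txt

if command -v mmv >/dev/null 2>&1; then
    # '*' is the first wildcard, so '#1' is everything before .JPG
    mmv '*.JPG' '#1.jpg'
else
    # plain-shell equivalent for systems without mmv
    for f in *.JPG; do
        mv "$f" "${f%.JPG}.jpg"
    done
fi
ls
```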


Another useful way of working with large groups of files is renaming. Below shows the results of removing the preceding portion of file names globally where possible. Suppose the month were wrong, the name was wrong, or something else needed to be changed on a large scale.


Global name changing can, of course, be done in the reverse order as well. Performing this action can make identifying the contents of a directory that much easier to deal with. When working with file archives, this comes in very handy, as all the files within a directory archive can quickly and easily be given a matching prefix so that anyone can easily recognize what directory a file came from or should be associated with.


Some other things mmv makes easy include selecting particular naming conventions to be moved to another directory. In the example below, all files starting with either "a" or "A" are being moved from directory "Test2" to directory "Test3", both of which share the same parent directory. This process can make sorting through various files at a granular level very easy to do.
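A sketch of that move (directory and file names are made up): in mmv patterns a bracket expression like `[aA]` counts as a wildcard, so it can be referenced as `#1` and the trailing `*` as `#2`. The demo falls back to a plain-shell loop where mmv isn't installed:

```shell
# Move every file beginning with "a" or "A" from Test2 to Test3.
base=/tmp/mmv_move_demo
rm -rf "$base" && mkdir -p "$base/Test2" "$base/Test3" && cd "$base"
touch Test2/apple.txt Test2/Axe.txt Test2/banana.txt

if command -v mmv >/dev/null 2>&1; then
    # '#1' = the matched [aA] character, '#2' = the rest of the name
    mmv 'Test2/[aA]*' 'Test3/#1#2'
else
    # plain-shell equivalent
    for f in Test2/[aA]*; do
        mv "$f" Test3/
    done
fi
ls Test3
```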


For the last example, mmv can also be used to change the order of the name parts within a large number of files. The man page uses a good example of large collections of music files where someone may want to swap the order of album, artist, and song title. In the example below, I simply changed the order of words associated with one file, using the criteria of "an" to select just the one particular file starting with those two letters to differentiate it from all other files within the same directory.


In closing, mmv is very handy on multiple levels, and particularly handy when organizing archives of files such as multiple file backups that may have taken place over a number of months or years. In any situation where a person needs to deal with a large number of associated files, mmv is an invaluable tool that makes bulk management that much faster and easier to get done.






Wednesday, May 14, 2014

Text Browsing

As many people are aware, there are a number of different browsers a person can use to surf the web. Everyone has their own tastes of what to expect from a browser, or one or two that seem to work best for them and produce the results or provide the type of functionality they require. When the average user thinks of a web browser they might think Chrome, Firefox, Internet Explorer, Safari, Opera, SeaMonkey, Konqueror, Chromium, Canary, or Arora. What many will not think of are text-based browsers, and this opens a whole new world of browsing many may not consider; enter the command line based browser Elinks.

Command line browsers or "text-based" browsers are just as one would expect: no images to get in the way. The following is from the man (manual) page of the Elinks text-based browser.



ELinks is a text mode WWW browser, supporting colors, table rendering, background downloading, menu driven configuration interface, tabbed browsing and slim code. Frames are supported. You can have different file formats associated with external viewers. mailto: and telnet: are supported via external clients.

While sifting through the man page is all fine and good, let's take a look at what working with a text-based browser looks like. Obviously this is not the type of entertainment for everyone, as the majority of people would prefer their standard full-function, graphics-rendering browser. However, for those who spend a lot of time working in the command line, or are curious enough to want to learn how to get as much work done as possible without leaving a terminal, text-based browsers are a neat way to surf without all the added distractions that come with graphical browsers.

When starting Elinks, this is the first screen everyone will see. A person can start here by typing in any web address, which will take them directly to that site, or cancel this action and go directly to the browser's built-in navigation to perform other actions.


For this demonstration, I wanted to use something that most people will immediately recognize even in text mode with no graphics rendering. It’s the same page as one would view from any other browser, but rendered with text only inside a CLI (command line interface).


Welcome to the all familiar Google search page!

Fields can be navigated using the arrow keys; press Enter to select a field, type in text, then press Enter again to submit that text to the web server. Elinks will produce pop-ups asking for confirmation that the following action is really what a person wants to do.

Typing in “text based browsers” into the Google search field produces results that look like the screenshot below.



This probably looks familiar from a layout perspective, however it is missing all the added "bells and whistles" that most people are accustomed to looking at. Working with a text-based browser is really great for data mining though, as it removes all the "distractions" commonly associated with graphical browsers.
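That data-mining angle is also where ELinks shines in scripts: its -dump option renders a page straight to stdout instead of opening the interactive interface. A small sketch against a local HTML file (the file is made up for the demo; the render step only runs if elinks is actually installed):

```shell
# Create a tiny local page to render.
cat > /tmp/elinks_demo.html <<'EOF'
<html><body><h1>Text Browsing</h1><p>Rendered without graphics.</p></body></html>
EOF

if command -v elinks >/dev/null 2>&1; then
    # Render the page as plain text to stdout.
    elinks -dump /tmp/elinks_demo.html
else
    echo "elinks not installed; try: elinks -dump /tmp/elinks_demo.html"
fi
```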

In the upper left are the standard options provided by Google search pages, but press [Esc] and the Elinks menu appears, allowing us to utilize all the typical browser functions a person would expect. Below is just a sample from the first dropdown.



While some may think of a text browser as "stripped down", the built-in functionality is by no means lacking, and one can perform much of what any graphical browser offers.

Obviously text browsing is not for everyone. For those who take an interest in working solely from the command line interface, or are simply curious as to all the command line has to offer, a text browser is a must-have in any arsenal, and being able to use and navigate one is not a bad skill to have.

Tuesday, May 13, 2014

Play WMV on Kubuntu 14.04

Just a few days ago I had to work with some multimedia files shared through one of my IT teams at the university. Since I’m a Linux user among Windows users, sometimes we have to “clean up” after some files that Windows systems have a tendency to produce. One such situation involves both wmv files and mp3 files that come from Microsoft software products that don’t seem to like to play well with Linux systems; here’s how to get around that.

Checking the properties of the wmv files we find this added “baggage” that doesn’t need to be there. A little research on ASF produced from Wikipedia comes back with:

Advanced Systems Format (formerly Advanced Streaming Format, Active Streaming Format) is Microsoft's proprietary digital audio/digital video container format, especially meant for streaming media. ASF is part of the Windows Media framework.



So what we need to do is "clean" the file and remove the baggage that is preventing us from playing it just as any other format. We can do this easily using a program called Format Junkie, which may not be easy to find in the Ubuntu Software Center but can be located, with install instructions, through any web search; don't worry, the binaries are available.



The interface is super easy to figure out, so just add the file with the extra "bugs" in it and select the output type from the dropdown. To change the output directory, select Edit at the top and go into Properties. If it seems like it's not doing anything, just be patient; it's working.



Above is the screenshot of the "cleaned" file with the bugs missing. I changed the format type in this example, but it's easy to keep the same type if one wants to. Below is VLC playing the file, which previously wouldn't do anything for me on Kubuntu 14.04.



This file was originally a ppt that I converted to a wmv using MSO2010 then needed to inspect the output prior to having it hosted online.

As always, I hope this tutorial helps some others get around certain file types on their Linux systems as well so they can press forward with whatever projects they may be working on. In retrospect, wouldn't it be so much easier if all our operating systems could just play nice with each other?

Update 1/15/2015:

For those who would prefer not to rely on a GUI based format converter, try pacpl. This program is written in Perl and supports a substantial number of formats.

http://vorzox.wix.com/pacpl


Wednesday, April 30, 2014

Git From The Command Line

After spending a few years being involved with the Linux community, one of the things that has always amazed me is the number of people who are still not sure how to use git from the command line. There are a number of GUI programs available to make this process easier for those who are uncomfortable using the CLI, but in reality the process of editing and pushing a program from one's computer to the Github repositories is not very difficult when someone takes the time to show others how to go about the basic steps.

In this tutorial, I’m going to cover the basics of how to open a program using Vim, make a small change to that program, save it then push the changed program up to the Github repository. All of this will be done from using the terminal so no extra programs need to be used. This is often the working scenario for people who work in a GUI-less environment or have to interface with headless servers. I’m taking into account that the person following this tutorial has already followed the instructions of setting up a Github account, repository and established an ssh key pair with their account. The instructions for setting up git are easy to follow and can be located here:  https://help.github.com/articles/set-up-git.

Once a remote account and repository has been established then connected to a local directory, the rest is simply going through the motions of making it all work. For starters, open a terminal session such as Konsole, X-Term or whatever comes with the operating system.

start.png

Once the terminal is up and going then simply navigate to the directory that was created specifically for communicating with the established Github repository using git.

cd_to_git_dir.png

Now that we are working inside the established git directory, the next thing to do is open the particular program we are going to be working with. For this example, we are simply going to open the all-too-familiar "Hello World" starter using the Vi (or Vim) editor. Vim runs directly in the terminal rather than opening a separate program. This makes life much easier as we don't have to leave the environment we are already working in. To open Vim and start working, just type in the following command:

start_vim.png

Once this is executed, the terminal will automatically switch over to the Vim editor for making any necessary changes to the program.

editing_a_program_with_vim.png

Remember this is all happening without starting any external programs! Make any changes to the program using the Vim editor then save those changes. When the changes have been saved, Vim will automatically close and bring the user back to the command line where they left off.

The next step is to get a status report from git that shows us what has changed from the last time we accessed our local repository or git directory. The command to see any changes made is “git status”.

input_git_status.png

When this command is invoked, git is going to give a response that shows all the changes that have been made to any programs in the git directory we are currently working in.

message_from_git_after_running_status.png

In this case, we only modified one program, and that modification is being reflected here in red text. If we had been busier and made changes to many programs in the same git directory, all those files would show in the list with hint notes suggesting actions to take next.

In this basic example, we are simply going to invoke the command "git add", which is similar to clicking a checkbox in a GUI to select the file to be added to the next operation. If there were more than one file, we could add one or a few to our next commit, which will get a specific message attached to it that lets others know what changes were made, in case an entire group is working on the project the file is associated with.

input_git_add_to add_file_for_pushing.png

Once the “git add” command is executed, git will not return any acknowledgement of what has been added unless the command is executed incorrectly. Sometimes I believe it would be nice to receive a confirmation of what has been added to our list but, I’m not a developer for git so they decide what is relevant and what isn’t.

Once all of the files for the current commit have been added to our list, it is time to commit them to be pushed up to the remote repository. Git strongly suggests that comments be added to any commit, as it is part of the "essence" of collaboration using a version control system (VCS). To make this process easy, we add "-m" to our command along with a message to be associated with the commit, such as below.

input_git_commit_string_with_memo.png

The string contained in the quotes is the message to be added to our commit. This message will accompany all the files that were included in the "git add" process. As such, if we want a separate message associated with each of multiple commits, we only add the particular files at a time that should receive a given commit message. If for any reason the "-m" is left out of the commit syntax, a terminal editor (nano by default on many systems) will open and ask for the message to be attached to the commit. It's much easier to just remember to include the "-m", in my opinion, unless a person prefers the added step of using the editor.

When the commit command is executed, a message similar to the following will be displayed from git:

git_commit_return_message.png

This message just reiterates the changes that were made to the file, the commit message, as well as the relevant version control numbers that will be associated with the commit when pushed up. It should also be noted that it is completely acceptable to stop at this point and go work on something else. Each command is its own entity and therefore does not need to be executed immediately. For example, a person may want to push all their changes to the remote repository only on Friday afternoon or Monday morning. Or, a person may simply run a cron job to have all their pushes done automatically on a certain day of the week at a certain time.

For this example though, we’re going to go through each step individually so that we get the basics of what’s going on before jumping into ideas of batch pushes or cron jobs. : )

The next operation to perform is to simply execute the command "git push" as shown below:

input_push_file_to_remote_repository.png

Since this entire time we have been working in our established local git repository, the "push" command is going to push our local changes to our remote repository located at Github. This is how we share our work with others so we can all collaborate on the same project while working from anywhere. Once the "push" command is executed, (in most circumstances) the following message will appear:

input_password_for_ssh_key.png

What this message is asking for is the passphrase of the ssh private key whose public half is registered at the remote end. Having an ssh key pair is not a requirement to use Github's VCS, although it should be noted that it is an excellent security and identity measure to verify that the changes being made in the repository are actually coming from you and not someone else. An ssh key pair is like a fingerprint that distinguishes a person and the computer they work on from everyone else in the world. Even if a person were to try to impersonate someone from a different terminal, without the private key on their machine to match the public key registered at Github, the push would not be accepted. As such, a person needs to understand that using key pairs is beneficial from a security point of view but can also be a hindrance when working from a foreign machine.

From this point we type in our associated password that identifies us from everyone else and allows the ssh tunnel to push our code from our machine to the remote server securely and encrypted. When finished, git will give us a nice confirmation message that lets us know everything was completed as expected.

git_message_after_pushing.png

At this point we have finished editing our file and pushing it up to our remote repository and accomplished all of this without having to leave the terminal for any reason. The next step is to verify that our changes did in fact make it to their intended destination which involves opening a browser and navigating over to our Github account.

Once we do that, we should be able to go to the repository that is associated with our local git directory and find the file in question as well as the commit message we attached to that file.

github_verification.png

As can be seen from this snippet image of the Github repository line, the file "hello_world.pl" was received, and in the center is the commit message we added using the "-m" on the command line. Navigating the various links on Github will provide further details of our commit, including the actual code and all associated changes highlighted for ourselves or others to refer to later.
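The whole flow above can be condensed into a few commands. This is a sketch against a throwaway local repository (the file name matches the post; the commit message and identity values are made up), with the final push commented out since it requires the Github remote and ssh key to be in place:

```shell
# Condensed version of the walkthrough, in a scratch repo so it is self-contained.
repo=/tmp/git_demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.name "john"                      # placeholder identity
git config user.email "john@customdomain.com"

echo 'print "Hello World\n";' > hello_world.pl   # the edit made in Vim
git status --short                               # see what changed
git add hello_world.pl                           # stage the file
git commit -q -m "Update greeting text"          # commit with a message
git log --oneline                                # local history
# git push                                       # sends the commit to Github
```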

I hope this tutorial has shown the simplicity of using the command line to both edit files and push those edits up to the Github version control, all without leaving the terminal.