To schedule the health report from a script and cron job, we'll need to use key-based authentication rather than interactive passwords to access the remote ESX servers. We'll also encrypt the private key with a passphrase. That way, if the filesystem security were ever compromised and someone were able to obtain the private key, they would still need the passphrase to unlock it and gain access to the remote systems.
Encrypting the private key presents a problem, however, as unlocking it during a connection attempt requires entering the passphrase interactively. Thankfully there is a solution: ssh-agent, which will allow us to unlock the private key once with an interactive passphrase prompt, and then keep it in memory until ssh-agent is terminated or the server is rebooted.
SSH authentication using keys
If you are new to the concept of using key-based authentication for SSH, a quick Google search for 'ssh using keys' will provide a wealth of info. Here are a couple of links that explain it much better than I can: http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO and http://wiki.archlinux.org/index.php/Using_SSH_Keys
To get started, we'll generate a 2048-bit RSA key pair for authentication. There's plenty of debate on the merits of DSA over RSA, and vice versa, but we'll flip a coin and pick RSA.
Logged in as the non-root account you are planning to use for the ESX health report, execute this command to generate a 2048-bit RSA key pair:
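With the stock OpenSSH tools, that command is:

```shell
# Generate a 2048-bit RSA key pair; you'll be prompted for a
# file location and a passphrase.
ssh-keygen -t rsa -b 2048
```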
When prompted with Enter file in which to save the key, hit return to accept the default location. At the Enter passphrase (empty for no passphrase): prompt, enter a strong passphrase for the private key.
If you type ls -la in the user's home directory, you should see that a .ssh folder was created. If you cd into that directory, you'll find the private key file, id_rsa, and the public key file, id_rsa.pub, have been created.
Distribute the public key
For the next step, we need to copy the public key just generated to each ESX host we want to SSH into and execute the health report script on. You can simply scp it, or even use a Windows SFTP client if you wish (yuck). One issue with that approach is that unless you have used the SSH client or gone through the ssh-keygen process on each remote host, the necessary .ssh folder hasn't been created in the user's home folder. The following script will take care of the whole process: SSH to each host, create the .ssh directory if needed, and add the public key to the authorized_keys file on the remote ESX server.
By default, the ESX server firewall blocks outgoing SSH client connections, so issue this command as root on the central reporting ESX server to enable outbound SSH:
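On ESX 3.x, the built-in firewall is managed with esxcfg-firewall; enabling the sshClient service allows outbound SSH:

```shell
# Enable outgoing SSH client connections through the ESX firewall
esxcfg-firewall -e sshClient
```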
To use the public key distribution script below, paste the entire code block into a PuTTY window, then execute the script with the list of ESX hosts you wish to copy the key to. You'll get a bunch of password and key fingerprint authenticity prompts, but we only have to do this once. Remember to run this from the ESX server that will be polling the others, logged in as the non-root user that will be executing the health check script:
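A sketch of what copykey.sh can look like, based on the description above; the default ~/.ssh/id_rsa.pub path and the remote directory permissions are assumptions:

```shell
#!/bin/sh
# copykey.sh -- append the local public key to authorized_keys on each
# host given on the command line, creating the remote .ssh dir if needed.
for host in "$@"; do
    echo "Copying public key to $host..."
    # The $(cat ...) below is expanded by the LOCAL shell, so the text of
    # the local public key is echoed into the remote authorized_keys file.
    ssh "$host" "mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
        echo '$(cat ~/.ssh/id_rsa.pub)' >> ~/.ssh/authorized_keys && \
        chmod 600 ~/.ssh/authorized_keys"
done
```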
Run the copykey.sh script with a space-delimited list of ESX hosts:
./copykey.sh esx02.vmnet.local esx03.vmnet.local esx04.vmnet.local
Or read them from a text file if you have a lot of hosts. The text file can be space-delimited or list each host on a new line:
./copykey.sh $(cat hostlist.txt)
In the copykey.sh script, notice how we used the command substitution syntax, $( ), to echo the text of the public key file into the authorized_keys file on the remote host. The local shell interprets the command substitution before the SSH command runs, so it executes cat on the local public key file. This is a handy trick, and we'll use it later to execute the locally stored health report shell script on the remote hosts.
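The expansion order is easy to demonstrate locally; in this sketch, sh -c stands in for the remote shell that ssh would start:

```shell
# With double quotes, the $( ) is expanded by the OUTER shell before
# sh -c (standing in for the remote shell) ever runs:
sh -c "echo $(echo outer)"    # prints: outer
# With single quotes, expansion is deferred to the inner shell:
sh -c 'echo $(echo inner)'    # prints: inner
```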
Keep the private key unlocked with ssh-agent
Now that the public keys are pushed out, make a test SSH connection to one of the remote servers with an ssh somehost command. If you've set everything up correctly to this point, you should receive a prompt like Enter passphrase for key '/home/user/.ssh/id_rsa':, which is different from the user@host password: prompt of a typical SSH connection. We're being prompted to decrypt the local private key before the key algorithm is run to verify the connection attempt. Obviously, that's not going to work from a cron job.
This is where ssh-agent comes in. If you run it from a PuTTY session, you should get some unusual output like:
SSH_AUTH_SOCK=/tmp/ssh-UIbA2689/agent.2689; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2690; export SSH_AGENT_PID;
echo Agent pid 2690;
The output provides the environment variables you need in order to locate the ssh-agent PID and socket. The application doesn't actually export any of that information into your shell; it expects you to do that. You can test this out by typing echo $SSH_AGENT_PID after running ssh-agent; the variable isn't defined in the current shell.
There are a couple of ways to fix that: you could invoke it as ssh-agent bash, which will open a new bash shell with the variables exported, or you could execute it with eval $(ssh-agent) to export the variables into your current shell. Since we won't be using it interactively, but rather from a cron job, we'll redirect the output from ssh-agent into a file, and then source that file from the health report script.
Something has to get ssh-agent running every time the ESX server is rebooted or someone kills the process, so let's create a handy start-ssh-agent.sh shell script in the non-root user's home directory:
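A sketch of start-ssh-agent.sh, assuming the agent's environment is saved to a ~/.ssh-agent file for other scripts to source:

```shell
#!/bin/sh
# start-ssh-agent.sh -- start ssh-agent, save its environment variables
# to a file, and load the encrypted private key into the agent.
ssh-agent > ~/.ssh-agent   # capture the SSH_AUTH_SOCK/SSH_AGENT_PID exports
. ~/.ssh-agent             # source them so ssh-add can find the agent
ssh-add                    # prompts for the private key passphrase
```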
The ssh-add command at the end of the script loads the private key into ssh-agent, and will prompt for the private key passphrase. Once the key is loaded, you can log off and ssh-agent will continue to run until the process is killed or the server is rebooted. You'll need to run start-ssh-agent.sh from an interactive login each time the ESX server is rebooted, but that's probably not very often, and the added security of using an encrypted private key certainly makes up for the hassle.
Execute the script above by typing ~/start-ssh-agent.sh to load the private key into ssh-agent, and we can test the health report script on multiple hosts. Paste the following into a PuTTY window, after replacing the hostnames with your own, and the script output should display on your terminal:
[ -d ~/esx-report ] && cd ~/esx-report
source ~/.ssh-agent
for host in esx02.vmnet.local esx03.vmnet.local; do
    ssh -q $host "$(cat esx-report.sh)"
done
Notice again how we used command substitution, $( ), to cat the locally stored script file through the SSH session, running the commands in the script on the remote host. For a small script like esx-report.sh, this is a really simple and efficient method, and it makes it very easy to add additional checks to the script.
Coming up in Part 3, we'll take a look at emailing the script output in HTML format, and tie the whole process together from a cron job.