January 6, 2010

Build a Windows 7 PE boot CD with defrag

If you deploy virtual machines from templates using the standard vSphere tools, you know how valuable it is to get your templates as small as possible. A handy trick I learned a while back is to use vCenter Converter to import a virtual machine that I'm working up to be a template, and choose to resize the disk to the minimum size. The process is pretty simple: configure the system as though it were going to be deployed as an image, remove every temp file and hotfix uninstall folder, run sysprep and power down, boot up from an .iso of a rescue boot disk with a defrag utility, delete the pagefile.sys file, run a defrag, power down again, and use vCenter Converter to import the powered off virtual machine as a new machine and shrink the virtual hard disk during the process. Using this procedure, I've gotten a Windows XP template down from a 5.27 GB minimum size in vCenter Converter, to 3.08 GB. That's an almost 42% reduction in size, and is a huge time saver when deploying new systems.

In the past I have used a customized BartPE boot disk for this, but I wanted a more lightweight solution that booted up faster and only had the tools needed for the specific job. Windows PE fit the bill, but doesn't include defrag.exe by default. There are some tutorials out there on how to get defrag working in the Vista version of Windows PE, but I don't have any Vista installs around, and would like to keep it that way. So I dug around for a bit, and was able to get defrag working in the Windows 7 version of PE.

Windows AIK
First thing, you need to download a copy of the Windows Automated Installation Kit (just google Windows AIK). There are three versions of the AIK now, and the latest one supports Windows 7, so make sure you download the Windows 7 version.

  • Insert the AIK DVD you created and open startcd.exe if the main menu screen doesn't open automatically. Choose Windows AIK Setup from the menu.

  • It's your standard install process: click through the defaults, and when it's finished you'll find a Microsoft Windows AIK folder has been created in the Start menu.

  • Open the Deployment Tools Command Prompt from the Microsoft Windows AIK menu, which will bring up a command prompt with the AIK executables added to the PATH environment variable.

  • In the command prompt, type the following, using C:\PEBuild or another folder name for your build folder. The command expects to create the build folder, and will fail if the folder already exists:

    copype.cmd x86 c:\PEBuild

    The copype.cmd script copies the files and directories needed to build a PE boot disk for a specific architecture, which is x86 here. Note that the script also changes the working directory to the path we specified for the build files. Make sure you stay in this path for the following commands.

  • Now we'll use the ImageX tool to mount the default Windows PE image that ships with AIK in read/write mode on the local filesystem. By mounting the image, we'll be able to add some files to it:

    imagex.exe /mountrw winpe.wim 1 mount

  • This next command copies the ImageX deployment tools to the PE boot disk, allowing you to capture an image of a hard disk in ImageX format. This isn't really relevant to this project, but we may as well add them in case we want to use them later:

    xcopy "C:\Program Files\Windows AIK\Tools\x86\*.*" mount\ /s

  • Here comes the customization part, and we'll need two files from a running Windows 7 workstation. The commands we've used so far will produce a 32-bit Windows PE image, so be sure the Windows 7 system you copy the files from is also 32-bit. My first attempt at this was on my 64-bit workstation, and when I tried to use the defrag utility from within Windows PE, I got a message that it wasn't compatible with the version of Windows I was using.

    From the 32-bit Windows 7 workstation, copy (these are the standard locations on a stock install; adjust the en-US folder for other languages):

    C:\Windows\System32\defrag.exe  into  mount\Windows\System32

    - and -

    C:\Windows\System32\en-US\defrag.exe.mui  into  mount\Windows\System32\en-US

  • Now unmount the build image with another ImageX command, but make sure you don't have any Windows Explorer windows or the command prompt open to the mount folder first:

    imagex.exe /unmount mount /commit

  • If you type a dir command, you'll see the modified winpe.wim image file in the C:\PEBuild folder. We need to copy this to a specific folder and file name for the next few AIK commands to work:

    copy /y winpe.wim ISO\sources\boot.wim

  • This command will create the PE .iso file in the c:\PEBuild folder, naming it pe-defrag.iso, so modify for your build path and desired file name. This is a long command, so make sure you get the whole line if copying it by double clicking in the grey box:

    oscdimg.exe -n -b"C:\Program Files\Windows AIK\Tools\PETools\x86\boot\etfsboot.com" C:\PEBuild\ISO C:\PEBuild\pe-defrag.iso

Boot it
Now you can either copy the .iso file over to an ESX server and VMFS LUN, or use the local vSphere client to attach the .iso to a VM. Boot up a test VM with the PE .iso. After a minute or so, you'll find the VM has booted up to a command prompt, with a grey, Vista'ish background. Pretty boring, but a command prompt is all we need for the next few steps.

  • In the command prompt, change the working directory to the (hopefully) offline mounted NTFS volume that holds the operating system files for the template (inside PE the offline volume usually shows up as C: or D:):

    c:
  • Since the pagefile.sys file is hidden, enter a dir command that shows hidden files:

    dir /a:h

  • Remove the pagefile.sys file, and notice that the del command also needs to have hidden files specified:

    del /a:h pagefile.sys

  • Now defrag the volume, forcing it to defrag all files (-w) and print verbose output (-v):

    defrag c: -w -v

  • You can exit PE with an exit command, but that reboots the VM, which has ruined a few sysprepped images when I forgot to interrupt the boot. So now I use the shutdown command to force it to shut down:
    C:\WINDOWS\system32\shutdown /s

    The shutdown command can take a few seconds to actually power down the VM, so be patient.

Seal, shrink, deploy
You'll want to run the defrag on a template that's been sealed up with sysprep. After deploying one of these shrunken templates, you'll need to edit the virtual hard disk before the first power on and expand it to a reasonable size. Windows will automatically expand the operating system partition during the first boot if you include the command ExtendOemPartition=1 in the [Unattended] section of the sysprep.inf file for a Windows XP/2003 image, or ExtendOSPartition=true in the Unattend.xml file for a Vista/Windows 7 image.
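For reference, the relevant fragment of a sysprep.inf for an XP/2003 image looks like this (any other [Unattended] entries you already use stay as they are):

```
[Unattended]
    ExtendOemPartition=1
```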

If you deploy templates in VMware Workstation, you'll find that you aren't allowed to just simply edit the virtual hard disk and expand it. For deployments in Workstation, you'll need to use the C:\Program Files\VMware\VMware Workstation\vmware-vdiskmanager.exe utility to expand the .vmdk before the first boot.

...read more

July 23, 2009

Virtualize Cisco Routers with GNS3

This title is a little misleading, I figure I get to do that at least once. Yes, GNS3, http://www.gns3.net, emulates the hardware of several Cisco router platforms, and it will boot real IOS images. It's also free, easy to install, and there are abundant video and written tutorials. But you're not going to use it in your production environment.

So what's it good for? Well, there's the obvious; if you are studying for a Cisco certification test, GNS3 is like a dream come true. Though there have been Cisco emulation packages around for a while, they are usually expensive, and only provide a small fraction of the IOS feature set.

But even if you're not studying for an exam, or have no interest in Cisco IOS, GNS3 is a fantastic tool for testing network and service architecture designs. When used in combination with VMware Workstation, connecting your virtual machines up with a GNS3 topology takes about three clicks. In the simple diagram above, I've connected two VMs across a virtual WAN link, utilizing two Cisco 3640 routers and OSPF as the dynamic routing protocol. The computer shapes are really cloud objects with customized symbols. When you configure a cloud object, the vmnet virtual networks appear in the drop-down list of available NIO Ethernet objects. To connect a virtual machine to a GNS3 virtual router, just add the virtual network the VM is plugged into in the cloud configuration window, and then create a Fast Ethernet connection to a router.

The mind boggles at the thought of all the projects you can test with this setup. Imagine being able to test a change to the Active Directory infrastructure between multiple WAN sites, or verify client connections after changing from a static routing configuration to OSPF. A few months back I completely modeled a client's network infrastructure using GNS3, and proved how they could reduce their routing tables by about 10,000 entries by configuring EIGRP route summarization on a single link.

If you are running a recent version of Ubuntu on your demo laptop, installing GNS3 is as simple as sudo apt-get install gns3. The install is pretty painless on XP as well, and possible on Vista (though I've read of folks having a lot of issues), but GNS3 runs a lot slower and you won't be able to run as many router instances with XP or Vista.

My current demo laptop, a Core2 Duo T9600 running Ubuntu 9.04 64-bit, VMware Workstation 6.5 and GNS3 is the coolest thing since sliced bread. And I would know, because I eat a lot of sandwiches.

Oh that's bad..... but it's true, I do eat a lot of sandwiches

...read more

July 12, 2009

Know Your History

The UNIX/Linux shells evolved on a planet where saving every keystroke and millisecond of time was absolutely essential to survival. As a result, they're chock full of shortcuts, many of them with overlapping functionality, letting the user choose the method that works best for them.

The shell command history is a prime example. Even a newborn knows to use the up and down arrow keys to recall commands, and most toddlers are piping history into less to perform manual searches for complicated commands they don't feel like recreating. But if you ever have the chance to stare over the shoulder of a grey-bearded shell guru, you'll see that the true masters use several different techniques to pull up commands in the most efficient way. The following report was compiled from the sage advice given by these mysterious wizards.

We'll start with the classic C Shell history syntax. You've no doubt used an exclamation point to re-execute a command from the history list, like !42, or have used the double exclamation to execute the previous command, !!. You can get pretty fancy with the available event designators, for instance, !-3 will execute the command three commands ago. And !echo will execute the most recently run command that began with echo.

This syntax has always scared me, as I could see myself executing a dangerous command without realizing it, especially if I'm in a hurry. But there is a safer way to use it, just include the :p modifier, which displays the command, but doesn't actually execute it. For instance, if the last command executed was echo Hello, typing !!:p would preview the command without running it, so you'd see echo Hello, instead of Hello. This allows you to make sure it's the command you intended to recall, and you can simply press the up arrow to execute it.

The ! syntax can be really useful in a grep-the-history-file workflow. Say I grep for the last time I issued an esxcfg-firewall command:

  history | grep esxcfg

    203  esxcfg-firewall --openPort 9090,tcp,in,SimpleHTTP

Then preview it with the event number and the :p modifier:

  !203:p

    esxcfg-firewall --openPort 9090,tcp,in,SimpleHTTP

Now if I hit the up arrow, the command is put on the prompt, and I can edit it to add a different port, etc.

If you'd like to delve deeper, google c shell command history.

Fix Command
Another method for recalling commands from the history list is the bash built-in Fix Command, invoked with fc. To get the skinny on fc, bring up the man page for the built-in commands, with man builtins. Invoking fc with the -l option will print the last 16 commands. You can also specify a range of history commands to display, like:

  fc -l 208 234

The fc command can come in real handy when you are trying to recall and edit a whole series of commands. For instance, say you remember adding an alias to your .bashrc file, and you want to add an additional alias using the same series of commands to make sure it was configured properly. You recall using a cp command first to backup the .bashrc file, and specifying 'cp' as a search string after fc -l will print the history list beginning with the last occurrence of the command that matches the search string:

  fc -l cp

  110  cp .bashrc backup_bashrc
  111  echo "alias lax='ls -lax'" >> ~/.bashrc
  112  cat .bashrc
  113  . .bashrc
  114  lax

To create an la alias, invoke fc with a range of history events to copy them all into the default editor, which is set to vi in the ESX console:

  fc 110 114

Running the above command will copy the history events 110 through 114 into a vi editor session, where they can be modified to create the alias for la. Typing ZZ in vi will exit the editor and execute all the commands in the buffer: backing up .bashrc, creating the alias, cat'ing the file to see the change, restarting bash, and finally testing the new la alias. I don't use fc very often, but for scenarios like this it is a great tool to be familiar with.
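fc also works in a non-interactive shell once history is recording, which makes it easy to experiment without touching your real history. A quick sketch (the echo commands and temp file are stand-ins for illustration):

```shell
# Enable history inside a throwaway bash so the fc builtin has
# something to list; commands run after 'set -o history' are recorded
bash <<'EOF' > /tmp/fc_demo.out
set -o history
echo backup >/dev/null
echo alias >/dev/null
fc -l
EOF

# The listing shows each recorded command with its history number
cat /tmp/fc_demo.out
```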

vi Command Editing
The next two methods for using the command history rely on functionality provided by the GNU Readline library, a standard set of C functions available to any application, including bash. There are two editing and key-binding modes, emacs and vi. The default mode is emacs, and you can see what mode your shell is in now by typing:

  set | grep SHELLOPTS

If you're in the default emacs mode, you'll see emacs in the colon-separated list of options. To change this to vi mode, type:

  set -o vi

Now if you check the SHELLOPTS variable, you'll find emacs has been replaced with vi. I'm a vi kind of guy, so I always add the set option to my .bashrc file, with a command like:

  echo "set -o vi" >> ~/.bashrc

And source .bashrc again to get the changes:

  . ~/.bashrc

We'll look at how to use the Readline library in vi editing mode with an easy example. I've just grepped the default word file that comes with the service console for every word that begins with 'a' (this word file is not present in ESX 4.0). I'll do the same for the letters 'b' through 'f', and if I execute history, this is the output:

[admin@esx02 admin]$ history
    1  grep ^a /usr/share/dict/words
    2  grep ^b /usr/share/dict/words
    3  grep ^c /usr/share/dict/words
    4  grep ^d /usr/share/dict/words
    5  grep ^e /usr/share/dict/words
    6  grep ^f /usr/share/dict/words
    7  history

Back at the shell prompt, we can search through the history list for the last instance of the grep command. Press Esc to enter vi command editing mode, and then use a forward slash to search, just like in a normal instance of vi:

  /grep
After hitting return, we should find grep ^f /usr/share/dict/words has been placed on the command line. Pressing n will iterate through each grep command until the first instance is found -- the grep command for words starting with 'a'. Continuing to press n will do nothing now, as we've reached the end of the matches for grep. However, pressing N will now iterate through the grep matches in reverse, working its way to the most recent grep command. This is handy and easy to remember, n or N for next, forward or backward through the history list.

Of course, the whole point of accessing the history list with this method is to easily edit the commands before executing them. If we wanted to modify the grep command to find all words starting with 'fox', we can just press Esc at a bash prompt to enter vi command editing mode, type /grep ^f to find the history entry where we searched for words starting with f, press i to enter vi insert mode, edit the command: grep ^fox /usr/share/dict/words, and press return to execute it. If we want to get every word that starts with ox, we just perform another search, /grep ^fox, move the cursor over the 'f', and hit x -- the vi delete character command -- to remove it.

This example is beyond goofy, but if you play around with this method, you'll find it to be very powerful and a huge timesaver.

If you press Ctrl-r in your terminal, you'll get a curious looking prompt:

  (reverse-i-search)`':
If you start typing a part of the command you wish to search through the history for, the command will appear after the prompt:

  (reverse-i-search)`grep': grep ^f /usr/share/dict/words

This is the most recent grep found while searching back through the history list. If you press Ctrl-r again, you'll find the one before that:

  (reverse-i-search)`grep': grep ^e /usr/share/dict/words

If you hit return, the command will execute. If you hit the left or right arrow keys, the command will be left on the prompt so you can edit it first.

The reverse-i-search prompt is a bash built-in named reverse-search-history. You can see that by default it is bound to the Ctrl-r key by issuing a bind -P command. If you look through the list, you should also see forward-search-history has been bound to Ctrl-s. The forward-search-history command is a useful companion because if you play with Ctrl-r, you'll notice that as it iterates through the history list, it remembers where it left off. So if you reverse search all the way to the start of the history file, you then have to cycle back around, even though the command you want was just one command more recent.

But there is a big problem with forward-search-history: it's bound to Ctrl-s by default, which is also bound to the stty stop output command, and stty will intercept it before anything has a chance to see it (if you already pressed Ctrl-s and your terminal appears dead, just type Ctrl-q to bring it back to life).

You can view the key bindings used by stty with this command:

  stty -a

But these two commands are too handy to let a little key binding issue get in the way, and we can easily add custom key bindings to our .bashrc file:

  bind '"\C-f": forward-search-history'

  bind '"\C-b": reverse-search-history'

Just edit the ~/.bashrc file with the editor you prefer, add those two lines in, and re-source with: . ~/.bashrc

Now Ctrl-b searches back through the history file, and Ctrl-f searches forward, and you can toggle back and forth between them and the search term will remain. Nice!

Share your secrets
Hopefully you found a new trick to play with, and if you have your own history workflow, please leave a comment with the details.

...read more

June 30, 2009

Cut Your Exchange Backup Window in Half

Going all the way back to the Exchange 5.5 days, I've preferred doing disk-to-disk-to-tape Exchange backups. I'll use NTBackup for the disk-to-disk part, and the regular file backup agent that comes with whatever backup software we happen to be using for the to-tape part. This means eschewing the Exchange backup agents that most vendors provide, which in my mind is a big plus. Opinions will certainly vary, but using this method provides cost savings, fast restore times, extra protection and redundancy, and the reassurance that comes with using the native Exchange backup application. If you need more convincing, there are a few white papers from Microsoft describing this same setup as the Exchange backup method used by their in-house staff.

A client that is currently using this configuration has just surpassed the 200 GB mark for their combined mailbox store size, and the nightly full backups were taking a little longer than 6 hours to complete. This large backup window was preventing the nightly database maintenance tasks from completing, so a new strategy was in order. While thinking through some possibilities, I remembered reading about some registry tweaks that could improve the NTBackup performance when backing up to disk. After a little research, I made the changes, and the results were almost unbelievable: the Exchange backup job that had previously taken more than 6 hours to complete now finished in just under 2 1/2 hours!

While taking advantage of this dramatic speed boost only requires three registry changes and an additional command line parameter, there is a big bummer at first glance: the NTBackup registry keys that need to be changed reside in the HKEY_CURRENT_USER hive. This really cramps my style as I always configure the scheduled task that kicks off the NTBackup job as the NT AUTHORITY\SYSTEM account with a blank password. If you work in an environment with strict password change policies, even for system accounts, you know the pain of having to maintain passwords in scheduled tasks and scripts. Life is so much easier if it can just be avoided. But since the system account doesn't execute NTBackup interactively, the registry keys don't get created, and I assumed this meant there was no way to have the application check for the configuration tweaks.

But thankfully I was wrong, and it's a pretty simple process to manually create the necessary keys in the right spot:

  • First of all, you need to actually complete a backup job once to get the registry entries all set up, so as a regular administrator on the Exchange server, launch NTBackup, select a single temp file somewhere to backup, let the job run to completion, and then just delete the temporary backup set.

  • Launch regedit, and drill down to HKEY_CURRENT_USER\Software\Microsoft\Ntbackup\Backup Engine

  • You should already see the values we're about to change, if not, something didn't get created properly, so try a manual NTBackup job again. If the keys are present, make the following changes:

    • Change Logical Disk Buffer Size from 32 to 64

    • Change Max Buffer Size from 512 to 1024

    • Change Max Num Tape Buffers from 9 to 16

  • After making the changes, select the Backup Engine key from the left pane, and right click and select Export. Save it as a .reg file, and make sure Selected branch at the bottom of the Export window is set to HKEY_CURRENT_USER\Software\Microsoft\Ntbackup\Backup Engine

  • Now we'll locate the system account's registry settings. With regedit still open, browse to HKEY_USERS\S-1-5-18\Software\Microsoft\Ntbackup. The S-1-5-18 is the standard identifier for the system account, and unless you've scheduled NTBackup to run as NT AUTHORITY\SYSTEM before, the key will most likely be empty.

  • We need to schedule a job to run as NT AUTHORITY\SYSTEM to create the default keys, so launch NTBackup in advanced mode, select the Schedule Jobs tab, and set up a temp job to just back up any text file and schedule it to run in a couple of minutes from now. When prompted for the credentials that should be used for the job, you'll need to change the user account to NT AUTHORITY\SYSTEM with a blank password, several times. In fact, it still won't save it as the account to use, so after saving the scheduled job, open the task from the Scheduled Tasks panel and change the user account to NT AUTHORITY\SYSTEM with a blank password again.

  • After the job runs, you should see the following registry keys have been created under HKEY_USERS\S-1-5-18\Software\Microsoft\Ntbackup: Backup Engine, Backup Utility, Display, and Log Files. But if you drill into Backup Engine, you'll see it didn't create the keys we modified a few steps ago.

  • To easily create the keys, just edit the .reg file we exported earlier in Notepad. Change the line [HKEY_CURRENT_USER\Software\Microsoft\Ntbackup\Backup Engine] to [HKEY_USERS\S-1-5-18\Software\Microsoft\Ntbackup\Backup Engine], and save the file.

  • Now right click the .reg file, and select Merge. You should find the registry settings have been created for the system account, and NTBackup will now use the much speedier settings even when running as the system account.
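After the find-and-replace, the .reg file you merge should look something like this (only the three modified values are shown; a real export will include the key's other values too, and the string-style quoting here is an assumption -- keep whatever your own export contains):

```
Windows Registry Editor Version 5.00

[HKEY_USERS\S-1-5-18\Software\Microsoft\Ntbackup\Backup Engine]
"Logical Disk Buffer Size"="64"
"Max Buffer Size"="1024"
"Max Num Tape Buffers"="16"
```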

There's another performance mod we need to make to give the backup even more boost. Since Windows Server 2003 Service Pack 1, NTBackup has been equipped with a secret and offensively named /fu switch, for 'file unbuffered' mode. To bolt this on, just edit the Scheduled Task for the NTBackup job, and add the /fu switch after the /hc:off parameter. When you're done, the Run: text box of the Scheduled Task will look something like this:

  C:\WINDOWS\system32\ntbackup.exe backup "@C:\Documents and Settings\
    Administrator\Local Settings\Application Data\Microsoft\Windows NT\
    NTBackup\data\Exchange_Daily.bks" /n "exchange_Backup.bkf created 
    6/30/2009 at 6:06 PM" /d "Set created 6/30/2009 at 6:06 PM" /v:no 
    /r:no /rs:no /hc:off /fu /m normal /j "Exchange_Daily" /l:s /f 
    "E:\exchange_backups\exchange_Backup.bkf"

...read more

June 10, 2009

Configure a Vyatta Cluster for Redundant Virtual Firewalls

If you missed the Protect the Service Console Network With a Virtual Firewall project, we looked at how to use a Vyatta firewall to protect the ESX Service Console network and restrict SSH and VI or vSphere Client access to only a few specific workstations. Vyatta offers an impressive network operating system that can be run from a live CD, permanently installed on physical or virtual hardware, or downloaded as a virtual appliance. It comes with some high end features like stateful packet inspection, site-to-site VPN, OSPF, and BGP. There's a completely free edition with unrestricted access to all the features, but it can also be purchased with support offerings.

If you followed along with the original post, you may have noticed a potential pitfall: what if the ESX server hosting the Vyatta virtual machine goes down? You may have HA enabled, but what if it takes several minutes for the VM to boot all the way up on another host? Even a couple of minutes with no SSH or VM console access during a crisis would feel like an eternity.

Amazingly, the Vyatta operating system also includes clustering, and it's very simple to configure. To set up a cluster, we'll need the following:

  • Two Vyatta VC5 virtual machines, preferably with very similar configurations, see Protect the Service Console Network With a Virtual Firewall for a quick setup tutorial

  • Both Vyatta VMs will need two virtual NICs, each with its own real IP address: one in the Service Console network, and one in a LAN network

  • The clustered Vyatta VMs will host two virtual IPs: one will be the default gateway address configured on every ESX host, and the second will be a LAN address you specify as the route to the Service Console network on the Layer 3 device routing between your LAN subnets. In our case, we have a very simplified setup and the Vyatta's LAN-facing interface is the default gateway for the LAN.

There's some good documentation on setting up a cluster in the High Availability Reference Guide for VC5 available for download at the Vyatta website.

Once you've got the two Vyatta VMs up and running on different ESX hosts, this is how the cluster configuration will look on the primary Vyatta firewall. We're using a conservative dead-interval of ten seconds, meaning a failover will only occur if keepalives are missed for that long, and keepalives are being sent out every two seconds over the eth0 (Console Network) interface.

The service commands define the virtual IPs the cluster will bring up on the secondary if the primary stops responding:

cluster {
    dead-interval 10000
    group sc-cluster {
        auto-failback true
        primary sc-firewall-pri
        secondary sc-firewall-sec
    }
    interface eth0
    keepalive-interval 2000
    pre-shared-secret ****************
}

interfaces {
    ethernet eth0 {
        hw-id 00:50:56:9c:3b:0b
    }
    ethernet eth1 {
        hw-id 00:50:56:9c:04:a5
    }
    loopback lo {
    }
}

And here's the cluster configuration on the secondary firewall. The actual cluster commands are identical, only the real IPs assigned to the interfaces are different:

cluster {
    dead-interval 10000
    group sc-cluster {
        auto-failback true
        primary sc-firewall-pri
        secondary sc-firewall-sec
    }
    interface eth0
    keepalive-interval 2000
    pre-shared-secret ****************
}

interfaces {
    ethernet eth0 {
        hw-id 00:50:56:9c:3c:e0
    }
    ethernet eth1 {
        hw-id 00:50:56:9c:38:04
    }
    loopback lo {
    }
}

As you can see, it's pretty simple to set up, but one annoyance with the cluster feature is that you have to create the same firewall rules on each device; there's no functionality for syncing up the configurations.

It wasn't too hard to write a quick and dirty little shell script to copy just the firewall configuration from the primary to the secondary, however, so you only need to maintain the rules on the primary and then remember to run the script after saving the changes. If you like, you can set up public key authentication for SSH access from the primary to the secondary like we did in DIY ESX Server Health Monitoring - Part 2, but it's not necessary; the script will prompt you for the password during the SSH connection attempt.


#!/bin/bash
# Vyatta cluster firewall sync script
# by Robert Patton - 2009
# Copies firewall rules from primary to secondary
# and applies them to the appropriate interfaces.
# Deletes existing firewall rules on secondary and
# removes any firewall sets on interfaces, so make
# sure this is only run from the primary.
# Replace the SECONDARY value with the hostname or IP
# of the secondary device in the cluster.

SECONDARY=sc-firewall-sec

# Temporary work files (any writable path will do)
TEMPFWRULES=/tmp/fwrules.$$
TEMPINTCMDS=/tmp/intcmds.$$
SYNCSCRIPT=/tmp/fwsync.$$

# Match just the firewall section from the boot config file
awk '/^firewall {/, /^}/' /opt/vyatta/etc/config/config.boot > $TEMPFWRULES

# Match the interface section, we filter for firewall set statements later
awk '/^interfaces {/, /^}/' /opt/vyatta/etc/config/config.boot > $TEMPINTCMDS

# Create a script to run on the secondary with the firewall set commands
# The vyatta-config-gen-sets.pl script creates set commands from the config
# Escaped $ expressions run on the secondary; unescaped $( ) lines are
# expanded here on the primary as the script is generated
cat << EOF > $SYNCSCRIPT
configure
# First remove any firewalls from interfaces
for int in \$(show interfaces ethernet | \
awk '/eth[0-9]/ {print \$1}'); \
do delete interfaces ethernet \$int firewall; \
done
# Now delete all firewalls
for fwall in \$(show firewall name | \
awk '/^ \w* {\$/ {print \$1}'); \
do delete firewall name \$fwall; \
done
# Create firewalls found on primary
$(/opt/vyatta/sbin/vyatta-config-gen-sets.pl $TEMPFWRULES)
# Apply firewalls to interfaces as defined on primary
$(/opt/vyatta/sbin/vyatta-config-gen-sets.pl $TEMPINTCMDS | grep firewall)
commit
save
exit
EOF

# Force a tty for the ssh connection - Vyatta environment variables
# and special shell are only set up during an interactive login
ssh -t vyatta@$SECONDARY "$(cat $SYNCSCRIPT)"

rm -f $TEMPFWRULES $TEMPINTCMDS $SYNCSCRIPT

...read more

June 5, 2009

Another ESX Server HTTP File Trick

While reading some docs on the vSphere CLI, I came across this note:

...you can browse datastore contents and host files using a Web browser. Connect to the following location:


You can view datacenter and datastore directories from this root URL...

I knew you could browse the datastores this way as there is a link from the main welcome page, but the /host URL is news to me. Log in with the root account, and it brings up a page titled Configuration files with links to view a bunch of important - you guessed it - configuration files.

Not too terribly interesting, but I can already see myself hitting this URL just to get the vSphere license key or double check that the proper entries are in an ESX server's hosts file.

...read more

May 29, 2009

Instantly Serve an ESX Directory via HTTP

python -m SimpleHTTPServer 9090

Before you type that in, understand that it's not going to work unless you've made some ill-advised changes to the Service Console firewall. Also, make sure you fully grasp the security risk you're about to take. This command will start up a simple web server on TCP port 9090 in the current working directory, allowing anyone to browse the files and subdirectories from a web browser under the security context of the user that executed the command. In other words, if you execute this as the root user, in the root directory, any file in the Service Console can be downloaded from a web browser.

This one-liner is extremely dangerous, but it is also extremely handy, and if used correctly in a properly designed environment, the potential risks can be managed. I use this all the time in my test lab to get output files from scripts by simply cd'ing to the script directory, running the above command, and pointing a web browser to http://IP_OF_ESX:9090 from the vCenter server.

How to make it safer:
  • The ESX Service Console network should be completely isolated from the LAN, and only vCenter servers and specific administrative workstations are allowed access

  • The Python command should be executed while the working directory is a folder created just for this purpose, and only contains the specific files you want to share and no subdirectories

  • The command should only be executed by a non-root user and the web server torn down as soon as the files have been downloaded by issuing a Ctrl-C

  • The root user must open a specific port in the firewall prior to using the command; for example, to open TCP port 9090:

    esxcfg-firewall --openPort 9090,tcp,in,SimpleHTTP

  • The port should then be closed immediately after the needed files have been downloaded; for example, to close down the previous command:

    esxcfg-firewall --closePort 9090,tcp,in

This also works in ESX 3.5, but the version of Python in the Service Console lacks the -m option, so the path to SimpleHTTPServer.py must be specified:

# ESX 3.5
python /usr/lib/python2.2/SimpleHTTPServer.py 9090

Might be too dangerous for production, so consider the risks carefully. But for testing, it can be really handy.
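The same trick still applies on newer Python, where the module was renamed to http.server in Python 3. A quick sketch of the whole round trip (the directory, port, and file names here are arbitrary for the demo):

```shell
# Stage a directory with a single file to share
mkdir -p /tmp/httpdemo
cd /tmp/httpdemo
echo "script output 42" > results.txt

# Python 3 equivalent of 'python -m SimpleHTTPServer 9090'
python3 -m http.server 9090 >/dev/null 2>&1 &
SRV=$!
sleep 1

# Fetch the file the way a browser on the vCenter server would
python3 -c 'import urllib.request as u; open("/tmp/httpdemo/fetched.txt","w").write(u.urlopen("http://127.0.0.1:9090/results.txt").read().decode())'

# Tear the web server down as soon as the file is retrieved
kill $SRV
cat /tmp/httpdemo/fetched.txt
```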

...read more