Nice to see you, to see you nice!
It’s been a busy few months for me – the last post (linked here) was over four months ago, in the middle of July. A lot of projects, work and training have happened, as well as a few events which are helping steer me for the future.
Just before that post, I graduated with a First Class Honours degree in Computer Science from the University of Birmingham, which was a great moment. Now, I’m proudly studying for a Masters degree in Computer Security. There have been a lot of developments recently, mainly linked to misjudged code that came back to haunt people years later – in some cases over ten years later, as seen in the OpenSSL library. Even though the code _should_ have been peer-reviewed, these mistakes happen and have serious consequences – especially the ‘Shellshock’ issue, a vulnerability in Bash allowing remote execution of code. Not nice in the slightest.
More on that later.
After graduating, I was kindly offered a full-time position in my current job, working alongside a fantastic team during the daytime. Whilst it was a shock to the system – getting a train in at 7am and not getting home until 7pm if I missed the ‘other’ train by a minute – I was doing something I enjoy: helping people and applying my knowledge of IT within the business to further my skills and ability. During this time, I was able to undertake a project of my own: to design and implement a Disaster Recovery-focussed Virtual Infrastructure for my team, allowing continuity of service if there was ever an issue, such as network or power loss in a certain part of the campus.
Now, I’m sure, just like anyone else in IT, you’ve played around with Virtual Machines, possibly through Parallels (my personal poison of choice), VirtualBox (another great alternative, but clunky at times) or VMware Player/Workstation. The Virtual Infrastructure, though, was meant to be a dedicated machine just for virtualisation. Just to give you some background on what I had to work with:
- Dell OptiPlex 780 Small Form Factor Desktop
- 160GB Hard Disk Drive
- 6GB of DDR3 RAM (lovingly scavenged from a few machines destined for WEEE waste by a colleague)
- ~2.8GHz Intel Core 2 Duo CPU (exact specifications unknown)
- 1 Gigabit Intel Network Interface
- VMware ESXi v5.5u1
Sweet, eh? With this, I managed to bring in a 500GB HDD from home to boost capacity, given the VMs I was going to set up would use Windows AIK and WDS to deploy a ‘Gold Image’, which a) needs to be downloaded onto the HDD and then b) expanded onto the HDD and registered.
So, with this, I realised that under the current circumstances I’d need to allocate an IP address per VM, since each one would be set up to allow RDP connections over our network. Easier said than done in an institution with its own Class B range, where on some subnets only a restricted number of free IPs are available. My subnet was looking OK for free IP addresses, but I reasoned that, as a service, it should be frugal with resources, given it might need to be relocated to a different subnet in a matter of minutes to keep our service running.

To get around this, I had a rummage around and found pfSense, a FreeBSD-based firewall appliance which could be used with ESXi, and would also act as a second line of defence. Memory-wise, I was surprised: it can run on just 384MB of RAM and 6GB of HDD space, with plenty to spare! Using pfSense meant I could have the firewall facing the Campus Network, whilst the VMs sat protected behind it on a separate virtual switch connected to the internal side of the firewall. Setup and configuration was simple, and I got the firewall up in a matter of minutes, in a state that would be useful. Port forwarding, once our 4 Windows 7 VMs were up and running, was simple too, with me taking some caution by assigning ‘sticky’ IPs (static DHCP mappings) to the VMs to be sure that the forwarding would always work.
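The idea can be sketched in a few lines – note that every address and port below is hypothetical, made up purely for illustration, not the real campus configuration. The firewall holds the single campus-facing IP, the VMs live on a private internal subnet, and each VM’s RDP service is reached through a distinct external port:

```python
# Sketch of the NAT idea behind the pfSense setup: one campus-facing IP,
# with a distinct external port forwarded to RDP (3389) on each VM.
# All addresses and ports here are hypothetical.
from ipaddress import ip_address, ip_network

internal_net = ip_network("192.168.1.0/24")   # hypothetical internal vSwitch subnet
wan_ip = ip_address("10.20.30.40")            # hypothetical campus-facing firewall IP

# 'Sticky' (static DHCP) mappings: each VM always receives the same internal
# address, so the port-forward rules can never point at the wrong machine.
port_forwards = {
    33891: ip_address("192.168.1.11"),  # VM 1
    33892: ip_address("192.168.1.12"),  # VM 2
    33893: ip_address("192.168.1.13"),  # VM 3
    33894: ip_address("192.168.1.14"),  # VM 4
}

def route_rdp(external_port: int) -> str:
    """Return the internal host:port an inbound RDP connection is forwarded to."""
    target = port_forwards[external_port]
    assert target in internal_net  # every forward must land on the internal subnet
    return f"{target}:3389"

for port in sorted(port_forwards):
    print(f"{wan_ip}:{port} -> {route_rdp(port)}")
```

The pay-off is in the address count: four VMs, but only one campus IP consumed – exactly the frugality the relocatable service needed.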
Now over to the VMs. I set up a bog-standard Debian 7 VM with LXDE (my choice of distribution and desktop environment) just to test everything worked, and from there handed the environment over to the team responsible for IT support within my IT Team. They managed (with my help) to get the 4 VMs up, which were essentially clones of our physical machines, with application personalisation and S/MIME certificates available upon login.
Here comes the good part!
When the machine was packaged up to go over to its resting place, away from where we are (what good DR plan has the recovery system in the same physical location as you?!), it was put in place and booted. Lo and behold, we couldn’t remote in via the Citrix system we use to get through the firewall protecting that subnet. I managed to RDP into a colleague’s machine over in that building, and everything looked fine – ESXi was up, getting its IP address via BOOTP – but no traffic was leaving the pfSense VM. I had purposely hard-coded the IP address on the pfSense router, given its MAC address was registered on two subnets, and the Gateway IP was that of the subnet’s router. So what was wrong? Had the NIC gone flaky once moved over? In reality, when I set up the static DHCP IPs in pfSense, they had been configured to use the default gateway. Even though the IP address had been removed from our subnet, the machines (including pfSense) were still trying to use the old default gateway, which was on the other side of campus. Under the gateway menu of pfSense there were two gateways: the old one and the one it should use. After removing the old, stagnant entry and rebooting the VMs, everything came up cleanly and just worked!
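The root cause boils down to one rule: a default gateway is only usable if it is on-link, i.e. inside the interface’s own subnet. A tiny sketch of the mix-up (all addresses hypothetical, chosen just to illustrate the two-subnet situation):

```python
# Sketch of the stale-gateway bug: after the move, pfSense still listed
# the gateway from the old subnet alongside the correct new one.
# All addresses here are hypothetical.
from ipaddress import ip_address, ip_network

new_subnet = ip_network("10.60.0.0/23")   # subnet in the new building
old_gateway = ip_address("10.40.0.1")     # stale entry from the old subnet
new_gateway = ip_address("10.60.0.1")     # the gateway it should use

def gateway_usable(gw, subnet):
    """A default gateway must be on-link, i.e. inside the local subnet."""
    return gw in subnet

# Traffic pointed at the stale gateway has nowhere to go on the new subnet...
assert not gateway_usable(old_gateway, new_subnet)
# ...whereas the new subnet's own router is reachable, so removing the
# stale entry (as in the story above) brings everything back to life.
assert gateway_usable(new_gateway, new_subnet)
```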
Just as well, given we had an unexpected network outage in our building for a short while and we were able to use these VMs to carry out our work, such as password resets, call logging, and call management through our call-centre management software.
Part Two of the Story
That project was something I really enjoyed, and would happily undertake again. The solution will now be integrated into the central VI that we have, which is a bit more resilient to failure and has nightly VM backups, so that we can keep going. The 780, what of it? It’s coming home, and will be nurtured into a sandbox environment we can all use in the office to test new OSes and do things on VMs we would not necessarily be able to do on our own machines, such as hardware acceleration or power-intensive applications, which would slow down both the host and the VM.