Discover the value: VMware Health Check from a VCP

By Tom McDonald | Apr 29, 2011 11:14:00 AM

With a VMware vSphere Health Check, one of our VMware Certified Professional (VCP) consultants will work with your IT team, assisting with the configuration and management of VMware vSphere and providing knowledge and guidance on best practices. If you're running the latest VMware software, it is important that you get the most out of your environment. By working closely with your IT department, our VCP will be able to provide concrete recommendations to optimize your virtual IT infrastructure.

WHY THIS MATTERS:  Over time, adding new VMs and making changes or upgrades to your virtual environment alters its efficiency. A VMware Health Check ensures you're not over- or under-utilizing resources and that your environment stays within VMware's best-practices guidelines. It's a good idea to have a VCP check your environment every 6 to 12 months, or a couple of months after any major upgrade or change to the infrastructure. This ensures your infrastructure is well maintained and that any problems are caught before they require a major overhaul.
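
To give a concrete (and much simplified) picture of the kind of utilization review a health check involves, here is a minimal sketch using the open-source pyVmomi Python SDK to print per-host CPU and memory usage from vCenter. The vCenter address and credentials are placeholders, and this is an illustration only, not the procedure our consultants follow.

```python
# Minimal sketch: per-host CPU/memory utilization via pyVmomi.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        stats = host.summary.quickStats          # usage figures in MHz / MB
        hw = host.summary.hardware               # capacity figures
        cpu_pct = 100.0 * stats.overallCpuUsage / (hw.cpuMhz * hw.numCpuCores)
        mem_pct = 100.0 * stats.overallMemoryUsage / (hw.memorySize / 1024 / 1024)
        print(f"{host.name}: CPU {cpu_pct:.0f}%, memory {mem_pct:.0f}%")
    view.DestroyView()
finally:
    Disconnect(si)
```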

Read More >

3 Ways to go Green with IT

By Tom McDonald | Apr 22, 2011 2:29:00 PM

Upgrading your computer

Everyone likes upgrading their PC because it means a faster computer with more features, but it's also a great way to save money on electricity while going green. As technology advances, so do the techniques used to save power. Anyone who had a laptop a decade ago remembers the problems with heat, size, and horrible battery life. Nowadays these problems are barely a concern: laptop batteries last at least 3-4 hours, and often 10 hours or more. New breakthroughs in battery technology have helped, but it is the tech industry as a whole that has increased battery life. As new CPUs and memory chips are designed, one of the main goals is to make the next generation run faster while using less electricity and generating less heat. This is done through new fabrication techniques that produce smaller transistors, which allows more of them to be placed on a single chip and requires less electricity to drive them. Combined with features that keep energy consumption in mind, this allows computers to lower their clock speeds when idle to consume less power, then speed back up when needed.
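
To see this idle-time scaling for yourself, the short sketch below reads the Linux cpufreq interface and prints each core's current clock speed against its maximum. The sysfs paths are standard on modern kernels but can vary by distribution, so treat this as a hedged illustration.

```python
# Minimal sketch: observe CPU frequency scaling via the Linux cpufreq
# sysfs interface. Paths are standard on modern kernels but can vary.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    core = path.split("/")[-2]
    with open(path + "/scaling_cur_freq") as f:
        cur_khz = int(f.read())
    with open(path + "/cpuinfo_max_freq") as f:
        max_khz = int(f.read())
    # On an idle machine each core usually sits well below its maximum.
    print(f"{core}: {cur_khz // 1000} MHz of {max_khz // 1000} MHz max")
```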

Read More >

Comparison between a traditional IT BC plan and a VMware implementation

By Tom McDonald | Apr 15, 2011 12:17:00 PM

Many businesses' IT infrastructures are based around the traditional physical setup, with the operating system bound to a specific set of hardware and a specific application bound to that OS. The server runs at about 5-10% of its capacity for most of the day, peaking only during heavy usage. The data has to be backed up to a local SAN for recovery purposes, generally requiring special software to ensure it is backed up fully and efficiently.
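
To put rough numbers on that idle capacity, here is a back-of-the-envelope sketch; the 8% utilization and 70% planning ceiling are illustrative assumptions, not figures from the article.

```python
# Illustrative consolidation arithmetic -- assumed figures, not measured data.
import math

servers = 10              # physical servers, each hosting one application
avg_utilization = 0.08    # assumed: each averages ~8% CPU busy
headroom_target = 0.70    # assumed: keep consolidated hosts under ~70% load

combined_load = servers * avg_utilization
hosts_needed = math.ceil(combined_load / headroom_target)
print(f"Combined load: {combined_load:.0%} of one comparable host")
print(f"Hosts needed at a {headroom_target:.0%} ceiling: {hosts_needed}")
```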

If this is a vital server with a disaster recovery and business continuity plan implemented to keep downtime as low as possible, then it will have an identical server installed for failover. This server is only used if the original fails, but it still consumes power and space. Not only that, but it has to be the identical model, containing the same hardware configuration, firmware, and local storage, to ensure immediate, complete compatibility with the original server. This adds cost, since you need a second set of the same hardware, and it limits the upgrade paths for the business.

This setup generally falls into the "Boot and Pray" model of disaster recovery: the complexity of the setup leaves the admin hoping it works rather than being able to guarantee a smooth transition from server to server. This has to be done for every vital server that needs a redundant backup, and each one has its own unique setup, creating a large amount of complexity in managing all these different machines. This complexity increases the company's recovery time objective (RTO) and recovery point objective (RPO) and makes recovery a much larger ordeal.

Read More >

5 ways a VDI (Virtualized Desktop Infrastructure) can improve IT for both users and admins

By Tom McDonald | Mar 28, 2011 3:14:00 PM

The benefits of virtualizing your desktop environment are numerous. In today's world, businesses' IT departments are growing by leaps and bounds, and the work needed to add, integrate, and maintain systems can push IT resources to their limits. Virtualization was traditionally used to reduce the number of servers needed to run IT, but as the software has become more advanced, the usefulness of a Virtualized Desktop Infrastructure (VDI) has become more apparent.

Read More >

Downtime not an option? Learn the basics of VMware's Fault Tolerance and what you will need to get up and running

By Tom McDonald | Mar 25, 2011 11:32:00 AM

Is a server crash not an option for your company? Is having your server up and running the life and soul of your business? Then you may want to consider VMware's Fault Tolerance (FT) feature. VMware Fault Tolerance is a step up from VMware High Availability (HA). High Availability is VMware's safety net for a host crash: if a server running a VM goes down, the VM reboots on a different host. This allows for only a minute or two of downtime as the virtual machine starts up on a new server and the crashed host is restarted, if possible. This is extremely useful and can keep a business functioning with only a moment of downtime. What Fault Tolerance does is eliminate even that couple of minutes of downtime, so that if a server crashes, nothing is felt by the user. This feature gives companies that can't stop functioning, even for a minute, the security they need to run their businesses.

How does FT work? With HA there is a primary host that runs the VM and a secondary host standing by in case of failure; if and when that failure occurs, the VM is restarted on the secondary host. The failure is detected using VMware's heartbeat function, which pings the server every second to ensure it is still active on the network; if the host stops responding, it is considered to have failed and its VMs are moved to a new machine. FT continues this trend, but instead of waiting for a host to fail and then restarting, it uses vLockstep to keep both hosts in sync, so that if one were to fail, the other would continue running without the user ever noticing the failure. Because storage is shared, all the files are accessible to both hosts, and the primary host constantly updates the secondary to keep both hosts' RAM in sync. FT has a few rules to ensure it works properly (a conceptual sketch of the heartbeat pattern follows the list):

  • Hosts must be in an HA cluster
  • Primary and secondary VMs must run on different hosts
  • Anti-affinity must be enabled (a configuration that ensures the two VMs cannot be started on the same host)
  • The VMs must be stored on shared storage
  • A minimum of two Gigabit Ethernet NICs, to allow for vMotion and FT logging
  • Additional NICs for VM and management network traffic
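
As promised above, here is a conceptual sketch of the heartbeat pattern the article describes: a monitor expects a ping every second and triggers failover once pings stop. The three-miss tolerance is an assumption for illustration; this is not VMware's implementation or API.

```python
# Conceptual heartbeat monitor -- an illustration of the pattern only,
# not VMware's implementation. The miss tolerance here is assumed.

def monitor(receive_heartbeat, fail_over, missed_limit=3):
    """Expect a heartbeat every second; fail over after missed_limit misses."""
    missed = 0
    while True:
        if receive_heartbeat(timeout=1.0):   # blocks up to 1s for a ping
            missed = 0                       # host is alive; reset the counter
        else:
            missed += 1                      # no ping within the window
            if missed >= missed_limit:
                fail_over()                  # promote the secondary host
                return

# Tiny demo: a primary that answers twice, then goes silent.
if __name__ == "__main__":
    pings = iter([True, True])
    monitor(lambda timeout: next(pings, False),
            lambda: print("Primary failed: secondary takes over"))
```
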
Read More >