Disk Fragmentation: how it happens and what defragging actually does

By Tom McDonald | Apr 1, 2011 9:15:00 AM

What Causes Fragmentation

As you use your computer, fragmentation happens over time; it is caused by adding and deleting files. When you delete a file, the file isn't actually erased, it is just marked as free space that is OK to write over. The next time Windows needs to write a new file, it looks for the first available free spot and writes there; if the new file is larger than that gap, the rest of it gets split across other free gaps, and the file becomes fragmented.
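The idea above can be sketched as a toy simulation: treat the disk as a list of blocks, where deleting only marks blocks free and new files fill the first free blocks they find, splitting across gaps. (This is an illustration of the concept only, not how NTFS actually manages clusters.)

```python
# Toy model of first-fit allocation causing fragmentation.

def write_file(disk, name, size):
    """Fill the first `size` free blocks with `name`, splitting across gaps."""
    placed = 0
    for i, block in enumerate(disk):
        if block is None:
            disk[i] = name
            placed += 1
            if placed == size:
                return

def delete_file(disk, name):
    """Deleting only marks blocks as free (None); nothing is erased."""
    for i, block in enumerate(disk):
        if block == name:
            disk[i] = None

disk = [None] * 8
write_file(disk, "A", 3)   # A A A . . . . .
write_file(disk, "B", 2)   # A A A B B . . .
delete_file(disk, "A")     # . . . B B . . .   (freed, not erased)
write_file(disk, "C", 5)   # C C C B B C C .   -> file C is now fragmented
print(disk)
```

File C ends up split around file B's blocks; defragmenting would shuffle the blocks so each file's data sits in one contiguous run.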

Read More >

What is RAM? A quick summary of what RAM is and how upgrading helps you

By Tom McDonald | Mar 30, 2011 2:05:00 PM

Most people have heard the term RAM used among computer users, although many don't know exactly what it is or does; they just know that having more of it is probably useful. RAM stands for Random Access Memory, and it exists to work around the slowness of reading data from spinning drives. Your computer reads and writes data on the hard drive, but because traditional hard drives are spinning mechanical devices, they are slow: so slow that the rest of the computer has to wait for the drive to finish before moving on to its next task. This creates a "bottleneck," where the slowest part of the computer forces everything else down to its pace. Because hard drives could only go so fast, RAM was introduced as a way around the problem. RAM temporarily stores files on a chip; these are the files the OS needs to access constantly or may need quick access to at any given moment. This split gives you one place to store files permanently (the hard drive) and another for the files you need fast access to (RAM). Those tend to be the programs that are currently open, which is why a program takes a long time to load the first time, but only a few seconds to get back to once it's loaded.

Read More >

5 ways a VDI (Virtualized Desktop Infrastructure) can improve IT for both users and admins

By Tom McDonald | Mar 28, 2011 3:14:00 PM

The benefits of virtualizing your desktop environment are numerous. In today's world, businesses' IT departments are growing by leaps and bounds, and the work needed to add, integrate, and maintain systems can push IT resources to their limits. Virtualization was traditionally used to reduce the number of servers needed to run IT, but as the software has become more advanced, the usefulness of a Virtualized Desktop Infrastructure (VDI) has become more apparent.

Read More >

Downtime not an option? Learn the basics of VMware's Fault Tolerance and what you will need to get up and running

By Tom McDonald | Mar 25, 2011 11:32:00 AM

Is a server crash not an option for your company? Is keeping your server up and running the life and soul of your business? Then you may want to consider VMware's Fault Tolerance (FT) feature. VMware Fault Tolerance is a step up from VMware High Availability (HA). High Availability is VMware's safeguard against a VM crash: if a server running a VM goes down, the VM reboots on a different host. This allows for only a minute or two of downtime while the Virtual Machine starts up on a new server and the crashed host is restarted, if possible. That is extremely useful and can keep a business functioning with only a moment of downtime. Fault Tolerance eliminates even that couple minutes of downtime, so that if a server crashes, nothing is felt by the user. This feature gives companies that can't stop functioning, even for a minute, the security they need to run their businesses.

How does FT work? With HA, there is a primary host that runs the VM and a secondary host that stands by in case of failure; if and when that failure occurs, the VM is restarted on the secondary host. Failure is detected using VMware's heartbeat function, which pings each host every second to ensure it is still active on the network; if a host stops responding, it is considered to have failed and its VMs are moved to a new machine. FT takes this a step further: instead of waiting for a host to fail and then restarting, it uses vLockstep to keep a primary and a secondary VM in sync, so that if one fails, the other continues running without the user ever noticing the server failure. By using shared virtualized storage, all the files are accessible to both hosts, and the primary host constantly updates the secondary in order to keep both hosts' RAM in sync. FT has a few requirements to ensure it works properly:

  • Hosts must be in an HA cluster
  • Primary and secondary VMs must run on different hosts
  • Anti-affinity must be enabled (a configuration that ensures the two VMs cannot be started on the same host)
  • The VMs must be stored on shared storage
  • A minimum of two Gigabit NICs, to allow vMotion and FT logging
  • Additional NICs for VM and management network traffic
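The once-a-second heartbeat check described above can be sketched as follows. This is a simplified illustration of the failure-detection concept, not VMware's actual implementation; the host names and the three-missed-beats threshold are assumptions for the example.

```python
# Simplified sketch of heartbeat-style failure detection.

HEARTBEAT_INTERVAL = 1.0   # hosts are pinged every second, as described above
MISSED_LIMIT = 3           # assumed number of missed beats before declaring failure

class HostMonitor:
    def __init__(self):
        self.last_seen = {}  # host name -> time of last heartbeat response

    def heartbeat(self, host, now):
        """Record that `host` responded to a ping at time `now` (seconds)."""
        self.last_seen[host] = now

    def failed_hosts(self, now):
        """Hosts that have gone silent for longer than the allowed window."""
        cutoff = HEARTBEAT_INTERVAL * MISSED_LIMIT
        return [h for h, t in self.last_seen.items() if now - t > cutoff]

monitor = HostMonitor()
monitor.heartbeat("esx-primary", now=0.0)
monitor.heartbeat("esx-secondary", now=0.0)
monitor.heartbeat("esx-secondary", now=4.0)   # primary has stopped responding
print(monitor.failed_hosts(now=5.0))          # ['esx-primary']
```

In HA, a host landing on that failed list triggers a VM restart elsewhere; FT avoids the restart entirely because the vLockstep secondary is already running.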
Read More >