vMotion over Distance with VPLEX

vMotion without VPLEX

  • Cannot perform vMotion directly, since the storage is not shared between sites
  • Must first perform a Storage vMotion to move the VM's disks

vMotion with VPLEX

  • Enables direct vMotion between data centers
  • Storage vMotion is no longer required
  • Replicate the data once, then move the VMs at will (see the sketch below)
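
The compute-only move can be scripted against vCenter. The sketch below is a minimal, assumed example using pyVmomi (the vSphere Python SDK), not something from the original post; the vCenter address, credentials, VM name, and destination host name are placeholders, and the only thing it relies on from VPLEX is that the VM's datastore is already visible at both sites.

```python
# Minimal sketch, assuming pyVmomi and a VMFS datastore that both sites already
# see through a VPLEX distributed device. All names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "app01")                        # placeholder VM
dest_host = find_by_name(vim.HostSystem, "esx-remote-01.example.com") # placeholder host

# Because the datastore is shared through VPLEX, the relocate spec only changes
# the host (and resource pool) -- no Storage vMotion leg is required.
spec = vim.vm.RelocateSpec(host=dest_host, pool=dest_host.parent.resourcePool)
task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.highPriority)
```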

Use Cases

  • Data Center Load Balancing
    • Optimize resources across several data centers
  • Disaster Avoidance and Data Center Maintenance
    • Evacuate a data center ahead of a probable disaster
    • Move applications to a remote data center to perform maintenance on the local data center
  • Zero-Downtime Data Center Moves
    • Move VMs and data to the new data center, then decommission the old data center

Three Basic Configurations

  • Common configurations
    • Maximum supported distance: 100 km (with 5 ms latency)
    • ESX hosts in both data centers share common IP subnets (stretched Layer 2 network)
    • ESX servers can participate in local HA and DRS-enabled clusters
  • A VMFS volume is built on a VPLEX distributed device
  • The VMFS volume is then shared between ESX servers in the two locations
  • Scenario 1 (distributed device)
    • Best practice
    • Provides continuous data protection and transparently protects against storage failures in either location
    • I/O continues on the biased cluster after a WAN link failure
    • I/O continues on the biased cluster after a non-biased site failure
    • I/O is suspended on the non-biased cluster after a biased site failure
  • Scenario 2 (VMFS volume built on a remote device)
    • Not highly available; only suitable for temporary use when a VM must move immediately
  • Scenario 3 (temporary distributed device)
    • Storage vMotion to a distributed device while in transit to the remote site
    • Then Storage vMotion back to local array storage at the remote site
    • Do this to regain array functionality that VPLEX might not provide (see the sketch below)
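
The two Storage vMotion legs in Scenario 3 can be driven the same way. The sketch below is an assumed continuation of the earlier pyVmomi example (it reuses the vm object and the find_by_name helper); the datastore names "vplex-temp-dd" and "remote-array-ds" are hypothetical stand-ins for the temporary distributed device and the remote array's native storage.

```python
# Minimal sketch of Scenario 3's two Storage vMotion legs, reusing 'vm' and
# find_by_name() from the earlier sketch. Datastore names are hypothetical.
from pyVmomi import vim

def storage_vmotion(vm, datastore):
    """Issue a Storage vMotion by relocating only the VM's datastore."""
    spec = vim.vm.RelocateSpec(datastore=datastore)
    return vm.RelocateVM_Task(spec)

# Leg 1: move the VM's disks onto the temporary distributed device, after which
# the compute vMotion to the remote site proceeds as in the first sketch.
leg1 = storage_vmotion(vm, find_by_name(vim.Datastore, "vplex-temp-dd"))

# Leg 2: once the VM runs at the remote site, move its disks off the distributed
# device onto the remote array to regain array-native functionality.
leg2 = storage_vmotion(vm, find_by_name(vim.Datastore, "remote-array-ds"))
```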

Failure Cases

  • An N+1 configuration handles director failures transparently
  • Any WAN or remote-cluster failure while a vMotion is in progress simply results in the vMotion being aborted (see the sketch below)
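
That abort surfaces in vCenter as a failed migration task while the VM keeps running at the source. Below is a minimal sketch of watching for it, again assuming the pyVmomi session and task handle from the first sketch.

```python
# Minimal sketch: poll the relocate task from the first sketch and report whether
# the vMotion completed or was rolled back (e.g. after a WAN failure mid-move).
import time
from pyVmomi import vim

def wait_for_task(task, poll_seconds=2):
    """Block until a vSphere task finishes, then report its outcome."""
    while task.info.state in (vim.TaskInfo.State.queued, vim.TaskInfo.State.running):
        time.sleep(poll_seconds)
    if task.info.state == vim.TaskInfo.State.error:
        # The migration was aborted and rolled back; the fault explains why.
        print("vMotion aborted:", task.info.error.msg)
    else:
        print("vMotion completed successfully")

wait_for_task(task)   # 'task' is the RelocateVM_Task handle from the first sketch
```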

Rule-set Best Practices

  • Manage your rule-sets very carefully
    • Be aware of which cluster will win in the event of a failure
  • Place related VMs on the same datastore so that they move together
  • For any given datastore, move all of its VMs at the same time (see the sketch below)
  • For the most critical applications, dedicate a datastore to the VM
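
One way to honor the "move everything on a datastore together" guideline is to enumerate a datastore's VMs and relocate them as a batch. The sketch below is an assumed example reusing the helpers from the earlier sketches; the datastore and host names are placeholders.

```python
# Minimal sketch: vMotion every VM that lives on one VPLEX-backed datastore so
# that related VMs change sites as a unit. Reuses find_by_name()/wait_for_task().
from pyVmomi import vim

datastore = find_by_name(vim.Datastore, "vplex-dd-datastore01")        # placeholder
dest_host = find_by_name(vim.HostSystem, "esx-remote-01.example.com")  # placeholder

tasks = []
for vm in datastore.vm:                      # every VM with files on this datastore
    spec = vim.vm.RelocateSpec(host=dest_host, pool=dest_host.parent.resourcePool)
    tasks.append(vm.RelocateVM_Task(spec))

for t in tasks:
    wait_for_task(t)
```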