VMware vSphere Resource Utilization
What is vMotion?
vMotion is a feature that allows running VMs to be migrated from one physical ESXi host to another physical ESXi host with no downtime to end users.
How does VMware vSphere help balance the utilization of resources?
vMotion:- vMotion, which is generically known as live migration, is used to manually balance resource utilization between two or more ESXi hosts.
Storage vMotion:- Storage vMotion is the storage equivalent of
vMotion, and it is used to manually balance storage utilization
between two or more datastores.
vSphere Distributed Resource
Scheduler (DRS):- vSphere Distributed Resource Scheduler (DRS) is used to automatically balance
resource utilization among two or more ESXi hosts.
Storage DRS:- Storage DRS is the storage equivalent of DRS,
and it is used to automatically
balance storage utilization among two or more datastores.
What are the configuration requirements for a successful vMotion?
ESXi host requirements for vMotion:
- Shared storage:- Shared storage for the VM files (a VMFS or NFS datastore).
- Dedicated VMkernel port for vMotion:- A Gigabit Ethernet or faster network interface card (NIC) with a VMkernel port defined and enabled for vMotion on each ESXi host.
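These requirements can also be checked from a script. Below is a minimal sketch using the open-source pyVmomi Python SDK (not part of this post); the vCenter address, credentials, and host name are placeholders, and the later sketches in this post reuse the same connection and find_by_name() helper.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(content, vimtype, name):
    # Return the first inventory object of the given managed object type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next((o for o in view.view if o.name == name), None)
    view.DestroyView()
    return obj

# List the VMkernel adapters that are candidates for vMotion on one host,
# and the keys of the adapters currently selected (enabled) for vMotion.
host = find_by_name(content, vim.HostSystem, "esxi-01.example.com")
netcfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
print("vMotion-enabled VMkernel NIC keys:", netcfg.selectedVnic)
for vnic in netcfg.candidateVnic:
    print(vnic.device, vnic.spec.ip.ipAddress)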
Describe briefly how vMotion works:
1. Migration initiated
2. Active memory pages of source VM precopied
3. Source ESXi host is 'quiesced' for memory bitmap copy
4. Contents of memory bitmap addresses copied
5. VM starts on target host
6. RARP message is sent
7. Source host memory is deleted
1. Migration initiated:- An administrator initiates a migration of a
running VM (VM1) from one ESXi host (HOST-1) to another ESXi host (HOST-2).
2. Active memory pages of source VM precopied:- As the active memory pages are copied from the source host to the target host, some pages may change because the VM is still servicing requests. ESXi handles this by keeping a log of changes that occur in the memory of the VM on the source host. This log is called a memory bitmap.
3. Source ESXi host is 'quiesced' for memory bitmap copy:- VM1 on the source ESXi host (HOST-1) is quiesced. This means that it is still in memory but is no longer servicing client requests for data. The memory bitmap file is then transferred to the target (HOST-2). Because the source VM is quiesced, its memory no longer changes.
Remember - The Memory Bitmap
[The memory bitmap does not include the contents of the memory addresses that have changed; it simply includes the addresses of the memory that has changed — often referred to as the dirty memory.]
4. Contents of memory bitmap addresses copied:- The target host (HOST-2) reads the addresses in the memory bitmap file and requests the contents of those addresses from the source (HOST-1).
5. VM starts on target host:- After the copy completes, the VM starts on the target host. Note that this is not a reboot — the VM’s state is in RAM, so the host simply enables it.
6. RARP message is sent:- A Reverse Address Resolution Protocol (RARP) message is sent by the target host to register the VM's MAC address on the physical switch port to which it is connected. The physical switch then sends the VM's network packets to the target ESXi host instead of the source.
7. Source host memory is deleted:- Once the migration completes successfully, the memory of VM1 on the source ESXi host (HOST-1) is deleted.
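As a rough illustration of step 1, the following pyVmomi sketch initiates a host-only migration of a running VM; the VM and host names are the placeholder names used above, and the connection and find_by_name() helper come from the first sketch.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

vm = find_by_name(content, vim.VirtualMachine, "VM1")
target_host = find_by_name(content, vim.HostSystem, "HOST-2")

# Move the running VM to another host; no datastore is specified,
# so this is a vMotion rather than a Storage vMotion.
task = vm.MigrateVM_Task(host=target_host,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)
WaitForTask(task)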
What points should you keep in mind for a successful vMotion?
Networking
- Identical VMkernel ports, virtual switches, or the same Distributed Switch.
- Identical port groups and the same subnet or VLAN.
CPU
- Processor compatibility:
  - Same CPU vendor (Intel or AMD).
  - Same CPU family (Xeon 55xx, Xeon 56xx, or Opteron).
  - Same CPU features, such as the presence of SSE2, SSE3, and SSE4, and NX or XD.
  - Virtualization enabled: for 64-bit VMs, CPUs must have virtualization technology enabled (Intel VT or AMD-V).
Host and VM
- No device available to only one host:- The VM must not be connected to any device physically available to only one ESXi host. This includes disk storage, CD/DVD drives, floppy drives, serial ports, or parallel ports.
- No connection to an internal-only vSwitch.
- No CPU affinity rule.
- Shared storage for the hosts.
Does vMotion provide high-availability features?
vMotion can prevent planned downtime by live migrating VMs, but it is not a high-availability feature, so it cannot protect against an unplanned host failure.
What is virtual machine CPU masking?
The ability to create custom CPU masks for CPU features on a per-VM basis. It is also important to note that, with one exception, using custom CPU masks to ease vMotion compatibility is completely unsupported by VMware.
What is the one exception? On a per-VM basis, you can show or mask the No Execute/Execute Disable (NX/XD) bit in the host CPU. This specific instance of CPU masking to ease vMotion compatibility is fully supported by VMware.
What is the NX/XD bit?
AMD’s No Execute (NX) and Intel’s Execute Disable (XD) are processor features that mark memory pages as data only, which prevents a virus from running executable code at those memory addresses. Windows Server 2003 SP1 and Windows XP SP2 support this CPU feature.
A certain vendor has just released a series of patches for some of the guest OSes in your virtualized infrastructure. You request an outage window from your supervisor, but your supervisor says, “Just use vMotion to prevent downtime.” Is your supervisor correct? Why or why not?
Your supervisor is incorrect. vMotion can be used to move running VMs from one physical host to another, but it does not address outages within a guest OS caused by reboots or other malfunctions.
Is vMotion a solution to prevent unplanned downtime?
No. vMotion is a solution to address planned downtime of ESXi hosts by manually moving live VMs to other hosts.
What is EVC?
EVC:- vMotion requires compatible CPU families on both the source and destination ESXi hosts. To help with this, vSphere offers Enhanced vMotion Compatibility (EVC), which can mask differences between CPU families in order to maintain vMotion compatibility.
Can you change the EVC level for a cluster while there are VMs running on hosts in the cluster?
No, you cannot change the EVC level while VMs are running on hosts in the cluster. A new EVC level means new CPU masks must be calculated and applied, and CPU masks can be applied only when VMs are powered off.
Describe in detail what VMware Enhanced vMotion Compatibility (EVC) is.
Recognizing that potential
processor compatibility issues with vMotion could be a significant problem,
VMware worked closely with both Intel and AMD to craft functionality that would
address this issue. On the hardware side, Intel and AMD put functions in their
CPUs that would allow them to modify the CPU ID value returned by the CPUs.
Intel calls this functionality FlexMigration; AMD simply embedded this
functionality into their existing AMD-V virtualization extensions. On the
software side, VMware created software features that would take advantage of
this hardware functionality to create a common CPU ID baseline for all the
servers within a cluster. This functionality, originally introduced in VMware
ESX/ESXi 3.5 Update 2, is called VMware Enhanced vMotion Compatibility.
vCenter Server performs some
validation checks to ensure that the physical hardware included in the cluster
is capable of supporting the selected EVC mode and processor baseline. If you
select a setting that the hardware cannot support, the Change EVC Mode dialog
box will reflect the incompatibility.
When you enable EVC and set
the processor baseline, vCenter Server then calculates the correct CPU masks
that are required and communicates that information to the ESXi hosts. The ESXi
hypervisor then works with the underlying Intel or AMD processors to create the
correct CPU ID values that would match the correct CPU mask. When vCenter
Server validates vMotion compatibility by checking CPU compatibility, the
underlying CPUs will return compatible CPU masks and CPU ID values. However,
vCenter Server and ESXi cannot set CPU masks for VMs that are currently powered
on.
When setting the EVC mode for a cluster, keep in mind that some CPU-specific features — such as newer multimedia extensions or encryption instructions — could be disabled when vCenter Server and ESXi disable them via EVC. VMs that rely on these advanced extensions might be affected by EVC, so be sure that your workloads won’t be adversely affected before setting the cluster’s EVC mode.
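As a hedged illustration, the sketch below drives EVC through the cluster's EVC manager with pyVmomi; the cluster name and the EVC mode key are placeholders, and the connection helper from the first sketch is assumed.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
evc_manager = cluster.EvcManager()

# Print the EVC baselines the hosts currently in the cluster can accept.
for mode in evc_manager.evcState.supportedEVCMode:
    print(mode.key)

# Apply one of the supported baselines (the key below is only an example).
WaitForTask(evc_manager.ConfigureEvcMode_Task("intel-sandybridge"))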
What is vSphere DRS?
Groups of ESXi hosts are called clusters. vSphere Distributed Resource Scheduler enables vCenter Server to automate the process of conducting vMotion migrations of VMs to help balance the load across ESXi hosts within a cluster. DRS can be as automated as desired, and vCenter Server has flexible controls over the behavior of DRS as well as the behavior of specific VMs within a DRS-enabled cluster.
What are the functions of DRS?
DRS has the following two main functions:
- Intelligent placement:- To decide which node of a cluster should run a VM when it is powered on, a function often referred to as intelligent placement.
- Recommendation or automation:- To evaluate the load on the cluster over time and either make recommendations for migrations or use vMotion to automatically move VMs to create a more balanced cluster workload.
How does DRS work?
- vSphere DRS runs as a process within vCenter Server, which means that you must have vCenter Server in order to use vSphere DRS.
- By default, DRS checks every five minutes (or 300 seconds) to see if the cluster’s workload is balanced.
- DRS is also invoked by certain actions within the cluster, such as adding or removing an ESXi host or changing the resource settings of a VM (such as reservations, shares, and limits).
- When DRS is invoked, it will calculate the imbalance of the cluster, apply any resource controls (such as reservations, shares, and limits), and, if necessary, generate recommendations for migrations of VMs within the cluster.
- Depending on the configuration of vSphere DRS, these recommendations could be applied automatically, meaning that VMs will automatically be migrated between hosts by DRS in order to maintain cluster balance (or, put another way, to minimize cluster imbalance).
- Fortunately, you retain control over how aggressively DRS will automatically move VMs around the cluster.
What are the DRS automation levels?
(i) Manual, (ii) Partially Automated, (iii) Fully Automated.
Manual:- Initial VM placement and VM migrations are both manual.
Partially Automated:- Initial VM placement is automated, but VM migrations are still manual.
Fully Automated:- Initial VM placement and VM migrations are both automated.
DRS makes automatic vMotion decisions based on the selected automation level and migration threshold (the slider bar). There are five positions for the slider bar on the Fully Automated setting of the DRS cluster. The values of the slider bar range from Conservative to Aggressive and determine which of the five priority levels of recommendations are applied automatically.
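A minimal pyVmomi sketch of enabling DRS on a cluster and setting the default automation level is shown below; the cluster name is a placeholder, and the migration threshold comment is deliberately generic because the mapping between the API value and the UI slider is described in the vSphere API reference. The connection helper from the first sketch is assumed.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")

drs_config = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3,  # migration threshold (1-5); see the vSphere API docs for how it maps to the slider
)
spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))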
You want to take advantage of vSphere DRS to provide some load balancing of virtual
workloads (VMs) within your environment. However, because of business
constraints, you have a few workloads that should not be automatically moved to
other hosts using vMotion. Can you use DRS? If so, how can you prevent these
specific workloads from being affected by DRS?
Yes, you can use DRS. Enable
DRS on the cluster, and set the DRS automation level appropriately. For those
VMs that should not be automatically migrated by DRS, configure the DRS
automation level on a per-VM basis to Manual. This will allow DRS to make recommendations on migrations
for these specific workloads (VMs) but will not actually perform the
migrations.
What is maintenance mode?
Maintenance mode is a setting on an ESXi host that prevents the ESXi host from performing any VM-related functions. VMs currently running on an ESXi host being put into maintenance mode must be shut down or moved to another host before the ESXi host will actually enter maintenance mode. This means that an ESXi host in a DRS-enabled cluster will automatically generate priority 1 recommendations to migrate all VMs to other hosts within the cluster.
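Entering maintenance mode can also be scripted. A small pyVmomi sketch follows; the host name is a placeholder and the connection helper from the first sketch is assumed.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

host = find_by_name(content, vim.HostSystem, "esxi-01.example.com")

# In a Fully Automated DRS cluster the running VMs are migrated off automatically;
# otherwise this task waits until the VMs have been moved or shut down.
# timeout=0 means the request does not time out.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))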
What are Distributed Resource Scheduler (DRS) rules (affinity rules)?
vSphere DRS supports three types of DRS rules:
[1] VM affinity rules - “Keep Virtual Machines Together” - Keeps VMs together on the same ESXi host.
[2] VM anti-affinity rules - “Separate Virtual Machines” - Keeps VMs separate on different ESXi hosts.
[3] Host affinity rules - “Virtual Machines To Hosts” - Keeps a VM DRS group and a host DRS group together or separate.
The host affinity rule brings
together a VM DRS group and a host DRS group according to preferred rule
behavior. There are four host affinity rule behaviors:
- Must Run On Hosts In Group
- Should Run On Hosts In Group
- Must Not Run On Hosts In Group
- Should Not Run On Hosts In Group
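For example, a VM anti-affinity (“Separate Virtual Machines”) rule could be created through the API roughly as follows; the cluster and VM names are placeholders, and the connection helper from the first sketch is assumed.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
vm_a = find_by_name(content, vim.VirtualMachine, "web-01")
vm_b = find_by_name(content, vim.VirtualMachine, "web-02")

# "Separate Virtual Machines" rule: keep the two VMs on different ESXi hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="separate-web-nodes",
                                        enabled=True,
                                        vm=[vm_a, vm_b])
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))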
What are per-VM Distributed Resource Scheduler settings?
The administrator can
then selectively choose VMs that are not going to be acted on by DRS in the
same way as the rest of the VMs in the cluster. However, the VMs should remain
in the cluster to take advantage of high-availability features provided by
vSphere HA.
The per-VM automation levels available include the following:
- Manual (manual intelligent placement and vMotion)
- Partially Automated (automatic intelligent placement, manual vMotion)
- Fully Automated (automatic intelligent placement and vMotion)
- Default (inherited from the cluster setting)
- Disabled (vMotion disabled)
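A per-VM override of this kind can be pushed into the cluster configuration as well. A rough pyVmomi sketch, with placeholder cluster and VM names and the connection helper from the first sketch assumed:

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
vm = find_by_name(content, vim.VirtualMachine, "db-01")

# Override the cluster default for this one VM: DRS will only make manual
# recommendations for it instead of migrating it automatically.
override = vim.cluster.DrsVmConfigInfo(
    key=vm,
    enabled=True,
    behavior=vim.cluster.DrsConfigInfo.DrsBehavior.manual,
)
spec = vim.cluster.ConfigSpecEx(
    drsVmConfigSpec=[vim.cluster.DrsVmConfigSpec(operation="add", info=override)]
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))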
Storage vMotion:
What is Storage vMotion?
vMotion and Storage vMotion are like two sides of the same coin. Storage vMotion migrates a running VM’s virtual disks from one datastore to another datastore but leaves the VM executing — and therefore using CPU and memory resources — on the same ESXi host.
This allows you to manually balance the “load” or storage utilization of a
datastore by shifting a VM’s storage from one datastore to another. Like
vMotion, Storage vMotion is a live migration.
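Through the API, a Storage vMotion is simply a relocate call that specifies only a new datastore, so the VM stays on its current host. A minimal pyVmomi sketch, with placeholder names and the connection helper from the first sketch assumed:

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

vm = find_by_name(content, vim.VirtualMachine, "VM1")
target_ds = find_by_name(content, vim.Datastore, "datastore-02")

# Only a datastore is specified, so the VM's files move to the new datastore
# while the VM keeps executing on its current ESXi host.
WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds)))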
How does Storage vMotion work?
1. Nonvolatile files copied
2. Ghost or shadow VM created on destination datastore
3. Destination disk and mirror driver created
4. Single-pass copy of the virtual disk(s)
5. vSphere quickly suspends and resumes the VM to transfer control to the ghost VM
6. Source datastore files deleted
1. Nonvolatile files copied:- First, vSphere copies over the nonvolatile files that make up a VM, such as the configuration file (.VMX), the VMkernel swap file (.VSWP), log files, and snapshots.
2. Ghost or shadow VM created on
destination datastore:- Next, vSphere starts a ghost or shadow VM on
the destination datastore using the nonvolatile files copied. Because this
ghost VM does not yet have a virtual disk (that hasn’t been copied over yet),
it sits idle waiting for its virtual disk.
3. Destination disk and
mirror driver created:- Storage vMotion first creates the destination disk. Then a mirror
device — a new driver that mirrors I/Os between the source and destination disk
— is inserted into the data path between the VM and the underlying storage.
4. Single-pass copy of
the virtual disk(s):- With
the I/O mirroring driver in place, vSphere makes a single-pass copy of the
virtual disk(s) from the source to the destination. As changes are made to the
source, the I/O mirror driver ensures those changes are also reflected at the
destination.
5. vSphere quickly suspends and resumes the VM to transfer control to the ghost VM:- When the virtual disk copy is complete, vSphere quickly suspends and resumes the VM in order to transfer control over to the ghost VM created on the destination datastore earlier. This generally happens so quickly that there is no disruption of service, just as with vMotion.
6. Source datastore files are
deleted:- The files on the source datastore are deleted. It’s important
to note that the original files aren’t deleted until it’s confirmed that the
migration was successful; this allows vSphere to simply fall back to its
original location if an error occurs. This helps prevent data loss situations
or VM outages because of an error during the Storage vMotion process.
What should we remember when using Storage vMotion with Raw Device Mappings (RDMs)?
There are two types of Raw Device Mappings (RDMs): physical mode RDMs and virtual mode RDMs. A virtual mode RDM uses a VMDK mapping file to give raw LUN access. Be careful when using Storage vMotion with virtual mode RDMs.
If you want to migrate only
the VMDK mapping file, be sure to select “Same Format As Source” for the
virtual disk format. If you select a different format, virtual mode RDMs will
be converted into VMDKs as part of the Storage vMotion operation (physical mode
RDMs are not affected). Once an RDM has been converted into a VMDK, it cannot
be converted back into an RDM again.
Storage DRS:
What is Storage DRS?
- Storage DRS is a feature that is new to vSphere 5.
- Storage DRS brings automation to the process of balancing storage capacity and I/O utilization.
- Storage DRS uses datastore clusters and can operate in manual or Fully Automated mode.
- Numerous customizations exist — such as custom schedules, VM and VMDK anti-affinity rules, and threshold settings.
- SDRS can perform this automated storage balancing not only on the basis of space utilization but also on the basis of I/O load balancing.
Similarities between DRS and SDRS:
- Just as vSphere DRS uses clusters as a collection of hosts on which to act, SDRS uses datastore clusters as collections of datastores on which it acts.
- Just as vSphere DRS can perform both initial placement and manual or automatic balancing, SDRS also performs initial placement of VMDKs and manual or automatic balancing of VMDKs.
- Just as vSphere DRS offers affinity and anti-affinity rules to influence recommendations, SDRS offers VMDK affinity and anti-affinity functionality.
Guidelines for datastore clusters:
VMware provides the following
guidelines for datastores that are combined into datastore clusters:
(A) No NFS and VMFS combination
(B) No replicated and nonreplicated datastore combination
(C) No ESX/ESXi 4.x and earlier host connection with an ESXi 5 datastore
(D) No datastores shared across multiple datacenters
(E) Datastores of different sizes, I/O capacities, and vendors are supported
- No NFS and VMFS combination:- Datastores of different sizes and I/O capacities can be combined in a datastore cluster. Additionally, datastores from different arrays and vendors can be combined into a datastore cluster. However, you cannot combine NFS and VMFS datastores in a datastore cluster.
- No replicated and nonreplicated datastore combination:-
- No ESX/ESXi 4.x and earlier host connection with an ESXi 5 datastore:- All hosts attached to a datastore in a datastore cluster must be running ESXi 5 or later. ESX/ESXi 4.x and earlier hosts cannot be connected to a datastore that you want to add to a datastore cluster.
- No datastores shared across multiple datacenters:-
What is the relationship between Storage I/O Control and Storage DRS latency thresholds?
You’ll note that the default
I/O latency threshold for SDRS (15 ms) is well below the default for SIOC (30
ms). The idea behind these default settings is that SDRS can make a migration
to balance the load (if fully automated) before throttling becomes necessary.
What happens if you put an SDRS datastore into maintenance mode?
When you place an SDRS datastore into maintenance mode, migration recommendations are generated for registered VMs. However, SDRS datastore maintenance mode will not affect templates, unregistered VMs, or ISOs stored on that datastore.
What are the Storage DRS automation levels?
SDRS offers two predefined
automation levels, No Automation (Manual Mode) and Fully Automated.
No Automation (Manual Mode):- Provides recommendations for initial placement as well as recommendations for storage migrations based on the configured space and I/O thresholds.
Fully Automated Mode:- Automatic initial placement as well as storage
migrations based on the configured space and I/O thresholds.
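Enabling Storage DRS on a datastore cluster and selecting one of these automation levels can be sketched with pyVmomi as follows; the datastore cluster name is a placeholder, the 15 ms value simply mirrors the default latency threshold mentioned above, and the connection helper from the first sketch is assumed.

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

pod = find_by_name(content, vim.StoragePod, "DatastoreCluster-01")

pod_config = vim.storageDrs.PodConfigSpec(
    enabled=True,
    defaultVmBehavior="automated",   # use "manual" for No Automation (Manual Mode)
    ioLoadBalanceEnabled=True,
    ioLoadBalanceConfig=vim.storageDrs.IoLoadBalanceConfig(ioLatencyThreshold=15),  # ms
)
spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_config)
WaitForTask(content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True))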
What is a Storage DRS schedule?
Custom schedules enable vSphere administrators to specify times when the SDRS behavior should be different. For example, in your office SDRS might run in automatic mode, but there are specific times when SDRS should run in No Automation (Manual Mode). You can also set times (such as at night) when the space utilization or I/O latency thresholds should be different.
What are Storage DRS rules?
(1) VMDK anti-affinity rules, (2) VM anti-affinity rules, (3) Keep VMDKs Together.
Just as vSphere DRS has
affinity and anti-affinity rules, SDRS offers vSphere administrators the
ability to create VMDK anti-affinity and VM anti-affinity rules. These rules
modify the behavior of SDRS to ensure that specific VMDKs are always kept
separate (VMDK anti-affinity rule) or that all the virtual disks from certain
VMs are kept separate (VM anti-affinity rule).
Administrators can use
anti-affinity rules to keep VMs or VMDKs on separate datastores, but as you’ve
already seen, there is no way to create affinity rules. Instead of requiring
you to create affinity rules to keep the virtual disks for a VM together,
vSphere offers a simple check box in the Virtual Machine Settings area of the
datastore cluster properties.
To configure Storage DRS to
keep all disks for a VM together, check the boxes in the Keep VMDKs Together
column.
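The same “Keep VMDKs Together” behavior can be set per VM through the Storage DRS VM configuration. A rough pyVmomi sketch, with placeholder names and the connection helper from the first sketch assumed; the property names follow the vSphere API and should be verified against your SDK version:

from pyVim.task import WaitForTask
from pyVmomi import vim
# Assumes the vCenter connection (`content`) and find_by_name() helper from the first sketch above.

pod = find_by_name(content, vim.StoragePod, "DatastoreCluster-01")
vm = find_by_name(content, vim.VirtualMachine, "app-01")

# intraVmAffinity=True keeps all of this VM's virtual disks on the same datastore
# (the "Keep VMDKs Together" check box).
vm_config = vim.storageDrs.VmConfigInfo(vm=vm, intraVmAffinity=True)
spec = vim.storageDrs.ConfigSpec(
    vmConfigSpec=[vim.storageDrs.VmConfigSpec(operation="add", info=vm_config)]
)
WaitForTask(content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True))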
Name the two ways in which an administrator is notified that a Storage DRS recommendation has been generated.
First, an alarm is generated to note that an SDRS recommendation is present. You can view this alarm on the "Alarms" tab of the datastore cluster in the "Datastores And Datastore Clusters" inventory view.
Second, the "Storage DRS" tab of the datastore cluster (visible in the "Datastores And Datastore Clusters" inventory view) will list the current SDRS recommendations and give you the option to apply those recommendations.
What is a potential disadvantage of using drag and drop to add a datastore to a datastore cluster?
You can use drag and drop to add a datastore to an existing datastore cluster as well. Please note that drag and drop will not warn you that you are adding a datastore that does not have connections to all the hosts that are currently connected to the datastore cluster. As a result, when SDRS migrates a VM's disks, some hosts may find that a particular datastore is unreachable. To avoid this situation you should always use the "Add Storage" dialog box.
[When using drag and drop to
add a datastore to a datastore cluster, the user is not notified if the
datastore isn’t accessible to all the hosts that are currently connected to the
datastore cluster. This introduces the possibility that one or more ESXi hosts
could be “stranded” from a VM’s virtual disks if Storage DRS migrates them onto
a datastore that is not accessible from that host.]
A fellow administrator is trying to migrate a VM to a different datastore and a different host, but the option is disabled (grayed out). Why?
Storage vMotion, like
vMotion, can operate while a VM is running. However, in order to migrate a VM
to both a new datastore and a new host, the VM must be powered off. VMs that
are powered on can only be migrated using Storage vMotion or vMotion, but not
both.
Name two features of Storage vMotion that would help administrators cope with storage-related changes in their vSphere environment.
- Old to new storage array:- Storage vMotion can be used to facilitate no-downtime storage migrations from one type of storage array to a new array or a new type of storage array.
- Between different types of storage:- Storage vMotion can also migrate between different types of storage (FC to NFS, iSCSI to FC or FCoE).
- Between different VMDK formats:- Storage vMotion can migrate between different VMDK formats (thick, thin).