VMware vSphere Resource Allocation
Describe Reservation, Limit and Share-
Reservations:- Reservations act as guarantees of a particular resource.
Limits:- Limits are, quite simply, a way to restrict the amount of a given resource that a VM can use. Limits enforce an upper ceiling on the usage of a resource.
Shares:- Shares serve to establish priority. Shares apply only during periods of host resource contention and serve to establish prioritized access to host resources.
Reservation- guaranteed resource; Limit- upper limit of a given resource; Share- prioritized resource access in times of contention. (A configuration sketch follows below.)
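As a concrete illustration, here is a minimal pyVmomi (Python) sketch that sets all three controls on a VM's memory. The vCenter address, credentials, and the VM name 'app01' are hypothetical placeholders, and pyVmomi is assumed to be installed:

```python
# Minimal pyVmomi sketch: apply a memory reservation, limit, and shares
# to one VM. Host, credentials, and the VM name 'app01' are hypothetical
# placeholders; error handling is omitted for brevity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certs in production
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.DestroyView()

spec = vim.vm.ConfigSpec()
spec.memoryAllocation = vim.ResourceAllocationInfo(
    reservation=2048,                               # guaranteed MB
    limit=4096,                                     # upper ceiling in MB
    shares=vim.SharesInfo(level='high', shares=0))  # priority under contention
vm.ReconfigVM_Task(spec)                            # returns a Task to monitor

Disconnect(si)
```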
What are the advanced memory management techniques used by vSphere-
i) Transparent Page Sharing (TPS) ii) Ballooning iii) Swapping iv) Memory compression
What
is Transparent Page Sharing (TPS)- identical memory pages
are shared among VMs to reduce the total number of memory pages needed. The
hypervisor computes hashes of the contents of memory pages to identify pages
that contain identical memory.
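A toy Python sketch of the idea (not VMware's actual implementation): hash each page's contents, and back identical pages with a single shared copy.

```python
# Toy illustration of the TPS idea (not VMware's real implementation):
# hash each page's contents; identical hashes point at one shared copy.
import hashlib

PAGE_SIZE = 4096

def share_pages(pages):
    shared = {}     # content hash -> single backing copy
    refs = []       # each guest page becomes a reference into 'shared'
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        # A real hypervisor does a full byte-by-byte compare on a hash
        # match before sharing, to rule out collisions.
        shared.setdefault(digest, page)
        refs.append(digest)
    return shared, refs

zero_page = bytes(PAGE_SIZE)
app_page = b'A' * PAGE_SIZE
shared, refs = share_pages([zero_page, app_page, zero_page, zero_page])
print(f'{len(refs)} guest pages backed by {len(shared)} physical copies')  # 4 -> 2
```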
What is memory ballooning-
• 'Ballooning' involves the use of a driver, called the balloon driver, installed into the guest OS.
• This driver is part of the VMware Tools installation.
• The balloon driver can respond to commands from the hypervisor to reclaim memory from that particular guest OS.
• The balloon driver requests memory from the guest OS, a process called 'inflating', and then passes that memory back to the hypervisor for use by other VMs.
• If the guest OS can give up pages it is no longer using, then there may not be any performance degradation for the applications running inside that guest OS.
• If the guest OS is already under memory pressure, then 'inflating' the balloon will invoke guest OS paging (guest swapping), which will degrade performance.
Describe briefly how the balloon driver works-
• The balloon driver is part of VMware Tools.
• As such, it is a guest OS-specific driver, meaning that Linux VMs have a Linux-based balloon driver, Windows VMs have a Windows-based balloon driver, and so forth.
• Regardless of the guest OS, the balloon driver works in the same fashion. When the ESXi host is running low on physical memory, the hypervisor will signal the balloon driver to grow. To do this, the balloon driver will request memory from the guest OS. This causes the balloon driver's memory footprint to grow, or inflate. The memory granted to the balloon driver is then passed back to the hypervisor. The hypervisor can use these memory pages to supply memory for other VMs, reducing the need to swap and minimizing the performance impact of the memory constraints.
• When the memory pressure on the ESXi host passes, the balloon driver will deflate, or return memory to the guest OS.
The key advantage that
ESXi gains from using a guest-OS-specific balloon driver in this fashion is
that it allows the guest OS to make the decision about which pages can be given
to the balloon driver process (and thus released to the hypervisor). In some
cases, the inflation of the balloon driver can release memory back to the
hypervisor without any degradation of VM performance because the guest OS is
able to give the balloon driver unused or idle pages.
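The inflate behavior described above can be modeled with a small, purely illustrative Python sketch; the page counts and the idle/active split are made up:

```python
# Toy model of balloon inflation (conceptual only): the hypervisor asks the
# balloon driver for N pages; the guest hands over idle pages first and
# resorts to guest-OS paging only once idle pages run out.
class Guest:
    def __init__(self, idle_pages, active_pages):
        self.idle = idle_pages
        self.active = active_pages
        self.ballooned = 0

    def inflate(self, target_pages):
        from_idle = min(self.idle, target_pages)
        self.idle -= from_idle
        # Anything beyond the idle pool forces the guest to page out
        # active memory, which is where performance suffers.
        paged_out = target_pages - from_idle
        self.active -= paged_out
        self.ballooned += target_pages
        return from_idle, paged_out

g = Guest(idle_pages=800, active_pages=1200)
print(g.inflate(500))   # (500, 0)   -> served from idle pages, no impact
print(g.inflate(600))   # (300, 300) -> idle pool exhausted, guest paging begins
```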
What is hypervisor swapping?
• Guest OS swapping falls strictly under the control of the guest OS and is not controlled by the hypervisor.
• If none of the previously described technologies trim guest OS memory usage enough, the ESXi host will swap memory pages out to disk in order to reclaim memory that is needed elsewhere.
• ESXi's swapping takes place without any regard to the guest OS. As a result, guest OS performance is severely impacted if hypervisor swapping is invoked.
What is memory compression-
When an ESXi host gets to the point that hypervisor swapping is necessary, the VMkernel will attempt to compress memory pages and keep them in RAM in a compressed memory cache. Pages that can be compressed by at least 50 percent are put into the compressed memory cache instead of being written to disk, and can then be recovered much more quickly if the guest OS needs that memory page. Memory compression is invoked only when the ESXi host reaches the point where swapping is needed.
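A conceptual Python sketch of that 50 percent decision rule, using zlib purely for illustration (VMware's actual compressor and cache management are not modeled here):

```python
# Conceptual sketch of the compression decision (not VMware's actual
# compressor): a page enters the compressed memory cache only if it
# shrinks to 50 percent or less of its original size; otherwise it
# would have to be swapped to disk.
import os
import zlib

PAGE_SIZE = 4096

def place_page(page: bytes) -> str:
    if len(zlib.compress(page)) <= PAGE_SIZE // 2:
        return 'compressed memory cache'   # stays in RAM, quick to recover
    return 'swap to disk'                  # not worth caching

print(place_page(bytes(PAGE_SIZE)))        # zero-filled page -> cache
print(place_page(os.urandom(PAGE_SIZE)))   # random page -> disk
```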
From where does a VM without a reservation get its memory- From host physical RAM when available, otherwise from the VMkernel swap file (see below).
What is VMkernel swap-
When an ESXi host doesn't have enough RAM available to satisfy the memory needs of the VMs it is hosting, and when other technologies such as transparent page sharing, the balloon driver, and memory compression aren't enough, the VMkernel is forced to page some of each VM's memory out to that individual VM's VMkernel swap file.
VMkernel swap:-
VMkernel swap is actually the hypervisor swapping mechanism. VMkernel swap is implemented as a file with a .vswp extension that is created when a VM is powered on. These per-VM swap files created by the VMkernel reside, by default, in the same datastore location as the VM's configuration file (.vmx) and virtual disk files (.vmdk) (although you do have the option of relocating the VMkernel swap).
In the absence of a memory reservation (the default configuration), this file will be equal in size to the amount of RAM configured for the VM. Thus, a VM configured for 4 GB of RAM will have a VMkernel swap file that is also 4 GB in size and stored, by default, in the same location as the VM's configuration and virtual disk files.
In theory, this means a VM could get its memory allocation entirely from the hypervisor's physical memory or entirely from VMkernel swap, i.e., from disk. If VMkernel swap is used, some performance degradation for the VM is unavoidable, because disk access times are several orders of magnitude slower than RAM access times.
Configured memory − memory reservation = expected size of swap file (.vswp)
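A quick worked example of that rule in Python:

```python
# Worked example of the .vswp sizing rule above.
def vswp_size_gb(configured_gb: float, reservation_gb: float) -> float:
    """Expected VMkernel swap file size = configured memory - reservation."""
    return configured_gb - reservation_gb

print(vswp_size_gb(4, 0))   # 4.0 GB -> no reservation (the default)
print(vswp_size_gb(8, 2))   # 6.0 GB -> 2 GB reserved, 6 GB swap-backed
```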
Do "Transparent
Page Sharing (TPS)" works for ‘Reserved Memory’- Yes
What about "Memory Ballooning"-No
Reserved memory can be
shared via transparent page sharing (TPS). Transparent page sharing does not
affect the availability of reserved memory because the shared page is still
accessible to the VM.
What is a Limit? What is its impact on the guest OS-
It sets the actual limit, or upper ceiling, on how much of the allocated physical resource may be utilized by that VM.
The key problem with the use of memory limits is that they are enforced by the VMkernel without any guest OS or VM awareness. The guest OS will continue to behave as if it can use its full allocated RAM, completely unaware of the limit that has been placed on it by the hypervisor. Setting a memory limit can therefore have a significant impact on the performance of the VM, because the guest OS will constantly be forced to swap pages to disk (guest OS swapping, not hypervisor swapping).
Why use a memory limit-
Use memory limits as a temporary
measure to reduce physical
memory usage in your ESXi hosts.
Perhaps you need to perform troubleshooting on an ESXi host and you want to temporarily push down memory usage on less-important VMs so that you don't overcommit memory too heavily and negatively impact lots of VMs. Limits would help in this situation.
In general, then, you
should consider memory limits as a temporary stop-gap measure when you need to reduce physical
memory usage on an ESXi host and a negative impact to performance is
acceptable.
CPU Utilization
Besides shares, reservations, and limits, what is the fourth option available for managing CPU utilization?
CPU affinity. CPU affinity allows an administrator to statically associate a VM with a specific physical CPU core. CPU affinity is generally not recommended; it has a list of rather significant drawbacks:
• CPU affinity breaks vMotion.
• Because vMotion is broken, you cannot use CPU affinities in a cluster where vSphere DRS isn't set to manual operation.
• The hypervisor is unable to load-balance the VM across all the processing cores in the server. This prevents the hypervisor's scheduling engine from making the most efficient use of the host's resources.
Remember:- We use CPU Reservations, Limits, and Shares to control CPU clock cycle allocation (core speed).
What is the difference between a Memory Reservation and a CPU Reservation-
A CPU Reservation is very different from a Memory Reservation when it comes to "sharing" reserved CPU cycles. Reserved memory, once allocated to the VM, is never reclaimed, paged out to disk, or shared in any way. The same is not true of CPU Reservations.
An ESXi host has two idle VMs running. The shares are set at the defaults for the running VMs. Will the Shares values have any effect in this scenario-
No. There's no competition between VMs for CPU time because both are idle. Shares come into play only in times of resource contention.
An ESXi host with dual single-core 3 GHz CPUs has two equally busy VMs running (both requesting maximum CPU capacity). The shares are set at the defaults for the running VMs. Will the Shares values have any effect in this scenario-
No. Again, there's no competition between VMs for CPU time, this time because each VM is serviced by a different core in the host.
Remember:- CPU affinity is not available in fully automated DRS-enabled clusters.
If you are using a vSphere Distributed Resource Scheduler-enabled cluster configured in fully automated mode, CPU affinity cannot be set for VMs in that cluster. You must configure the cluster for manual or partially automated mode in order to use CPU affinity.
Describe CPU Reservation, Limit and Share-
• Reservations set on CPU cycles provide guaranteed processing power for VMs. Unlike memory, reserved CPU cycles can and will be used by ESXi to service other requests when needed. As with memory, the ESXi host must have enough real, physical CPU capacity to satisfy a reservation in order to power on a VM.
• Limits on CPU usage simply prevent a VM from gaining access to additional CPU cycles even if CPU cycles are available to use. Even if the host has plenty of CPU processing power available, a VM with a CPU limit will not be permitted to use more CPU cycles than specified in the limit. Depending on the guest OS and the applications, this might or might not have an adverse effect on performance.
• Shares are used to determine CPU clock cycle allocation when the ESXi host is experiencing CPU contention. CPU shares grant CPU access on a percentage basis calculated from the number of shares granted out of the total number of shares assigned (see the sketch after this list).
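A minimal Python sketch of the shares math under contention; the VM names and share values are invented for illustration:

```python
# Sketch of share-based CPU allocation under contention: each demanding VM
# gets capacity in proportion to its share count out of the total shares.
def allocate(capacity_mhz, shares_by_vm):
    total = sum(shares_by_vm.values())
    return {vm: capacity_mhz * s / total for vm, s in shares_by_vm.items()}

# Two 3 GHz cores = 6000 MHz of contended capacity.
print(allocate(6000, {'prod': 2000, 'test': 1000, 'dev': 1000}))
# {'prod': 3000.0, 'test': 1500.0, 'dev': 1500.0}
```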
What is a Resource Pool? Why is it required?
A resource pool is basically a special type of container object, much like a folder, mainly used to group VMs with similar resource allocation needs to avoid administrative overhead. A resource pool uses reservations, limits, and shares to control and modify resource allocation behavior, but only for memory and CPU.
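For illustration, a hedged pyVmomi (Python) sketch that creates such a pool; it assumes `cluster` is a vim.ClusterComputeResource you already looked up through a connected session, and the pool name and numbers are hypothetical:

```python
# Hedged pyVmomi sketch: create a child resource pool for test/dev VMs.
from pyVmomi import vim

def alloc(reservation, limit, level):
    info = vim.ResourceAllocationInfo()
    info.reservation = reservation            # MHz for CPU, MB for memory
    info.limit = limit                        # -1 means unlimited
    info.expandableReservation = True         # may borrow from the parent
    info.shares = vim.SharesInfo(level=level, shares=0)
    return info

spec = vim.ResourceConfigSpec()
spec.cpuAllocation = alloc(1000, 4000, 'low')     # low CPU priority, capped
spec.memoryAllocation = alloc(2048, 8192, 'low')
pool = cluster.resourcePool.CreateResourcePool(name='TestDev', spec=spec)
```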
What is an Expandable Reservation in a resource pool-
A resource pool provides resources to its child objects. A child object can be either a virtual machine or another resource pool. This is what is called the parent-child relationship.
But what happens if the child objects in the resource pool are configured with reservations that exceed the reservation set on the parent resource pool- In that case, the parent resource pool needs to request protected resources from its own parent resource pool. This can only be done if Expandable Reservation is enabled.
Please note that the resource pool requests protected (reserved) resources from its parent resource pool; it will not accept resources that are not protected by a reservation.
You want to understand a resource pool's resource allocation; where can you see the allocation of resources to objects within the vCenter Server hierarchy-
The cluster's "Resource Allocation" tab shows the allocation of resources to objects within the vCenter Server hierarchy.
Remember:- Shares apply only during actual resource contention-
Remember that share allocations come into play only when VMs are fighting one another for a resource, in other words, when an ESXi host is actually unable to satisfy all the requests for a particular resource. If an ESXi host is running only eight VMs on top of two quad-core processors, there won't be contention to manage (assuming these VMs each have only a single vCPU), and Shares values won't apply.
What is a processor core? A thread? What is hyperthreading? What are logical CPUs and virtual CPUs-
Processor: responsible for all processing operations. Multiple processors in a server are counted by socket.
Core: one processing unit inside your physical processor.
Hyper-Threading:- Normally a processor core can handle one thread, or one operation, at a time (time meaning a processor time slot). But with Hyper-Threading enabled, a processor core can handle two threads at the same time.
Logical processor: the number of threads the processor cores in a machine can handle is the number of logical processors. So if you want to know how many logical processors you have, just count the total number of threads the processors can handle.
How many logical processors do you have:-
Core count = processor (socket) count X cores per processor
Logical processor count = core count X threads per core
so: number of sockets X cores per processor X threads per core = logical processor count
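The same formula in Python:

```python
# The logical processor count formula above, directly in code.
def logical_processors(sockets: int, cores_per_socket: int,
                       threads_per_core: int) -> int:
    return sockets * cores_per_socket * threads_per_core

# Two sockets x 8 cores x 2 threads (hyperthreading) = 32 logical processors.
print(logical_processors(2, 8, 2))   # 32
```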
Virtual processor: when you create a virtual machine, you assign a processor to it. Like vRAM, virtual disks, and virtual network interfaces, we can assign a virtual processor (vCPU) to a virtual machine. A virtual processor is really nothing but a slice of the physical processor's time slots that is given to the virtual machine.
What is SMP? Symmetric Multiprocessing: SMP is the processing of a program by multiple processors that share a common operating system and memory.
What are the maximum virtual CPU limitations of VMware?
The maximum number of virtual CPUs in a VM depends on the following:-
(a) the number of logical CPUs on the host,
(b) the ESXi host license,
(c) the type of guest operating system installed on the VM.
What is a Network Resource Pool?
A "network resource pool" is a resource pool that controls network utilization. A network resource pool can control outgoing network traffic, or outgoing network bandwidth, with the help of shares and limits. This feature is referred to as vSphere Network I/O Control (NetIOC).
Network resource pool or NetIOC disadvantages:-
(X) controls outgoing traffic only
(Y) works only on a Distributed Switch
What is a System Network Resource Pool? What is a Custom Resource Pool-
When you enable vSphere NetIOC, vSphere activates six predefined system network resource pools:
• Fault Tolerance (FT) Traffic
• Virtual Machine Traffic
• vMotion Traffic
• Management Traffic
• iSCSI Traffic
• NFS Traffic
A custom resource pool is a user-defined network resource pool used to control resource utilization for traffic you define.
Remember:- You can't map port groups to system-defined resource pools.
Port groups can only be mapped to user-defined network resource pools, not system network resource pools.
How do you enable NetIOC?
First, you must enable NetIOC on that
particular vDS. Second, you must
create and configure custom network resource pools as necessary.
What are the three basic settings each network resource pool consists of?
'Physical Adapter Shares'
'Host Limit'
'QoS Priority Tag'
• Physical Adapter Shares- priority for access to the physical network adapters when there is network contention (modeled in the sketch after this list).
• Host Limit- an upper limit on the amount of network traffic a network resource pool is allowed to consume (in Mbps).
• QoS Priority Tag- the QoS (Quality of Service) priority tag is an 802.1p tag that is applied to all outgoing packets. Appropriately configured upstream network switches can then enforce the QoS beyond the ESXi host.
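An illustrative Python model of the Shares and Host Limit settings under contention (simplified: real NetIOC also redistributes bandwidth that a capped pool cannot use; the pool names and numbers are invented):

```python
# Illustrative model of NetIOC under contention: shares split the physical
# adapter's bandwidth, and a host limit caps a pool regardless of its share.
def netioc_allocate(adapter_mbps, pools):
    """pools: {name: (shares, host_limit_mbps_or_None)} -> Mbps per pool."""
    total_shares = sum(shares for shares, _ in pools.values())
    result = {}
    for name, (shares, limit) in pools.items():
        fair_share = adapter_mbps * shares / total_shares
        result[name] = min(fair_share, limit) if limit is not None else fair_share
    return result

print(netioc_allocate(10000, {
    'vm_traffic': (100, None),    # no host limit
    'vmotion':    (50, 2000),     # host limit: 2000 Mbps
    'nfs':        (50, None),
}))
# {'vm_traffic': 5000.0, 'vmotion': 2000.0, 'nfs': 2500.0}
```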
What are the prerequisites of Storage I/O Control (SIOC)-
SIOC has a few requirements you must meet:
(i) all SIOC-enabled datastores should be under a single vCenter Server,
(j) no RDM support, no NFS support,
(k) datastores with multiple extents are not supported.
Remember:- check array auto-tiering compatibility with SIOC.
What is auto-tiering- the ability of the array to seamlessly and transparently migrate data between different storage tiers (SSD, FC, SAS, SATA).
How do you enable Storage I/O Control-
First, enable SIOC on one or more datastores.
Second, assign shares or limits to storage I/O resources on individual VMs.
Remember these points about SIOC-
SIOC is enabled on a per-datastore basis.
By default, SIOC is disabled for a datastore, meaning that you have to explicitly enable SIOC if you want to take advantage of its functionality.
While SIOC is disabled by default for individual datastores, it is enabled by default for Storage DRS-enabled datastore clusters that have I/O metrics enabled for Storage DRS.
How Storage I/O Control (SIOC) works-
SIOC uses disk latency as the threshold to enforce Shares values-
SIOC uses disk latency as the threshold to determine when it should activate and enforce Shares values for access to storage I/O resources. Specifically, when vSphere detects latency in excess of a specific threshold value (measured in milliseconds), SIOC is activated.
For controlling the use of storage I/O by VMs, SIOC uses shares and limits-
SIOC provides two mechanisms for controlling the use of storage I/O by VMs: shares and limits. The Shares value establishes a relative priority as a ratio of the total number of shares assigned, while the Limit value defines the upper ceiling on the number of I/O operations per second (IOPS) that a given VM may generate.
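Putting both mechanisms together, a simplified Python model (the threshold, IOPS figures, and VM names are invented; real SIOC coordinates across all hosts sharing the datastore):

```python
# Simplified model of SIOC: shares are enforced only once observed
# datastore latency crosses the threshold; a per-VM IOPS limit always
# caps that VM.
def sioc_allocate(latency_ms, threshold_ms, datastore_iops, vms):
    """vms: {name: (shares, iops_limit_or_None)} -> IOPS per VM."""
    if latency_ms <= threshold_ms:
        return 'below threshold: SIOC idle (limits still enforced)'
    total_shares = sum(shares for shares, _ in vms.values())
    result = {}
    for name, (shares, limit) in vms.items():
        entitled = datastore_iops * shares / total_shares
        result[name] = min(entitled, limit) if limit is not None else entitled
    return result

print(sioc_allocate(42, 30, 9000, {'db': (2000, None), 'web': (1000, 1500)}))
# {'db': 6000.0, 'web': 1500} -> web is capped by its own limit
```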
To
guarantee certain levels of performance, your IT director believes that all VMs
must be configured with at least 8 GB of RAM. However, you know that many of
your applications rarely use this much memory. What might be an acceptable
compromise to help ensure performance?
One way would be to
configure the VMs with 8 GB of RAM and specify a reservation of only 2 GB. VMware
ESXi will guarantee that every VM will get 2 GB of RAM, including preventing
additional VMs from being powered on if there isn’t enough RAM to guarantee 2
GB of RAM to that new VM. However, the RAM greater than 2 GB is not guaranteed
and, if it is not being used, will be reclaimed by the host for use elsewhere.
If plenty of memory is available to the host, the ESXi host will grant what is
requested; otherwise, it will arbitrate the allocation of that memory according
to the shares values of the VMs.
A
fellow VMware administrator is a bit concerned about the use of CPU
reservations. She is worried that using CPU reservations will “strand” CPU
resources, preventing those reserved but unused resources from being used by
other VMs. Are this administrator’s concerns well founded?
For CPU reservations- No. While it is true that the host must have enough unreserved CPU capacity to satisfy a CPU reservation when a VM is powered on, reserved CPU capacity is not "locked" to a VM the way reserved memory is. If a VM has reserved but unused CPU capacity, that capacity can and will be used by other VMs on the same host. The other administrator's concerns could be valid, however, for memory reservations.
Your
company runs both test/development workloads and production workloads on the
same hardware. How can you help ensure that test/development workloads do not
consume too many resources and impact the performance of production workloads?
Create a resource pool and place all the test/development VMs in that resource pool. Configure the resource pool to have a CPU limit and a lower CPU shares value. This ensures that the test/development VMs will never consume more CPU time than specified in the limit and that, in times of CPU contention, the test/development environment will have a lower priority on the CPU than production workloads.
Name the limitations of Network I/O Control-
(i) NIOC works only with vSphere Distributed Switches,
(ii) it can only control outbound network traffic,
(iii) it requires vCenter Server in order to operate,
(iv) system network resource pools cannot be assigned to user-created port groups.
What are the requirements for using Storage I/O Control-
• All datastores and ESXi hosts that will participate in Storage I/O Control must be managed by the same vCenter Server instance.
• Raw Device Mappings (RDMs) and NFS datastores are not supported for SIOC.
• Datastores must have only a single extent; datastores with multiple extents are not supported.