VMFS:-
vSphere Virtual Machine File System (VMFS) is similar to NTFS for Windows Server and EXT3 for Linux. Like those file systems, it is native: it's included with vSphere and operates on top of block storage objects. If you're leveraging any form of block storage in vSphere, then you're using VMFS. VMFS creates a shared storage pool that is used by one or more VMs.
How VMFS differs from NTFS and EXT3:-
Clustered file system-
Simple and transparent distributed locking mechanism-
Steady-state I/O with high throughput at low CPU overhead-
Locking is handled using metadata in a hidden section of the file system-
VMFS metadata:- The metadata portion of
the file system contains critical information in the form of on-disk lock
structures (files), such as which ESXi host is the current owner of a given VM
in the clustered file system, ensuring that there is no contention or
corruption of the VM.
When do VMFS metadata updates occur?
The creation of a file in the VMFS datastore- (powering on a VM, creating/deleting a VM, or taking a snapshot, for example; a snapshot sketch follows this list)
Changes to the structure of the VMFS file system itself- (extending the file system or adding a file system extent)
Actions that change which ESXi host owns a VM- (vMotion and vSphere HA, which change ownership of the VM's files)
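One of the operations above, taking a snapshot, can be scripted against vCenter. Below is a minimal sketch using the pyVmomi SDK; the vCenter hostname, credentials, and the VM name "demo-vm" are placeholders, not values from this article.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)  # placeholder credentials
content = si.RetrieveContent()

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "demo-vm")

# Creating the snapshot adds files to the datastore, which briefly takes a
# metadata lock on the VMFS volume that holds the VM.
WaitForTask(vm.CreateSnapshot_Task(name="pre-change", description="example",
                                   memory=False, quiesce=False))
Disconnect(si)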
VMFS extents:-
VMFS can reside on one or more partitions, called extents, in vSphere-
VMFS metadata is always stored in the first extent-
A single VMFS-3 datastore supports up to 32 extents, for a maximum size of 64 TB-
Except for the first extent, where the VMFS metadata resides, removing a LUN that backs a VMFS-3 extent will not make the spanned VMFS datastore unavailable-
Removing an extent affects only the portion of the datastore supported by that extent-
VMFS-3 and VMs are relatively resilient to a hard shutdown or crash-
VMFS-3 allocates the initial blocks for a new file (VM) randomly in the file system (or extents), and subsequent allocations for that file are sequential-
A VMFS datastore that spans multiple extents across multiple LUNs spreads I/O across those LUNs, reducing the load on any single LUN queue-
VMFS block sizes?
** The VMFS block size determines the maximum file size on the file system-
** Once set, the VMFS-3 block size cannot be changed-
** Only hosts running ESXi 5.0 or later support VMFS-5-
Advantages of using VMFS-5 over VMFS-3:-
VMFS-5 offers a number of advantages over VMFS-3 (a sketch for checking a datastore's VMFS version and block size follows this list):-
Maximum size of 64 TB using only a single extent- (datastores with multiple extents are still limited to 64 TB)-
A single block size of 1 MB, which supports the creation of files up to 2 TB-
More efficient sub-block allocation, only 8 KB compared to VMFS-3's 64 KB-
Not limited to 2 TB for the creation of physical-mode RDMs-
Non-disruptive, in-place, online upgrade from VMFS-3 datastores to VMFS-5-
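Continuing from the earlier connection sketch, here is a minimal way to read each VMFS datastore's version and block size through pyVmomi; the property names follow the vSphere API's VmfsDatastoreInfo object, and the output format is purely illustrative.

from pyVmomi import vim

ds_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    # Only VMFS datastores carry a 'vmfs' info block; NFS datastores do not.
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        vmfs = ds.info.vmfs
        print(f"{ds.name}: VMFS {vmfs.majorVersion}, block size {vmfs.blockSizeMb} MB, UUID {vmfs.uuid}")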
Multipathing:-
Multipathing is the term used to describe how a host, such as an ESXi host, manages storage devices that have multiple ways (or paths) to access them. Multipathing is extremely common in Fibre Channel and FCoE environments and is also found in iSCSI environments. Multipathing for NFS is also available but is handled quite differently than for block storage.
Pluggable Storage Architecture (PSA):-
The element of the vSphere storage stack that deals with multipathing is called the Pluggable Storage Architecture (PSA)-
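As a quick illustration of the PSA's view of paths, the following pyVmomi sketch (continuing from the connection example above) lists each LUN on one host together with how many paths the host sees to it; picking the first host in the inventory is purely for illustration.

from pyVmomi import vim

host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]  # first ESXi host in the inventory, for illustration only

multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
for lun in multipath.lun:
    print(f"{lun.id}: {len(lun.path)} path(s)")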
What are LUN queues?:-
Queues exist on the server (in this case the ESXi host), generally at both the HBA and LUN levels. Queues also exist on the storage array. Block-centric storage arrays generally have these queues at the target ports, array-wide, and array LUN levels, and finally at the spindles themselves.
How to view the LUN queue:-
To determine how many outstanding items are in the queue, use resxtop: press 'u' to reach the disk device (storage) screen and check the QUED column-
vSphere Storage APIs:-
vSphere Storage APIs for Array Integration (VAAI)
vSphere Storage APIs for Storage Awareness (VASA)
vSphere Storage APIs for Site Recovery (VASR)
vSphere Storage APIs for Multipathing
vSphere Storage APIs for Data Protection (VADP)
VAAI:- A means of offloading
storage-related operations from the ESXi hosts to the storage array.
VASA:- Enables more advanced out-of-band communication between storage arrays and the virtualization layer.
vSphere Storage APIs for Site Recovery (VASR):-
SRM (Site Recovery Manager) is an extension to VMware vCenter Server that delivers a disaster recovery solution. SRM can discover and manage replicated datastores and automate the migration of inventory between vCenter Server instances. VMware vCenter Site Recovery Manager (SRM) provides an API so that third-party software can control protection groups and test or initiate recovery plans.
vSphere Storage APIs for Multipathing:-
Enables a third-party storage vendor's multipathing software to plug into the PSA.
vSphere Storage APIs for Data Protection (VADP):- Enables third-party backup software
to perform centralized virtual machine backups without the disruption and
overhead of running backup tasks from inside each virtual machine.
Profile-driven storage:-
Working in conjunction with VASA, which facilitates communication between storage arrays and the virtualization layer,
the principle behind profile-driven storage is simple: allow vSphere
administrators to build VM storage profiles that describe the specific storage attributes that a VM
requires. Then, based on that VM storage profile, allow vSphere administrators
to place VMs on datastores that are compliant
with that storage profile, thus ensuring that the needs of the VM are properly
serviced by the underlying storage.
Keep in mind that the bulk of the power of profile-driven
storage comes from the interaction with VASA (Virtualization layer to array
communication) to automatically gather storage capabilities from the underlying
array. However, you might find it necessary or useful to define one or more
additional storage capabilities that you can use in building your VM storage
profiles.
Please remember:- Keep in mind that you can assign only one user-defined storage capability per datastore. The VASA provider can likewise assign only a single system-provided storage capability to each datastore. This means that a datastore may have up to two capabilities assigned: one system-provided capability and one user-defined capability.
Upgrade from VMFS-3 to VMFS-5:-
Yes, it is possible to non-disruptively upgrade from VMFS-3 to VMFS-5.
VMFS-5 to VMFS-3 downgrade:- No, it is not possible to downgrade from VMFS-5 to VMFS-3.
Note 1:- Upgraded VMFS-5 partitions will retain the partition characteristics of the original VMFS-3 datastore, including the file block size, sub-block size of 64 KB, etc.
Note 2:- Increasing the size of an upgraded VMFS datastore beyond 2 TB changes the partition type from MBR to GPT. However, all other features/characteristics remain the same.
Disk signature:-
Every VMFS datastore has a Universally Unique Identifier (UUID) embedded in the file system. This unique UUID differentiates one datastore from another. When you clone or replicate a VMFS datastore, the copy will be a byte-for-byte copy of the original, right down to the UUID. If you attempt to mount the LUN that holds the copy of the VMFS datastore, vSphere will see it as a duplicate and will require that you do one of two things (a resignaturing sketch follows the list):
Unmount the
original and mount the copy with the same UUID.
Keep the original
mounted and write a new disk signature to the copy.
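The second option, writing a new signature, can also be driven programmatically. This is a heavily hedged sketch: it assumes pyVmomi exposes the HostStorageSystem methods QueryUnresolvedVmfsVolume and ResignatureUnresolvedVmfsVolume_Task as described in the vSphere API reference, the extent device path is hypothetical, and it reuses the host handle from the PSA example above.

from pyVmomi import vim
from pyVim.task import WaitForTask

storage_sys = host.configManager.storageSystem

# List VMFS copies ("unresolved volumes") that the host can see.
for vol in storage_sys.QueryUnresolvedVmfsVolume():
    print(f"Copy of {vol.vmfsLabel} (UUID {vol.vmfsUuid}), resolvable: {vol.resolveStatus.resolvable}")

# Write a new disk signature to one copy (option 2 above); the device path is hypothetical.
spec = vim.host.UnresolvedVmfsResignatureSpec()
spec.extentDevicePath = ["/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1"]
# WaitForTask(storage_sys.ResignatureUnresolvedVmfsVolume_Task(resolutionSpec=spec))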
Raw Device Mapping (RDM):-
Normally, VMs use shared storage pool mechanisms such as VMFS or NFS datastores. But there are certain use cases where a storage device must be presented directly to the guest operating system inside a VM. vSphere provides this functionality via a Raw Device Mapping (RDM). RDMs are presented to your ESXi hosts and then, via vCenter Server, directly to a VM. Subsequent data I/O bypasses VMFS and the volume manager completely. I/O management is handled via a mapping file that is stored on a VMFS volume.
You can configure RDMs in two different modes:
pRDM and vRDM.
Physical
Compatibility Mode (pRDM):- In this mode, all I/O passes directly through to the underlying LUN
device, and the mapping file is used solely for locking and vSphere management
tasks. You might also see this referred to as a pass-through disk.
Virtual Mode
(vRDM):- In this mode, all I/O travels
through the VMFS layer.
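In pyVmomi terms, the difference between the two modes comes down to the compatibilityMode field on the disk's raw device mapping backing. The following is a minimal sketch, not a complete provisioning script; the LUN device path and the controller key are assumptions.

from pyVmomi import vim

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"  # hypothetical LUN path
backing.compatibilityMode = "physicalMode"  # pRDM; use "virtualMode" for a vRDM
backing.fileName = ""  # the mapping file is created on a VMFS datastore

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000  # key of an existing SCSI controller (assumption)
disk.unitNumber = 1
disk.capacityInKB = 0      # the size comes from the raw LUN itself

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk

# vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])) would attach the RDM to 'vm'.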
Advantages and disadvantages of pRDM and vRDM?:-
1) pRDMs do not support vSphere snapshots:
2) pRDMs cannot be converted to virtual disks through Storage vMotion (SvMotion):
3) pRDMs can easily be moved to a physical host:
RDM use cases:-
A) Windows cluster services:
B) When using a storage array's application-integrated snapshot tools:
C) When virtual configurations are not supported for a given piece of software (Oracle, for example):
Difference between a VMFS datastore and an NFS datastore:-
1. The NFS file system is not managed or controlled by the ESXi host:-
2. All file system availability and performance tuning issues (HA, load balancing) are handled by the vSphere networking stack, not the storage stack:-
Remember:- NFS datastores support only thin provisioning unless the NFS server supports the 'VAAIv2 NAS extensions' and vSphere has been configured with the vendor-supplied plug-in.
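To make the contrast concrete, mounting an NFS export as a datastore is a per-host operation in which the host simply mounts the remote file system rather than formatting it. A minimal pyVmomi sketch, reusing the host handle from the earlier examples; the NFS server name and export path are hypothetical.

from pyVmomi import vim

nas_spec = vim.host.NasVolume.Specification()
nas_spec.remoteHost = "nfs01.example.com"  # hypothetical NFS server
nas_spec.remotePath = "/export/vmware"     # hypothetical export
nas_spec.localPath = "nfs_datastore01"     # datastore name as it will appear in vSphere
nas_spec.accessMode = "readWrite"

host.configManager.datastoreSystem.CreateNasDatastore(spec=nas_spec)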
Virtual SCSI adapters available in VMware:-
1. BusLogic Parallel- well supported by older guest OSes, but it does not perform well; various Linux flavors use it.
2. LSI Logic Parallel- well supported by newer guest OSes; the default for Windows Server 2003.
3. LSI Logic SAS- guest OSes are phasing out support for parallel SCSI in favor of SAS; this is the default SCSI adapter suggested for VMs running Windows Server 2008 and 2008 R2.
4. VMware Paravirtual- paravirtualized devices (and their corresponding drivers) are specifically optimized to communicate more directly with the underlying Virtual Machine Monitor (VMM); they deliver higher throughput and lower latency, and they usually have a lower CPU impact for I/O operations, but at the cost of guest OS compatibility.
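As an example of selecting one of these adapters programmatically, the sketch below adds a VMware Paravirtual SCSI controller to an existing VM via pyVmomi; the bus number is an assumption and the reconfigure call is shown commented out.

from pyVmomi import vim

pvscsi = vim.vm.device.ParaVirtualSCSIController()
pvscsi.busNumber = 1  # assumes SCSI bus 0 is already used by the VM's default controller
pvscsi.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.device = pvscsi

# vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])) would add the controller to 'vm'.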
VM storage profile:-
VM storage profiles are a key component of
profile-driven storage. By leveraging system-provided storage capabilities
supplied by a VASA provider (which comes from the storage vendor), as well
as user-defined storage capabilities, you can build VM storage profiles that
help shape and control how VMs are allocated to storage.
Keep in mind
that a datastore may have, at most, two capabilities assigned: one
system-provided capability and one user-defined capability.
Other than Raw Device Mapping, what is another way to present storage devices directly to a VM?:-
Other than RDM, you also have the option of using an in-guest iSCSI initiator to bypass the hypervisor and access storage directly. Keep in mind the following points about in-guest iSCSI initiators:-
The storage is presented separately from your VMFS or NFS datastores:-
More physical NICs are needed in your server, to define redundant connections and multipathing separately, since you are bypassing the hypervisor level:-
No Storage vMotion:-
No vSphere snapshots:-
iSCSI:-
Ideal for customers who are just getting started with vSphere and have no existing Fibre Channel SAN infrastructure.
Fibre Channel:-
Ideal for customers who have VMs with high-bandwidth (200 MBps+) requirements (individually, not in aggregate).
NFS:-
Ideal for customers with many VMs that have a low bandwidth requirement individually (and in aggregate), needing less than a single link's worth of bandwidth.
vSphere
storage models:-
vSphere has three fundamental storage
presentation models:
(1) VMFS on block storage
(2) RDM and in-guest iSCSI initiator
(3) NFS.
The most flexible configurations use all three,
predominantly via a shared-container model and selective use of RDMs.
Basic storage performance parameters:-
MBps:- Data transfer rate in megabytes per second.
IOps / throughput:- Maximum number of reads and writes (input/output operations) per second.
Disk latency:- The time it takes for the selected sector to be positioned under the read/write head.
How
to quickly estimate storage configuration:-
Bandwidth:- For NFS it is the NIC link
speed.
For FC it is the HBA speed.
For iSCSI it is the NIC speed.
IOps:-
In all cases, throughput (IOps) is primarily a function of the number of spindles (HDDs), assuming no cache benefit and no RAID loss.
A quick rule of thumb:-
Total IOps = per-spindle IOps × number of spindles.
Latency:- Latency is generally measured in milliseconds, though it can reach tens of milliseconds in situations where the storage array is overtaxed.
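A worked example of the rule of thumb above, in a short Python snippet; the per-spindle figures are common ballpark values for illustration, not numbers taken from this article.

# Aggregate IOps scales with spindle count, assuming no cache benefit and no RAID loss.
per_spindle_iops = {"15k FC/SAS": 180, "10k FC/SAS": 120, "7.2k SATA": 80}  # illustrative values

def estimate_total_iops(drive_type: str, spindles: int) -> int:
    """Total IOps ~= per-spindle IOps x number of spindles."""
    return per_spindle_iops[drive_type] * spindles

print(estimate_total_iops("10k FC/SAS", 16))  # ~1920 IOps from sixteen 10k drives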