Saturday, May 14, 2016

Introducing VMware 5.x


vRAM- The portion of physical memory that is assigned virtually to a VM.

vRAM Entitlement- The amount of licensed vRAM that can be assigned to your VMs.


vRAM Entitlement Pooling- vRAM entitlements that aren't being used by one ESXi host can be used on another host, as long as the total vRAM assigned across the entire pool stays below the licensed limit.
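
As a rough illustration of how pooling works (the host counts, per-CPU entitlement, and assigned vRAM figures below are made-up numbers, not values from any particular license), the check is a simple sum across the pool:

# Illustrative sketch of vRAM entitlement pooling (hypothetical numbers).
# Each licensed CPU contributes its vRAM entitlement to one shared pool;
# the check is against the pool total, not against any single host.

vram_per_cpu_gb = 32                      # assumed entitlement per licensed CPU
hosts = {
    "esxi01": {"licensed_cpus": 2, "assigned_vram_gb": 96},
    "esxi02": {"licensed_cpus": 2, "assigned_vram_gb": 24},
}

pooled_entitlement = sum(h["licensed_cpus"] for h in hosts.values()) * vram_per_cpu_gb
total_assigned = sum(h["assigned_vram_gb"] for h in hosts.values())

# esxi01 alone exceeds its own 64 GB share, but the pool (128 GB) still covers the
# 120 GB assigned across both hosts, so the configuration stays within the license.
print(f"Pooled entitlement: {pooled_entitlement} GB, assigned: {total_assigned} GB")
print("Within licensed limit" if total_assigned <= pooled_entitlement else "Over the licensed limit")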


Which products are licensed features within the VMware vSphere suite?
Licensed features in the VMware vSphere suite are Virtual SMP, vMotion, Storage vMotion, vSphere DRS, Storage DRS, vSphere HA, and vSphere FT.


Which two features of VMware ESXi and VMware vCenter Server together aim to reduce or eliminate downtime due to unplanned hardware failures?

vSphere HA and vSphere FT are designed to reduce (vSphere HA) and eliminate (vSphere FT) the downtime resulting from unplanned hardware failures.

Name three features that are supported only when using vCenter Server along with ESXi.

All of the following features are available only with vCenter Server: vSphere vMotion, Storage vMotion, vSphere DRS, Storage DRS, vSphere HA, vSphere FT, SIOC, and NetIOC.

Name two features that are supported without vCenter Server but with a licensed installation of ESXi.

Features that are supported by VMware ESXi without vCenter Server include core virtualization features like virtualized networking, virtualized storage, vSphere vSMP, and resource allocation controls.

How does vSphere differ from other virtualization products?

VMware vSphere's hypervisor, ESXi, is a type 1 bare-metal hypervisor that handles I/O directly within the hypervisor itself. This means that a host operating system, such as Windows or Linux, is not required for ESXi to function. Although other virtualization solutions are also billed as "type 1 bare-metal hypervisors," most other type 1 hypervisors on the market today require the presence of a "parent partition" or "dom0," through which all VM I/O must travel.


What are the editions available for VMware vSphere?

VMware offers three editions of vSphere:
vSphere Standard Edition
vSphere Enterprise Edition
vSphere Enterprise Plus Edition

What are the editions available for VMware vCenter Server?

vCenter Foundation:-
  Supports the management of up to three (3) vSphere hosts.

vCenter Essentials:-
  Bundled with the vSphere Essentials kits.

vCenter Standard:-
  Includes all functionality and has no preset limit on the number of vSphere hosts it can manage, although normal sizing limits do apply. vCenter Orchestrator is included only in the Standard edition of vCenter Server.

VMware Architecture



VMkernel:- The VMkernel is the core operating system that provides the means for running different processes on the system, including management applications and agents as well as virtual machines. It also controls all the hardware devices on the server and manages resources for the applications.

What processes run on the VMkernel?
DCUI, VMM, CIM, and various agents such as the HA agent.

DCUI (Direct Console User Interface)- A BIOS-like, menu-driven ESXi host configuration and management interface that is accessible through the console of the ESXi host. It is mainly used for initial host configuration.

The virtual machine monitor (VMM)- The process that provides the execution environment for a virtual machine, together with a helper process known as VMX.

The Common Information Model (CIM) system:- CIM is the interface that enables hardware-level management and monitoring from remote applications via a set of standard APIs.

What are user world processes?
The term "user world" refers to a set of processes running on top of the VMkernel operating system.
These processes are (i) hostd, (ii) vpxa, (iii) VMware HA agents, (iv) the syslog daemon, and (v) NTP and SNMP.


HOSTD:- The process that authenticates users and keeps track of which users and groups have which privileges. It also allows you to create and manage local users. The hostd process provides a programmatic interface to the VMkernel and is used by direct VI Client connections as well as the VI API.


VPXA:- The vpxa process is the agent used to connect to the vCenter Server. It runs as a special system user called vpxuser and acts as the intermediary between the hostd agent and the vCenter Server.


What is the difference between VPXA, VPXD, HOSTD?
hostd is a daemon (service) that runs on every ESXi host and performs major tasks such as VM power operations and local user management. When an ESXi host joins a vCenter Server, the vpxa agent is activated and talks to the vpxd service, which runs on the vCenter Server. In short: vpxd talks to vpxa, and vpxa talks to hostd; this is how vpxd (the vCenter daemon) communicates with the hostd daemon via the vpxa agent.


What initial tasks can be done through the DCUI?
• Set administrative password
• Configure networking
• Perform simple network tests
• View logs
• Restart agents
• Restore defaults

Describe the VMware ESXi system image design.


The ESXi system has two independent banks of memory, each of which stores a full system image, as a fail-safe for applying updates. When you upgrade the system, the new version is loaded into the inactive bank of memory, and the system is set to use the updated bank when it reboots. If any problem is detected during the boot process, the system automatically boots from the previously used bank of memory. You can also intervene manually at boot time to choose which image to use for that boot, so you can back out of an update if necessary.
At any given time, there are typically two versions of VI Client and two versions of VMware Tools in the store partition.

Friday, May 13, 2016

VMware Distributed Switch


              vSphere Distributed Switch (vDS)

vSphere Distributed Switch (vDS):- A vDS spans multiple ESXi hosts in a cluster instead of each host having its own set of vSwitches. It provides a centralized network control mechanism across all the ESXi hosts, reduces network complexity in clustered ESXi environments, and simplifies the addition of new hosts to an ESXi cluster with guaranteed consistency of network configuration across the cluster.

Features:-
(i) Inbound traffic shaping
(ii) VM's network port block
(iii) Private VLANs
(iv) Load-based teaming
(v) Datacenter-level network management
(vi) Network vMotion
(vii) vSphere switch APIs
(viii) Per-port policy settings
(ix) Individual port's state monitoring
(x) Link Layer Discovery Protocol (LLDP)
(xi) User-defined traffic paths for QoS
(xii) Monitoring NetFlow
(xiii) Port mirroring

Inbound traffic shaping:- A port group setting that can throttle or control the aggregate bandwidth inbound to the switch. A vSS has outbound traffic shaping only, while a vDS has both.

VM's network port block:- Specific switch ports can be blocked from use by a specified VM.

Private VLANs:- In essence, a PVLAN is a VLAN within a VLAN. PVLANs in your vSphere environment can be kept from seeing each other.

Load-based teaming:- This teaming system evaluates the current load on each link and makes frame-forwarding decisions to balance the load across the team.

Datacenter-level network management:- A vDS is managed from the vCenter Server as a single switch. This provides a centralized network control mechanism and guarantees consistency of network configuration across all the connected ESXi hosts.




Network vMotion:- Because a port group on a vDS is connected to multiple hosts, a VM can be vMotioned from one host to another without changing its port and port group settings, such as security, traffic shaping, and NIC teaming.

vSphere switch APIs:- Through these APIs, third-party distributed switches such as the Cisco Nexus 1000V can be used as a virtual appliance (VA).

Per-port policy settings:- Most of the configuration on a vDS is at the port group level, but it can be overridden at the individual port level giving tremendous flexibility.

Individual port's state monitoring:- Each port on a vDS can be managed and monitored independently of all other ports, which helps quickly identify port issues.

Link Layer Discovery Protocol:- Similar to Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP) enables a vDS to discover other devices, such as switches and routers, that are directly connected (linked) to it.

User-defined traffic paths for QoS:- You can set up a form of quality of service (QoS) by defining traffic paths by type of VMware traffic. In earlier vDS versions you could classify traffic as vMotion, management, storage, and so on; now you can also define your own categories.

Monitoring NetFlow:- This enables you to easily monitor virtual network flows with the same tools that you use to monitor traffic flows in the physical network. Your vDS can forward virtual NetFlow information to a monitoring machine in your external network.

Port mirroring:- Port mirroring sends a copy of the packets seen on a port to a monitoring station, so that traffic flows can be monitored by tools such as an IPS/IDS (intrusion prevention/detection system) without skewing the data.


dvUplink groups:- Each host keeps its own network configuration in a hidden switch that is created when you add the host to a vDS. dvUplink groups connect those hidden switches in your hosts to the vDS and, from there, to the physical network.




PVLAN or Private VLAN:- In essence, a PVLAN is a VLAN within a VLAN. The PVLANs in your vSphere network can be kept from seeing each other. In other words, by using PVLANs you can isolate hosts from seeing each other while keeping them on the same IP subnet.

PVLANs are configured in pairs: the primary VLAN and any secondary VLANs. The primary VLAN is considered the downstream VLAN; that is, traffic to the host travels along the primary VLAN. The secondary VLAN is considered the upstream VLAN; that is, traffic from the host travels along the secondary VLAN. There are three types of PVLANs:



1. Community: This is a private VLAN used to create a separate network shared by more than one VM in the primary VLAN. VMs on community VLANs can communicate only with other VMs in the same community or with VMs on a promiscuous VLAN.

2. Isolated: This is a private VLAN used to create a separate network for one VM in your primary VLAN. It can be used to isolate a highly sensitive VM. If a VM is in an isolated VLAN, it will not communicate with any other VMs in other isolated VLANs or in other community VLANs. It can communicate only with promiscuous VLANs.

3. Promiscuous: VMs on this VLAN are reachable and can be reached by any VM in the same primary VLAN. In PVLAN parlance, a promiscuous port is allowed to send and receive Layer 2 frames to any other port in the VLAN. This type of port is typically reserved for the default gateway for an IP subnet — for example, a Layer 3 router.
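
The communication rules above can be summarized in a small sketch (purely illustrative Python, not a VMware API; it also simplifies by assuming two "community" VMs sit in the same community VLAN):

# Illustrative sketch of PVLAN reachability rules (not a VMware API).
def pvlan_can_communicate(type_a, type_b):
    """Return True if VMs on the two secondary PVLAN types can exchange frames."""
    # A promiscuous port (typically the default gateway) reaches everything.
    if "promiscuous" in (type_a, type_b):
        return True
    # Community VMs talk to each other within the same community
    # (this sketch assumes both ports are in the same community VLAN).
    if type_a == "community" and type_b == "community":
        return True
    # Isolated VMs talk only to promiscuous ports.
    return False

print(pvlan_can_communicate("isolated", "promiscuous"))   # True
print(pvlan_can_communicate("isolated", "community"))     # False
print(pvlan_can_communicate("community", "community"))    # True (same community assumed)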




Cisco Discovery Protocol (CDP):- A Cisco protocol for exchanging information between network devices.

Link Layer Discovery Protocol (LLDP):- An industry-standardized form of CDP. Through LLDP, ESXi hosts participating in a dvSwitch can exchange discovery information with physical switches. Discovery information includes details on the physical NICs in use and the vSwitch involved.

vDS versions available: 4.0, 4.1, 5.0, 5.1, 5.5, 6.0

vSphere license required for a dvSwitch: Enterprise Plus

Can an ESXi host use a vSS and a vDS together? Yes, you can use a vSS and a vDS together. You can even leave your VMkernel ports on a standard switch while keeping all your VM port groups on a distributed switch.

Difference between vSS and vDS traffic shaping:
With vSphere Standard Switches, you could apply traffic-shaping policies only to egress (outbound) traffic but with a Distributed switch, you can apply traffic-shaping policies to both ingress (inbound) and egress traffic.

Difference between vSS and vDS load balancing:
vDS versions 4.1 and 5.0 support an additional load-balancing type, Route Based on Physical NIC Load. When this load-balancing policy is selected, ESXi checks the utilization of the uplinks every 30 seconds for congestion. In this case, congestion is defined as either transmit or receive traffic greater than 75 percent mean utilization over a 30-second period. If congestion is detected on an uplink, ESXi will dynamically reassign the VM to a different uplink.
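
As a sketch of that 75 percent rule (the utilization samples below are invented for illustration; the real check happens inside ESXi):

# Illustrative sketch of the "Route Based on Physical NIC Load" congestion test.
# ESXi samples uplink utilization over a 30-second window; mean transmit OR receive
# utilization above 75 percent marks the uplink as congested. Numbers are made up.

def is_congested(tx_util_samples, rx_util_samples, threshold=0.75):
    """Return True if mean transmit or receive utilization over the window exceeds the threshold."""
    mean_tx = sum(tx_util_samples) / len(tx_util_samples)
    mean_rx = sum(rx_util_samples) / len(rx_util_samples)
    return mean_tx > threshold or mean_rx > threshold

# Hypothetical per-second utilization samples for one uplink over 30 seconds.
tx = [0.80] * 20 + [0.70] * 10     # mean ~0.77 -> congested on transmit
rx = [0.30] * 30                   # mean 0.30  -> fine on receive

if is_congested(tx, rx):
    print("Uplink congested: ESXi would move one or more VMs to another uplink")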

vDS Total Ports and Available ports-

With vSphere Standard Switches, the VMkernel reserved eight ports (8) for its own use, creating a discrepancy between the total numbers of ports listed in different places.

For every host added to a Distributed Switch (vDS), four (4) ports are added by default to the "vDS Uplinks" port group and reserved for uplinks. So, a vDS with three hosts would have 140 total ports with 128 available, a vDS with four hosts would have 144 total ports with 128 available, and so forth.

vDS (Distributed Switch) total ports:
2 hosts = 128 + (4 x 2) = 136
3 hosts = 128 + (4 x 3) = 140
4 hosts = 128 + (4 x 4) = 144

vSS (Standard Switch):
Maximum ports per vSwitch: 4096
Maximum ports per host: 4096 - 8 = 4088
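
The arithmetic in that table can be expressed as a quick sketch (using the defaults stated above: 128 available ports plus 4 uplink-reserved ports per attached host):

# Quick sketch of the default vDS port math described above.
DEFAULT_AVAILABLE_PORTS = 128
UPLINK_PORTS_PER_HOST = 4

def vds_total_ports(num_hosts):
    """Total ports on a default vDS with the given number of attached hosts."""
    return DEFAULT_AVAILABLE_PORTS + UPLINK_PORTS_PER_HOST * num_hosts

for hosts in (2, 3, 4):
    print(f"{hosts} hosts: {vds_total_ports(hosts)} total ports, {DEFAULT_AVAILABLE_PORTS} available")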


Wednesday, May 11, 2016

Networking Standard Switch

Short definitions:-

vSphere Standard Switch: A software-based switch that resides in the VMkernel and provides traffic management for VMs. Users must manage vSwitches independently on each ESXi host.

vSphere Distributed Switch:- A software-based switch that resides in the VMkernel and provides traffic management for VMs and ESXi hosts across the entire cluster. Distributed vSwitches are shared by and managed across entire clusters of ESXi hosts.

Port Groups:- A vSwitch allows several different types of communication. Port groups differentiate between the types of traffic passing through a vSwitch, and they also operate as a boundary for communication types and security policy configuration.

VMkernel port group and VM port group:- A vSwitch allows several different types of communication, including communication to and from the VMkernel and between VMs.

VMkernel port group:- Communication to and from the VMkernel is supported by a VMkernel port group.

VM port group:- Communication between VMs is supported by a VM port group.

VMkernel Port:- A specialized virtual switch port type that is configured with an IP address to allow VMkernel traffic. A VMkernel port allows vMotion, iSCSI storage access, network attached storage (NAS) or Network File System (NFS) access, and vSphere Fault Tolerance (FT) logging. In a vSphere 5 environment, a VMkernel port also provides management connectivity for managing the ESXi host.

Service Console Port:- VMware ESX uses a traditional Linux-based Service Console port for management traffic.

Spanning Tree Protocol (STP):- In physical switches, Spanning Tree Protocol (STP) offers path redundancy and prevents loops in the network topology by placing redundant paths in a standby (blocked) state. Only when a path is no longer available does STP activate the standby path.

Uplinks:- vSwitches must be connected to the ESXi host’s physical NICs as uplinks in order to communicate with the rest of the network.

Management network:- In order for the ESXi host to be reachable across the network, a VMkernel port is configured automatically; this is called the management network.

VGT (Virtual Guest Tagging), or VLAN ID 4095:- Normally the VLAN ID ranges from 1 to 4094, but in the ESXi environment VLAN ID 4095 is also valid. Using this VLAN ID with ESXi causes the VLAN tagging information to be passed through the vSwitch all the way up to guest OSes that support and understand VLAN tags.


VMkernel Port:- VMkernel ports provide network access for the VMkernel's TCP/IP stack. VMkernel networking carries not only management traffic but also all other forms of traffic originating from the ESXi host, such as vMotion, iSCSI, NAS/NFS access, and vSphere FT logging.

NIC teaming:- The aggregation of physical network interface cards (NICs) to form a single logical communication channel. Different types of NIC teams provide varying levels of traffic load balancing and fault tolerance.

NIC teaming in vSphere:- The aggregation of physical network interface cards (NICs) to form a single logical communication channel. NIC teaming in ESXi hosts involves connecting multiple physical network adapters (uplinks) to a single vSwitch. Different types of NIC teams provide varying levels of traffic load balancing and fault tolerance.

***** REMEMBER:- NIC team load balancing is outbound and connection oriented:

1. The load-balancing feature of NIC teams on a vSwitch applies only to outbound traffic.

2. The load-balancing feature of NIC teams on a vSwitch does not equalize data flow through all available adapters; instead, it balances the number of source-to-destination connections across the team members.

vmxnet Adapter :-A virtualized network adapter operating inside a guest operating system (guest OS). The vmxnet adapter is a high-performance, 1 Gbps virtual network adapter that operates only if the VMware Tools have been installed. The vmxnet adapter is sometimes referred to as a paravirtualized driver. The vmxnet adapter is identified as Flexible in the VM properties.

Traffic Shaping:- If bandwidth contention on the uplinks (physical network cards) becomes a bottleneck hindering VM performance, it is possible to enable and configure traffic shaping for bandwidth control.

Traffic shaping involves the establishment of hard-coded limits for peak bandwidth, average bandwidth, and burst size to reduce a VM’s outbound bandwidth capability.
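
A rough way to think about those three settings is sketched below (the values are hypothetical and the burst model is deliberately simplified; in vSphere, average and peak bandwidth are set in Kbps and burst size in KB):

# Simplified sketch of outbound traffic-shaping limits (hypothetical values).
average_kbps = 100_000    # long-term rate the port is held to (~100 Mbps)
peak_kbps = 200_000       # short-term ceiling while a burst bonus is available (~200 Mbps)
burst_kb = 102_400        # amount of data that may be sent above the average rate (~100 MB)

# Rough upper bound on how long a port can run at peak before the burst is spent:
# the burst allowance drains at the rate by which peak exceeds average.
burst_kbits = burst_kb * 8
seconds_at_peak = burst_kbits / (peak_kbps - average_kbps)
print(f"Roughly {seconds_at_peak:.1f} s at peak before dropping back to the average rate")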

NIC teaming load-balancing policies:-

• vSwitch port-based load balancing
• Source MAC-based load balancing
• IP hash-based load balancing
• Explicit failover order

vSwitch port-based load balancing (default):-This policy assigns each virtual switch port to a specific uplink. Failover to another uplink occurs when one of the physical network adapters experiences failure.

Source MAC-based load balancing:- This policy works as the name suggests: it ties a virtual network adapter to a physical network adapter based on the source MAC address. Failover to another uplink occurs when one of the physical network adapters experiences failure.

IP hash-based load balancing:-This policy uses the source and destination IP addresses to calculate a hash. The hash determines the physical network adapter to use for communication. Different combinations of source and destination IP addresses will, quite naturally, produce different hashes. Based on the hash, then, this algorithm could allow a single VM to communicate over different physical network adapters when communicating with different destinations, assuming that the calculated hashes lead to the selection of a different physical NIC.
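
A minimal sketch of the idea (the hash below is illustrative only; ESXi's actual computation is internal to the VMkernel):

# Illustrative sketch of IP hash-based uplink selection (not ESXi's actual algorithm).
# The same source/destination pair always maps to the same uplink; different
# destinations may map to different uplinks, spreading a single VM's traffic.
import ipaddress

def pick_uplink(src_ip, dst_ip, num_uplinks):
    """Pick an uplink index from a hash of the source and destination IP addresses."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

print(pick_uplink("10.0.0.15", "10.0.1.20", 2))   # one destination -> one uplink
print(pick_uplink("10.0.0.15", "10.0.1.21", 2))   # another destination may land on the other uplink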

Explicit failover order:- The last option, explicit failover order, isn't really a load-balancing policy; instead, it uses the user-specified failover order.

NIC teaming failover detection methods:-

• The link status failover detection method
• The beacon-probing failover detection method

The link status failover detection method:- Failure of an uplink is identified by the link status provided by the physical network adapter.
The downside to link status failover detection is its inability to identify misconfigurations or pulled cables upstream of the directly connected physical switch.


Beacon-Probing Failover detection method:- ESXi/ESX periodically broadcasts beacon packets from all uplinks in a team. The physical switch is expected to forward all packets to other ports on the same broadcast domain. Therefore, a team member is expected to see beacon packets from other team members. If an uplink fails to receive three consecutive beacon packets, it is marked as bad. The failure can be due to the immediate link or a downstream link.
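
A small sketch of the "three missed beacons" rule described above (purely illustrative; the real logic lives inside ESXi):

# Illustrative sketch of beacon-probing failure detection (not ESXi internals).
# An uplink is marked bad once it misses three consecutive beacon packets.
MISSED_BEACONS_THRESHOLD = 3

def update_uplink_state(consecutive_missed, beacon_received):
    """Return (new missed-beacon count, True if the uplink should now be marked bad)."""
    consecutive_missed = 0 if beacon_received else consecutive_missed + 1
    return consecutive_missed, consecutive_missed >= MISSED_BEACONS_THRESHOLD

# Hypothetical sequence of beacon arrivals on one uplink.
missed = 0
for received in (True, False, False, False):
    missed, marked_bad = update_uplink_state(missed, received)
    print(f"beacon received={received}, consecutive missed={missed}, marked bad={marked_bad}")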

Storage-VMware

VMFS:-
vSphere Virtual Machine File System (VMFS) is similar to NTFS for Windows Server and EXT3 for Linux. Like these file systems, it is native; it's included with vSphere and operates on top of block storage objects. If you're leveraging any form of block storage in vSphere, you're using VMFS. VMFS creates a shared storage pool that is used for one or more VMs.
Why is it different from NTFS and EXT3?
Clustered File system-
Simple and transparent distributed locking mechanism-
Steady-state I/O with High throughput at a low CPU overhead-
Locking is handled using metadata in a hidden section of the file system-


VMFS metadata :- The metadata portion of the file system contains critical information in the form of on-disk lock structures (files), such as which ESXi host is the current owner of a given VM in the clustered file system, ensuring that there is no contention or corruption of the VM.

When do VMFS metadata updates occur?
The creation of a file in the VMFS datastore- (powering on a VM, creating/deleting a VM, or taking a snapshot, for example)

Changes to the structure of the VMFS file system itself- (extending the file system or adding a file system extent)

Actions that change the ESXi host that owns a VM- (vMotion and vSphere HA that cause VM files ownership change)

VMFS extents:-
VMFS can reside on one or more partitions, called extents, in vSphere-

VMFS metadata is always stored in the first extent-

In a single VMFS-3 datastore, 32 extents are supported for a maximum size of up to 64 TB-
Except for the first extent, where the VMFS metadata resides, removing a LUN supporting a VMFS-3 extent will not make the spanned VMFS datastore unavailable-

Removing an extent affects only the portion of the datastore supported by that extent-

VMFS-3 and VMs are relatively resilient to a hard shutdown or crash-

VMFS-3 allocates the initial blocks for a new file (VMs) randomly in the file system (or Extents), and subsequent allocations for that file are sequential-

A VMFS datastore that spans multiple extents across multiple LUNs spreads I/O across multiple LUN queues, reducing pressure on any single LUN queue-

VMFS block sizes?
** The VMFS block size determines the maximum file size on the file system-
** Once set, the VMFS-3 block size cannot be changed-
** Only hosts running ESXi 5.0 or later support VMFS-5
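
For reference, the commonly documented VMFS-3 block-size-to-maximum-file-size pairs are listed below as a quick memory aid (values as generally documented for VMFS-3; confirm against the VMware documentation for your exact version):

# Commonly documented VMFS-3 block sizes and the maximum file size each allows.
vmfs3_max_file_size = {
    "1 MB": "256 GB",
    "2 MB": "512 GB",
    "4 MB": "1 TB",
    "8 MB": "2 TB",
}

for block_size, max_file in vmfs3_max_file_size.items():
    print(f"VMFS-3 block size {block_size}: maximum file size {max_file}")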

Advantages of using VMFS-5 over VMFS-3:-
VMFS-5 offers a number of advantages over VMFS-3:-

A maximum size of 64 TB using only a single extent (multiple extents are still limited to 64 TB total)-

A single unified block size of 1 MB that supports the creation of files up to 2 TB-

More efficient sub-block allocation: only 8 KB, compared to VMFS-3's 64 KB-

Not limited to 2 TB for the creation of physical-mode RDMs-

Non-disruptive, in-place, online upgrade from VMFS-3 to VMFS-5 datastores-

Multipathing:-
Multipathing is the term used to describe how a host, such as an ESXi host, manages storage devices that have multiple ways (paths) to access them. Multipathing is extremely common in Fibre Channel and FCoE environments and is also found in iSCSI environments. Multipathing for NFS is also available but is handled much differently than for block storage.

Pluggable Storage Architecture (PSA):-
The elements of the vSphere storage stack that deal with multipathing are collectively called the Pluggable Storage Architecture (PSA)-


What are LUN queues?
Queues exist on the server (in this case, the ESXi host), generally at both the HBA and LUN levels. Queues also exist on the storage array. Block-centric storage arrays generally have these queues at the target ports, array-wide, and array LUN levels, and finally at the spindles themselves.

How to view the LUN queue:-
To determine how many outstanding items are in the queue, use resxtop, hit 'U', and go to the storage screen to check the QUED column-

vSphere Storage APIs:-
vSphere Storage APIs for Array Integration (VAAI)
vSphere Storage APIs for Storage Awareness (VASA)
vSphere Storage APIs for Site Recovery (VASR)
vSphere Storage APIs for Multipathing (          )
vSphere Storage APIs for Data Protection (VADP)
VAAI:- A means of offloading storage-related operations from the ESXi hosts to the storage array.
VASA:- enables more advanced out-of-band communication between storage arrays and the virtualization layer.
vSphere Storage APIs for Site Recovery (VASR):-
SRM (Site Recovery Manager) is an extension to VMware vCenter Server that delivers a disaster recovery solution. SRM can discover and manage replicated datastores, and automate migration of inventory between vCenter Server instances. VMware vCenter Site Recovery Manager (SRM) provides an API so third party software can control protection groups and test or initiate recovery plans.
vSphere Storage APIs for Multipathing:- Enable third-party storage vendors to provide their own multipathing software for use with vSphere.
vSphere Storage APIs for Data Protection  (VADP):- Enables third party backup software to perform centralized virtual machine backups without the disruption and overhead of running backup tasks from inside each virtual machine.

Profile driven storage:-
Working in conjunction with VASA, which facilitates communication between storage arrays and the virtualization layer, the principle behind profile-driven storage is simple: allow vSphere administrators to build VM storage profiles that describe the specific storage attributes that a VM requires. Then, based on that VM storage profile, allow vSphere administrators to place VMs on datastores that are compliant with that storage profile, thus ensuring that the needs of the VM are properly serviced by the underlying storage.

Keep in mind that the bulk of the power of profile-driven storage comes from the interaction with VASA (Virtualization layer to array communication) to automatically gather storage capabilities from the underlying array. However, you might find it necessary or useful to define one or more additional storage capabilities that you can use in building your VM storage profiles.




Please remember:- Keep in mind that you can assign only one user-defined storage capability per datastore. The VASA provider can also only assign a single system-provided storage capability to each datastore. This means that datastores may have up to 2 capabilities assigned: one system-provided capability and one user-defined capability.

Upgrade from VMFS-3 to VMFS-5:-
Yes, it is possible to non-disruptively upgrade from VMFS-3 to VMFS-5.
VMFS-5 to VMFS-3 downgrade:- No, it is not possible to downgrade from VMFS-5 to VMFS-3.

Note 1:- Upgraded VMFS-5 partitions will retain the partition characteristics of the original VMFS-3 datastore, including file block-size, sub-block size of 64K, etc.

Note 2:- Increasing the size of an upgraded VMFS datastore beyond 2 TB changes the partition type from MBR to GPT. However, all other features/characteristics remain the same.


Disk signature:-
Every VMFS datastore has a Universally Unique Identifier (UUID) embedded in the file system. This unique UUID differentiates one datastore from another. When you clone or replicate a VMFS datastore, the copy will be a byte-for-byte copy of the original, right down to the UUID. If you attempt to mount the LUN that holds the copy of the VMFS datastore, vSphere will see it as a duplicate and will require that you do one of two things:

Unmount the original and mount the copy with the same UUID.
Keep the original mounted and write a new disk signature to the copy.

Raw Device Mapping ( RDM):-
Normally VMs use shared storage pool mechanisms like VMFS or NFS datastores. But there are certain use cases where a storage device must be presented directly to the guest operating system inside a VM.

vSphere provides this functionality via a "Raw Device Mapping". RDMs are presented to your ESXi hosts and then via vCenter Server directly to a VM. Subsequent data I/O bypasses the VMFS and volume manager completely. I/O management is handled via a mapping file that is stored on a VMFS volume.
You can configure RDMs in two different modes: pRDM and vRDM.

Physical Compatibility Mode (pRDM):- In this mode, all I/O passes directly through to the underlying LUN device, and the mapping file is used solely for locking and vSphere management tasks. You might also see this referred to as a pass-through disk.

Virtual Mode (vRDM):- In this mode, all I/O travels through the VMFS layer.

Advantages and disadvantages of pRDM and vRDM:-
1) pRDMs do not support vSphere snapshots.
2) pRDMs cannot be converted to virtual disks through Storage vMotion.
3) pRDMs can easily be moved to a physical host.

RDM use cases:-
A) Windows cluster services.
B) When using a storage array's application-integrated snapshot tools.
C) When virtualized configurations are not supported for a piece of software (software such as Oracle is an example).

Difference between a VMFS datastore and an NFS datastore:-
1. The NFS file system is not managed or controlled by the ESXi host.
2. All file-system considerations (such as HA and performance tuning) are part of the vSphere networking stack, not the storage stack.

Remember:- NFS datastores support only thin provisioning unless the NFS server supports the VAAIv2 NAS extensions and vSphere has been configured with the vendor-supplied plug-in.

Virtual SCSI adapters available in VMware:-
1. BusLogic Parallel: Well supported by older guest OSes but doesn't perform well; various Linux flavors use it.
2. LSI Logic Parallel: Well supported by newer guest OSes; the default for Windows Server 2003.
3. LSI Logic SAS: Guest OSes are phasing out support for parallel SCSI in favor of SAS; this is the default SCSI adapter suggested for VMs running Windows Server 2008 and 2008 R2.
4. VMware Paravirtual: Paravirtualized devices (and their corresponding drivers) are specifically optimized to communicate more directly with the underlying VM Monitor (VMM). They deliver higher throughput and lower latency, and they usually have a lower CPU impact for I/O operations, but at the cost of guest OS compatibility.
VM storage profile:-
VM storage profiles are a key component of profile-driven storage. By leveraging system-provided storage capabilities supplied by a VASA provider (which is provided by the storage vendor), as well as user-defined storage capabilities, you can build VM storage profiles that help shape and control how VMs are allocated to storage.
Keep in mind that a datastore may have, at most, two capabilities assigned: one system-provided capability and one user-defined capability.
Other than Raw Device Mapping, what is another way to present storage devices directly to a VM?
Other than RDMs, you also have the option of using an in-guest iSCSI initiator to bypass the hypervisor and access storage directly. Keep in mind the following points about in-guest iSCSI initiators:
- Storage presented this way is separate from your VMFS or NFS datastores.
- More physical NICs are needed in your server to define redundant connections and multipathing separately, because you are bypassing the hypervisor level.
- No Storage vMotion.
- No vSphere snapshots.

iSCSI:-
Ideal for customers who are just getting started with vSphere and have no existing Fibre Channel SAN infrastructure.

Fibre Channel:-
Ideal for customers who have VMs with high-bandwidth (200 MBps+) requirements (not in aggregate, but individually).

NFS:-
Ideal for customers with many VMs that each have a low bandwidth requirement (individually and in aggregate), amounting to less than a single link's worth of bandwidth.

vSphere storage models:-
vSphere has three fundamental storage presentation models:
(1) VMFS on block storage
(2) RDM and in-guest iSCSI initiator
(3) NFS.
The most flexible configurations use all three, predominantly via a shared-container model and selective use of RDMs.
Basic Storage Performance parameters:-

MBps:- Data transfer speed per second in megabytes.

IOps / throughput:- The maximum number of read and write (input and output) operations per second.

Disk latency:- Time it takes for the selected sector to be positioned under the read/write head.

How to quickly estimate storage configuration:-




Bandwidth:- For NFS it is the NIC link speed; for FC it is the HBA speed; for iSCSI it is the NIC speed.

IOps:- In all cases, the throughput (IOps) is primarily a function of the number of spindles (HDDs), assuming no cache benefit and no RAID loss.

A quick rule of thumb:
Total IOps = IOps per spindle × the number of spindles.
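
Worked as a quick sketch (the per-spindle figures are typical ballpark numbers for illustration, not measurements from any specific array):

# Quick estimate of aggregate back-end IOps: IOps per spindle x number of spindles.
# Ballpark per-spindle values only (no cache benefit, no RAID write penalty).
typical_iops_per_spindle = {
    "7.2K RPM SATA": 80,
    "10K RPM SAS": 120,
    "15K RPM SAS": 180,
}

spindles = 24
for disk_type, iops in typical_iops_per_spindle.items():
    print(f"{spindles} x {disk_type}: ~{spindles * iops} IOps")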


Latency:- Latency is generally measured in milliseconds, though it can reach tens of milliseconds in situations where the storage array is overtaxed.