Chapter 6 - VM Storage Policies and Virtual Machine Provisioning

This chapter looks at some sample virtual machine (VM) provisioning workflows. You have already learned the various vSAN capabilities that you can add to a VM storage policy and that VMs deployed on a vSAN datastore can use. This chapter covers how to create the appropriate VM storage policy using these capabilities, and also discusses the layout of these VM storage objects as they are deployed on the vSAN datastore.

Policy Setting: Number of Failures to Tolerate=1

Let’s begin by creating a very simple VM storage policy. Then we can examine what happens when a VM is deployed to a vSAN datastore using this policy. This first policy has a single capability setting: number of failures to tolerate set to 1. We are going to use RAID-1 mirroring to implement failures to tolerate initially; later on, we will look at RAID-5 and RAID-6 configurations for the VM objects. This means that any VM deployed on the vSAN datastore with this policy will be configured with an additional mirror copy (replica) of its data, so that if there is a single failure in the vSAN cluster, a full complement of the VM’s storage objects is still available. Let’s see this in action, but before we do, let’s visualize the expected results, as shown in Figure 6.1.

Figure 6.1 - vSAN I/O flow: Number of failures to tolerate set to 1

In this vSAN environment, there are a number of ESXi hosts. This is a hybrid configuration, where each ESXi host has a single disk group with a single solid-state disk (SSD) and a single magnetic disk. The vSAN cluster has been enabled, and the ESXi hosts have formed a single vSAN datastore. To this datastore, we will deploy a new VM, as demonstrated in Figure 6.2.

Let’s start the process by revisiting the creation of a VM storage policy. This procedure was discussed in significant detail in Chapter 4, “VM Storage Policies on vSAN,” where you also learned the various capabilities that you could use for VMs deployed on the vSAN datastore. As you might recall from Chapter 4, the eight capabilities that can be present in a VM storage policy are as follows:

  • Number of failures to tolerate
  • Number of disk stripes per object
  • IOPS limit for object
  • Disable object checksum
  • Failure tolerance method
  • Flash read cache reservation (hybrid configurations only)
  • Object space reservation
  • Force provisioning

We will keep this first VM storage policy simple, with just a single capability, number of failures to tolerate set to 1.

To begin, click the icon in the VM storage policies page in the vSphere Web Client to create a new policy. This will open the create new VM storage policy screen, as shown in Figure 6.2.

Figure 6.2 - Create a VM storage policy

The next screen displays information about rule-sets. Rule-sets are a way of grouping multiple rules together. In this way, VMs can be deployed on different datastores, depending on which selection criteria are satisfied. For the purposes of this exercise, we are creating only a single rule-set. The wizard displays additional information about rule-sets, as shown in Figure 6.3.

Figure 6.3 - Rule-sets

On the next screen, we can begin to add our rule-set for vSAN. The first step is to change the vendor from none to vSAN, as shown in Figure 6.4. This adds an additional item to the wizard, namely the <Add rule> drop-down. If you click <Add rule>, the list of capabilities supported by vSAN is shown.

Figure 6.4 - vSAN capabilities

For our first policy, the capability that we want to add is number of failures to tolerate, and we will set this to 1, as shown in Figure 6.5.

Figure 6.5 - Number of failures to tolerate set to 1

There are a number of other features on this part of the wizard, namely the add tag-based rules and add another rule set buttons. These are beyond the scope of this book, but you can find additional information in the official vSphere documentation. One additional point to make is the “storage consumption model” shown on the right-hand side of the wizard. This gives administrators a good idea of how much space will be consumed depending on the requirements placed in the policy. For example, using a RAID-1 configuration to tolerate one failure means that two copies of the data are created. Therefore, a 100 GB VMDK would consume 200 GB of space on the vSAN datastore, as highlighted in the storage consumption model.
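
To make the consumption model concrete, here is a minimal Python sketch of the arithmetic the wizard performs for RAID-1 policies. The function name and parameters are illustrative only and are not part of any vSAN API.

```python
# Illustrative sketch of the RAID-1 storage consumption calculation.
def raid1_consumed_gb(vmdk_size_gb, failures_to_tolerate):
    """RAID-1 mirroring keeps FTT + 1 full copies of the data."""
    copies = failures_to_tolerate + 1
    return vmdk_size_gb * copies

# A 100 GB VMDK with number of failures to tolerate = 1:
print(raid1_consumed_gb(100, 1))  # 200, matching the consumption model
```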

Clicking Next moves the wizard on to the matching resources window, and at this point the vSAN datastore should be displayed, as shown in Figure 6.6. This means that the contents of the VM storage policy (i.e., the capabilities) are understood by the vSAN datastore.

Figure 6.6 - Compatible storage resources

Note that in the initial release of vSAN, just because the vSAN datastore is shown in the compatible storage window does not mean that the vSAN datastore can successfully provision the VM. It could be that the policy contains an unrealistic stripe width or failures to tolerate (FTT) setting that cannot be met by the vSAN cluster. This screen simply means that vSAN understands the policy contents. This is an important distinction. It was addressed in later versions of vSAN, where the vSAN datastore also appears as incompatible if the cluster configuration cannot meet the contents of the policy.

Review your policy and click Finish to create it. Congratulations! You have created your first VM storage policy. We will now go ahead and deploy a new VM using this policy. The process for deploying a new VM is exactly the same as before. The only difference is at the storage-selection step. In the first release of vSAN, by default, no VM storage policy is selected; it is set to none, as shown in Figure 6.7.

Figure 6.7 - No policy selected in vSAN 5.5

This behavior was changed in vSAN 6.0 with the introduction of a default policy for the vSAN datastore, called the vSAN default storage policy. The rule-set for this default policy is number of failures to tolerate set to 1 and number of disk stripes per object set to 1. Now when a VM is created and the vSAN datastore is selected, a policy called the datastore default is selected. For vSAN datastores, this is the vSAN default storage policy, as shown in Figure 6.8.

Figure 6.8 - Datastore default policy selected in vSAN 6.0

However, when our new VM storage policy (My first policy) is selected, you can see that the vSAN datastore is compatible, as shown in Figure 6.9.

Figure 6.9 - My first policy is selected, and the vSAN datastore is compatible

An important point to note here is that in the initial release of vSAN, this compatibility check was just like the “matching resources” section of the create new VM storage policy wizard: it simply means that the vSAN datastore understands the contents of the policy. It does not mean that the vSAN cluster can meet the requirements; this will only be known when the VM is actually deployed. Once again, improvements made in vSAN 6.0 added validation checks to ensure that the vSAN cluster can meet the requirements in the policy. This includes verifying that there are enough hosts in the cluster to meet the number of failures to tolerate requirement and that there are enough capacity devices to meet the number of disk stripes per object requirement.

Once the VM has been deployed, we have the ability to check the layout of the VM’s objects. Navigate to the VM view by clicking Monitor, and then selecting Policies, as shown in Figure 6.10. From here, using the physical disk placement tab, we can see the layout of the VM’s storage objects: the VM home namespace and the VM disk files (VMDKs). The VM home namespace is where the .vmx file and other configuration files required by a VM reside. The storage objects that make up a VM on the vSAN datastore are discussed in detail in Chapter 5, “Architectural Details.”

Figure 6.10 - Compliance status is compliant

This view is in a different location in the original versions of vSAN. In earlier versions, navigate to the VM view, click Manage, and then select VM Storage Policies.

As you can see, both objects are compliant. In other words, they meet the capabilities defined in the VM storage policy. This means that this VM can tolerate a failure in the vSAN cluster and still have a full complement of the storage objects available. If we now select the physical disk placement tab for either of the objects (VM home or hard disk), we can see that there is a RAID-1 (mirror) configuration around the components. See Figure 6.11.

Figure 6.11 - Physical disk placement

Policy Setting: Failures to Tolerate=1, Stripe Width=2

Let’s try another VM storage policy setting that adds another capability. In this case, we will use a cluster with more resources than in the first example to facilitate the additional requirements. This time we will explicitly request a number of failures to tolerate set to 1 and a number of disk stripes per object set to 2. Let’s build out that VM storage policy, deploy a VM with that policy, and see how it affects the layout of the various VM storage objects. In this scenario, we expect a RAID-1 configuration mirroring two RAID-0 stripe configurations, resulting in four disk components: each RAID-1 replica is a RAID-0 stripe made up of two components. Figure 6.12 shows how this will look from a logical perspective.

Figure 6.12 - vSAN I/O flow: striping, two hosts
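
To make the expected layout concrete, here is a small Python sketch of the component arithmetic described above. The names are illustrative and not part of any vSAN API; witness components are counted separately and are covered later in this section.

```python
# Illustrative sketch: counting the data components produced by RAID-1
# mirroring combined with striping, before any witnesses are added.
def raid1_data_components(failures_to_tolerate, stripe_width):
    replicas = failures_to_tolerate + 1   # RAID-1 mirror copies
    return replicas * stripe_width        # each replica is a RAID-0 stripe

print(raid1_data_components(1, 2))  # 4: two replicas, each a two-way stripe
```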

Now, let’s create the VM storage policy and then provision a VM and see whether the result matches theory.

When creating the new policy, the vendor vSAN is once again selected to display specific capabilities of vSAN in the rule-sets as shown in Figure 6.13. To meet the necessary VM requirements, we select number of disk stripes per object and set this to 2, and we set number of failures to tolerate to 1. The number of disk stripes defined is a minimum number, so depending on the size of the virtual disk and the size of the capacity tier devices, a virtual disk might end up being striped across multiple disks or hosts.

Figure 6.13 - VM storage policy with failures to tolerate = 1 and stripe width = 2

Now that we have created a new VM storage policy, let’s take a look at the VM provisioning workflow, beginning with Figure 6.14.

Figure 6.14 - The vSAN datastore is compatible for policy ftt=1,sw=2.

In this example, we explicitly select the newly created policy called ftt=1,sw=2. Now you see that the available datastores are split into two distinct categories:

  • Compatible
  • Incompatible

As you can see, after selecting the newly created VM storage policy, only the vSAN datastore is compatible because it is the only one that understands the capabilities that were placed in the VM storage policy. The other datastore (in this case, a local VMFS datastore, but it could also be SAN-based VMFS or NFS) does not understand the policy requirements and so is placed in the incompatible category, though it can still be selected should you want to do so. If you do choose an incompatible datastore, you will be alerted to the fact that the datastore does not match the given VM storage policy, and the policy will be shown as not applicable.

After we have deployed the VM, we will examine the physical disk layout again, as shown in Figure 6.15.

Figure 6.15 - Physical disk placement for policy of ftt=1,sw=2

As you can see in Figure 6.15, a RAID-1 configuration has been created, adhering to the number of failures to tolerate requirement specified in the VM storage policy. However, now you see that additionally each replica is made up of a RAID-0 stripe configuration, and each stripe contains two components, adhering to the number of disk stripes per object requirement of 2.

We also have a witness component created. Now it is important to point out that the number of witness components is directly related to how the components are distributed across the hosts and disks in the cluster. Depending on the size of the vSAN cluster, a number of additional witness components might have been necessary to ensure that greater than 50% of the components of this VM’s objects remained available in the event of a failure, especially a host failure. In the case of a four-node vSAN cluster, because the components are spread out across unique ESXi hosts, it is sufficient to create a single witness disk and keep greater than 50% of the components available when there is a failure in the cluster.

Note that witness components were the only way to configure for a quorum in the initial release of vSAN. Since vSAN 6.0, a new quorum mechanism is available which relies on each component having a vote. This means that there may be occasions when there are no witness components required, and that quorum can be achieved via component votes alone.
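
The quorum rule itself is simple enough to express in a few lines. The following is a hedged sketch of the rule described above, not vSAN’s actual implementation; the component counts are taken from this section’s ftt=1,sw=2 example.

```python
# Illustrative sketch of the quorum rule: an object stays available only
# while strictly more than 50% of its components (or votes, on vSAN 6.0
# and later) remain accessible.
def has_quorum(available, total):
    return available * 2 > total  # strictly greater than 50%

# FTT=1, SW=2: four data components plus one witness gives five in total.
print(has_quorum(5 - 2, 5))  # True: losing two components leaves 3 of 5
print(has_quorum(5 - 3, 5))  # False: losing three components breaks quorum
```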

An interesting point to note is that the VM home namespace does not implement the number of disk stripes per object requirement. The VM home namespace only implements the number of failures to tolerate requirement. Therefore, if the VM home namespace is examined, we see that the components are not in a RAID-0 configuration, as shown in Figure 6.16.

Figure 6.16 - The VM home namespace does not implement stripe width capability

Policy Setting: Failures to Tolerate=2, Stripe Width=2

In this next example, we create another VM storage policy that has the number of disk stripes per object set to 2 and the number of failures to tolerate also set to 2. This implies that any VM deployed with this policy on the vSAN cluster should be able to tolerate up to two different failures, be they host, network, or disk failures. Considering the “two-host failure” capability specified and the number of disk stripes of 2, the expected disk layout is as shown in Figure 6.17.

Figure 6.17 - vSAN I/O flow: Tolerate two failures and stripe width set to 2

There are a few considerations with regard to this configuration. Because we are continuing with a RAID-1 mirroring configuration to tolerate failures, tolerating n failures requires n+1 copies of the data and 2n+1 hosts in the cluster. Therefore, to tolerate two failures, there will be three copies of the data, and there must be a minimum of five hosts in the cluster.
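
This sizing rule is easy to express as a quick calculation. The sketch below simply restates the n+1 copies and 2n+1 hosts rule from the paragraph above; the function name is illustrative only.

```python
# Illustrative sketch of the RAID-1 sizing rule for tolerating n failures.
def raid1_requirements(n_failures):
    copies = n_failures + 1         # n + 1 copies of the data
    min_hosts = 2 * n_failures + 1  # 2n + 1 hosts in the cluster
    return copies, min_hosts

print(raid1_requirements(1))  # (2, 3): two copies, at least three hosts
print(raid1_requirements(2))  # (3, 5): three copies, at least five hosts
```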

First, the policy is created with the desired requirements, as shown in Figure 6.18.

Figure 6.18 - Failures to tolerate = 2, stripe width = 2

Next we deploy a new VM with this new policy, and as expected the vSAN datastore is the only one that shows up as compatible when the VM storage policy ftt=2,sw=2 is selected, as shown in Figure 6.19.

Figure 6.19 - The vSAN datastore is compatible with the ftt=2,sw=2 policy

Now that we have provisioned a VM, the physical disk placement can be examined to see how the VM storage objects have been laid out across hosts and disks.

First, let’s look at the VMDK or hard disk 1 of this VM, as shown in Figure 6.20.

Figure 6.20 - Physical disk placement for hard disk using the ftt=2,sw=2 policy

One thing to note is that the location of the physical disk placement view has changed between releases. In the original release of vSAN, administrators navigated to Manage > VM Storage Policies, as shown here. In later releases, this view was changed to Monitor > VM Storage Policies. You will see screenshots from the different vSAN versions used throughout this book.

Now we see that for the virtual disk of this VM, vSAN has implemented an additional RAID-0 stripe configuration. For RAID-0 stripes, all components in at least one of the RAID-0 configurations must remain intact, and that is why a third RAID-0 stripe configuration has been created. You might assume that if the first component in the first RAID-0 stripe was lost, and the second component of the second RAID-0 stripe was lost, vSAN might be able to combine the remaining components to keep the storage object intact. This is not the case. Therefore, to tolerate two failures in the cluster, a third RAID-0 stripe configuration is necessary, because two failures might take out the other two RAID-0 stripes. This is also why all of these RAID-0 configurations are mirrored in a RAID-1 configuration. The bottom line with this policy setting is that any two hosts in the cluster are allowed to fail and the VM’s data remains accessible. As you can see in Figure 6.20, components are stored on six different ESXi hosts in this eight-node vSAN cluster: mia-cg07.esx11, mia-cg07.esx13, mia-cg07.esx14, mia-cg07.esx15, mia-cg07.esx16, and mia-cg07-esx018.

Next, let’s look at the VM home namespace, as shown in Figure 6.21.

Figure 6.21 - Physical disk placement for VM home with the ftt=2,sw=2 policy

Previously, it was stated that the VM home namespace does not implement the number of disk stripes per object policy setting, but that it does implement the number of failures to tolerate. There is no RAID-0 configuration, but we can now see that there are three replicas in the RAID-1 mirror configuration to meet the number of failures to tolerate setting of 2 in the VM storage policy. What can also be observed here is an increase in the number of witness components. Remember that greater than 50% of the components of the VM home namespace object (or greater than 50% of the votes, depending on the quorum mechanism used) must be available for this object to remain online. Therefore, if two replicas were lost, there would still be one replica (i.e., one copy of the VM home namespace data) and two witness components available; greater than 50% of the components would still be available even if two failures took out two replicas of this configuration.

Policy Setting: Failures to Tolerate=1, Object Space Reservation=50%

This next scenario explores a different capability. As explained in previous chapters, all objects deployed on vSAN are thinly provisioned by default. This means that they initially consume no disk space, but grow over time as the guest OS running inside of the VM requires additional space. Using the object space reservation policy setting in the VM storage policy, however, a VM can be deployed with a certain percentage of its disk space reserved in advance. By default, object space reservation is 0%, which is why VMs deployed on the vSAN datastore are thin. If you want to have all the space reserved for a VM (similar to a traditional “thick” disk), you can do this by setting the object space reservation to be 100%. We will go for somewhere in between. Please note, as highlighted already in this book, that object space reservation can only be set to 0% or 100% if deduplication and compression space efficiency techniques are enabled on the vSAN datastore.

Let’s start with an example that reserves 50% of the disk space at VM deployment time, as shown in Figure 6.22. The percentage value refers to the size of the VMDK. If the VMDK is 100 GB at deployment time, an object space reservation value of 50% reserves 50 GB of disk space. However, as per the storage consumption model, number of failures to tolerate = 1 is also implied in any policy, so each replica reserves 50 GB. Therefore, the initially reserved storage space is 100 GB.

Figure 6.22 - Object space reservation
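
The reservation arithmetic is worth separating into two figures: the logical reservation reported against the VMDK and the total capacity reserved on the backend once the implied RAID-1 mirroring is taken into account. The following Python sketch is illustrative only; the function names are not part of any vSAN API.

```python
# Illustrative sketch of the object space reservation arithmetic.
def logical_reservation_gb(vmdk_size_gb, osr_percent):
    """Reservation reported against the VMDK itself."""
    return vmdk_size_gb * osr_percent / 100.0

def backend_reservation_gb(vmdk_size_gb, osr_percent, ftt=1):
    """Total reserved capacity across all RAID-1 replicas (FTT + 1)."""
    return logical_reservation_gb(vmdk_size_gb, osr_percent) * (ftt + 1)

print(backend_reservation_gb(100, 50))  # 100.0 GB reserved for a 100 GB VMDK
print(logical_reservation_gb(40, 50))   # 20.0 GB, as seen in the Files view
```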

Once the policy has been created, the VM may be deployed with the correct policy chosen, as shown in Figure 6.23.

Figure 6.23 - The vSAN datastore understands the object space reservation requirement

The vSAN datastore understands the policy setting and is shown as compatible, whereas the other datastore is marked as incompatible. Now, as already pointed out, we did not specify a number of failures to tolerate requirement. To reiterate, a number of failures to tolerate setting of 1 is always inferred, even if it is not explicitly specified, and is therefore implemented whenever a policy does not include this requirement. To confirm, the physical disk placement views can be used to check whether a RAID-1 configuration is indeed in place. Only if FTT is explicitly set to 0 in the policy will you not have a RAID-1 configuration.

First, we verify the RAID-1 configuration in the VM home namespace view, as shown in Figure 6.24.

Figure 6.24 - VM Home: The number of failures to tolerate is inferred even if not specified in the policy

We can also confirm that the hard disk also has a mirrored configuration to meet a number of failures to tolerate policy setting even though it was not explicitly placed in the policy, as shown in Figure 6.25.

Figure 6.25 - VMDK: The number of failures to tolerate is inferred even if not specified in the policy

However, let’s return to the additional requirement we specified initially. That requirement was to reserve 50% of the disk space required by our VM. To see how much space the VMDK is consuming, navigate to Datastore > Manage > Files in the vSphere Web Client. The VM was initially deployed with a 40 GB VMDK, and we requested an object space reservation of 50%, as shown in Figure 6.26.

Figure 6.26 - Object space reservation = 50% reserves 20 GB out of 40 GB.

As stated, we can see that the 40 GB VM hard disk file deployed with this VM has reserved 20 GB of disk space, equal to the 50% that was requested in the VM storage policy for this VM.

Policy Setting: Failures to Tolerate=1, Object Space Reservation=100%

Let’s look at one last policy with object space reservation. This is to reserve the full 100% of our VMDK. The same steps are followed as before, which is to create a policy that contains an object space reservation requirement, but this time the value is 100% rather than 50%, as shown in Figure 6.27. This means, as you might have already guessed, that we reserve all of the VM’s disk space up front, similar to a thick format VM disk file.

As per the steps that have already been covered in the previous section, the policy requires only an object space reservation setting, but this time set to 100%. As before, a number of failures to tolerate setting of 1 is implied, even though it isn’t explicitly stated in the policy.

Figure 6.27 - Object space reservation = 100%

Note now that the storage consumption model shows that the amount of space initially reserved with this policy equals the total storage space consumed; for example, a 100 GB VMDK reserves 200 GB. When the policy is created, we can once again select it during VM deployment. We can verify that the vSAN datastore is compatible with our policy selection, as shown in Figure 6.28.

Figure 6.28 - The vSAN datastore is compatible with the VM storage policy

The next step is to determine how much disk space the VMDK file is actually reserving. Once again, you can use the Datastore > Manage > Files view to see how much space has been reserved up front for this VMDK, as shown in Figure 6.29. Because the object space reservation value was set to 100%, we should expect the full 40 GB to be reserved.

Figure 6.29 - 40 GB of disk space is reserved

As expected, the full 40 GB of disk space is reserved up front.

Policy Setting: RAID-5

Let us now look at a new capability introduced in vSAN 6.2, namely RAID-5. To configure a RAID-5 object, select the policy setting “failure tolerance method” and set it to RAID5/6, as shown in Figure 6.30. Note the storage consumption model. For a 100 GB VMDK, this consumes only 133.33 GB, which is 33% above the actual size of the VMDK. Previously, when we created a policy for RAID-1 objects, an additional 100% of capacity was consumed because of the mirror copies.

Because number of failures to tolerate equal to 1 is inferred without us having to add it to the policy, the configuration will roll out as RAID-5. If number of failures to tolerate is set to 2, a RAID-6 configuration for the objects is implemented instead. We have added the failures to tolerate policy setting to the example in Figure 6.30 just for demonstration purposes.

Figure 6.30 - RAID-5 policy setup

If a VM is now deployed with this policy, the physical disk placement can be examined as before, and we should now observe a RAID-5 layout across four disks and four hosts. Figure 6.31 shows the physical disk placement view, and as described, we see a RAID-5 configuration for the object.

Figure 6.31 - RAID-5 object as seen from physical disk placement view

Note that the VM home namespace object also inherits a RAID-5 configuration.

Policy Setting: RAID-6

Besides RAID-5, vSAN 6.2 also introduced an option to tolerate two failures in a capacity-efficient manner, called RAID-6. To configure a RAID-6 object, select the policy setting “failure tolerance method,” set it to RAID5/6, and set the policy setting “failures to tolerate” to 2, as shown in Figure 6.32. Note the storage consumption model. For a 100 GB VMDK, this consumes only 150 GB, which is 50% above the actual size of the VMDK. Previously, when we created a policy for RAID-1 objects with failures to tolerate set to 2, an additional 200% of capacity was consumed because of the mirror copies. This means that a 100 GB VMDK required 300 GB of disk capacity at the backend.

Figure 6.32 - RAID-6 policy setup
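
The capacity overheads quoted above follow directly from the component layouts vSAN uses: RAID-5 is a 3+1 layout (three data segments plus one parity) and RAID-6 is 4+2. The sketch below restates that arithmetic for comparison with RAID-1; the function name is illustrative only.

```python
# Illustrative sketch of the vSAN consumption model for mirroring versus
# erasure coding. RAID-5 uses 3 data + 1 parity segments; RAID-6 uses
# 4 data + 2 parity segments.
def consumed_gb(vmdk_size_gb, ftt, erasure_coding):
    if not erasure_coding:
        return vmdk_size_gb * (ftt + 1)  # RAID-1: full mirror copies
    if ftt == 1:
        return vmdk_size_gb * 4 / 3      # RAID-5 overhead of 1.33x
    if ftt == 2:
        return vmdk_size_gb * 6 / 4      # RAID-6 overhead of 1.5x
    raise ValueError("RAID-5/6 supports failures to tolerate of 1 or 2 only")

print(round(consumed_gb(100, 1, True), 2))  # 133.33 GB for RAID-5
print(consumed_gb(100, 2, True))            # 150.0 GB for RAID-6
print(consumed_gb(100, 2, False))           # 300 GB for RAID-1 with FTT=2
```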

If a VM is now deployed with this policy, the physical disk placement can be examined as before, and we should now observe a RAID-6 layout across six disks and six hosts. Figure 6.33 shows the physical disk placement view, and as described, we see a RAID-6 configuration for the object.

Figure 6.33 - RAID-6 object as seen from physical disk placement view

Note that the VM home namespace object also inherits a RAID-6 configuration.

Policy Setting: RAID-5/6 and Stripe Width=2

It should be noted that using RAID-5/6 does not prevent the use of a stripe width in the policy. All this means is that each part of a RAID-5/6 object will be striped with two components in a RAID-0 configuration. In this next example, a VM was deployed with a RAID-5 configuration as per the previous example, but the policy includes a number of disk stripes per object set to 2. This led to the following object configuration in the physical disk placement view, as shown in Figure 6.34:

Figure 6.34 - RAID-5 object with stripe width =2 as seen from physical disk placement view

Note, however, that the VM home namespace does not implement stripe width, so it continues to have a RAID-5 configuration as per the previous example. Although the example above shows how this works with a RAID-5 configuration, the exact same principles apply to a RAID-6 configuration.
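
In terms of component counts, combining erasure coding with a stripe width simply multiplies the number of RAID-5/6 segments by the stripe width. The short sketch below illustrates this; it is a restatement of the layout described above, not a vSAN API.

```python
# Illustrative sketch: data component count when a stripe width is applied
# to a RAID-5/6 object. Each segment becomes a RAID-0 stripe.
def erasure_coded_components(ftt, stripe_width):
    segments = {1: 4, 2: 6}[ftt]  # RAID-5 has 4 segments, RAID-6 has 6
    return segments * stripe_width

print(erasure_coded_components(1, 2))  # 8 components for RAID-5 with SW=2
print(erasure_coded_components(2, 2))  # 12 components for RAID-6 with SW=2
```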

Default Policy

As you might imagine, vSAN has a default policy. This means that if no policy is chosen for a VM deployed on the vSAN datastore (VM storage policy left set to none, as per Figure 6.35), a default policy is used for the VM.

The default policy contains the following capabilities:

  • Number of failures to tolerate = 1
  • Number of disk stripes per object = 1
  • Flash read cache reservation = 0%
  • Object space reservation = not used
  • Force provisioning = disabled

Figure 6.35 - No policy selected results in the default policy being used

When the VM is deployed in the initial vSAN release, the VM storage policy is set to none, as shown in Figure 6.35. However, you will notice that the number of failures to tolerate value of 1 is still implemented when the objects are examined via the physical disk placement view. Even though the policy is set to none, a RAID-1 configuration is in place for the VM objects. This means that even if you deploy a VM without a policy, vSAN automatically provides availability via the default policy.

As shown in Figure 6.36, which is once again taken from the first vSAN release, there is no VM storage policy associated with this VM. However, if we look at the VM home, we can see that it is automatically deployed with a RAID-1 configuration, which is a mirror copy of the data. The two components that make up the RAID-1 mirror are placed on two different ESXi hosts, namely mia-cg07.esx17 and mia-cg07-esx015. This means that if one of the ESXi hosts fails, there is still a full copy of the data available. Another point to make is related to the witness component. This, too, is placed on a different ESXi host (mia-cg07-esx013) from the data components. This is to ensure that greater than 50% of the components remain available should any single ESXi host in the cluster fail. The witness acts as a vote in this configuration; it allows the VM home object to remain available in the event of a failure in the cluster.

Figure 6.36 - Number of failures to tolerate = 1 is part of the default policy

Let’s take a look at the hard disk/VMDK next. This is what the VMDK looks like when deployed to the vSAN datastore with a default policy. Because there is only a single VMDK on this VM, it is referred to as hard disk 1 in the user interface in this example. As Figure 6.37 shows, the two components that make up the RAID-1 mirror are placed on two different ESXi hosts, namely mia-cg07.esx12 and mia-cg07-esx013. The witness is placed on ESXi host mia-cg07-esx016.

As expected, all components, including the witness, are stored on different hosts to ensure availability.

Figure 6.37 - Hard disk 1 layout using default policy

One final note about the default policy in the initial version of vSAN concerns the object space reservation setting. This is not included in the default policy (but is included in later releases). Instead, the thick and thin disk format settings in the create VM wizard are implemented. If no changes are made to these settings during deployment, a lazy zeroed thick (LZT) VMDK will be deployed on the vSAN datastore, similar to having an object space reservation of 100%. This was a common question in the early days of vSAN: customers did not understand why vSAN was deploying objects that were “thick” and not thin. This is the reason. Although the default policy exists in the initial vSAN release, VMware recommends that administrators create their own policy and not rely on the default policy for deployments. VMware also cautions against editing the default policy in the initial release of vSAN; if you need to change the default policy, it is much simpler to build a policy via VM storage policies in vCenter to meet those requirements. Also note that editing the default policy can only be done at the command line of the ESXi host in the first release of vSAN, and this editing process needs to be repeated on every ESXi host in the cluster. This can lead to user errors and should be avoided if at all possible.

This all became much easier in vSAN 6.0, which introduced a default policy for the vSAN datastore, called the vSAN default storage policy. If you wish to change the default policy, you can simply edit the capability values of the policy from the vSphere Web Client. The default policy includes the same set of capabilities as before, namely tolerate one failure, stripe width of 1, no read cache reservation, no object space reservation, and no force provisioning. The failure tolerance method is not specified in the default policy, meaning that it defaults to RAID-1 (mirroring); when using an all-flash vSAN configuration, you may want to change the policy accordingly.

Now when deploying a virtual machine, once the vSAN datastore is selected, the VM storage policy is set to datastore default (whereas in vSAN 1.0 it was set to none). You no longer need to explicitly pick a policy. For the vSAN datastore, the datastore default policy is the vSAN default storage policy.

And if you are managing multiple vSAN deployments with a single vCenter Server, different default policies can be associated with different vSAN datastores. Therefore, if you have a “test&dev” cluster and a “production” cluster, there can be different default policies associated with the different vSAN datastores.

Summary

This completes the coverage of VM storage policy creation and VM deployments on the vSAN datastore. One policy setting that we did not include in this chapter was the flash read cache reservation. This is simply because this setting is not visible from a VM layout perspective in the vSphere Web Client. It is configured in exactly the same way as object space reservation: as a percentage value of the full VMDK size. For example, a flash read cache reservation setting of 1% on a 40 GB VMDK reserves 400 MB of flash read cache for that particular VM. As stated, however, there is no way to observe this reservation in the vSphere Web Client. Chapter 10, “Troubleshooting, Monitoring, and Performance,” will show how an administrator can use the Ruby vSphere Console (RVC) to examine flash read cache reservation values.
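
For reference, the read cache reservation arithmetic looks like this as a minimal sketch. Note the assumption that 1 GB is treated as 1,000 MB, which matches the 400 MB figure quoted above.

```python
# Illustrative sketch of the flash read cache reservation calculation,
# assuming decimal units (1 GB = 1000 MB) as in the example above.
def read_cache_reservation_mb(vmdk_size_gb, percent):
    return vmdk_size_gb * 1000 * percent / 100.0

print(read_cache_reservation_mb(40, 1))  # 400.0 MB for a 40 GB VMDK at 1%
```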

The other policy setting that was not discussed in this chapter was the force provisioning setting. Again, this is not something that can be readily observed in the vSphere Web Client when it is set in a policy and that policy is used for a VM deployment. If force provisioning is used to deploy a VM, the VM will be deployed on the vSAN datastore as long as one full set of storage objects can be deployed. (The behavior of objects that are deployed with force provisioning is discussed in more detail in Chapter 4.) So even though the policy may contain requirements such as failures to tolerate, stripe width, or flash read cache reservation, the VM may be deployed without any of these configurations in place when force provisioning is specified. However, the VM will be shown as out of compliance in the vSphere Web Client. When the additional resources become available, the VM will be reconfigured using those resources to bring it into compliance. vSAN automatically enforces the policy once the resources are available; no steps are required by the administrator to initiate this process. You should be aware by now that deploying VMs that do not have their requirements met via the use of force provisioning can be dangerous and may result in an unavailable VM should a failure occur in the cluster.

What you will have noticed is that there are a few behaviors with VM storage policies that might not be intuitive, such as the default policy settings, the fact that a number of failures to tolerate set to 1 is implicitly included in a policy, and that some virtual storage objects implement only some of the policy settings. Chapter 5 explained these nuances in greater detail.
