Considerations with 6G LSI MegaRAID with RAID 0 & vSAN

Carl Liebich
6 min read · Jan 5, 2017


A while back I was tasked with configuring a cluster of Cisco UCS C2xx M3 servers for VMware vSAN 6.0 U2. As the UCS C2xx M3 series uses the 9271CV-8 controller, there are a few specific configuration requirements, so I thought I would document the process in this guide.

Although this article is very specific to UCS, the StorCLI commands and methodology are much the same for pretty much all of the MegaRAID controllers listed on the HCL as RAID 0 certified devices.

1. Prerequisites

  • RAID controller is on the HCL (in this deployment the UCS-RAID9271CV-8I)
  • Controller firmware matches the HCL (23.29.0-0014 at the time of writing, link here)
  • Cache SSD drives are on the HCL
  • Data drives are on the HCL
  • As good practice, update the firmware on the SSDs and HDDs if applicable.
  • Do not perform any RAID configuration from the controller BIOS or CIMC. We will take care of this from the ESXi host side as it's much faster to provision.
  • ESXi is installed and fully patched. I recommend checking the ESXi patch tracker to see if you are running the latest version (https://esxi-patches.v-front.de/ESXi-6.0.0.html); a quick way to check the running build from the shell is sketched after this list.
  • SSH enabled on the host
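A couple of these prerequisites can be checked straight from the ESXi Shell (or the local console). A minimal sketch using standard ESXi commands, nothing specific to this controller:

# Report the exact ESXi version and build to compare against the patch tracker
esxcli system version get

# Enable and start the SSH service if it is not already running
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh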

2. Virtual Disk Creation

You will need to download the StorCLI utility so we can manage the RAID controller from the ESXi shell. Go to LSI's... I mean Avago's... no wait, now Broadcom's website to download the latest StorCLI package.

2.1 Installing StorCLI

  • Download the latest StorCLI from Broadcom’s website. The zip file covers every OS.
  • Extract the zip file and copy the .vib located under the Vmware-NDS folder onto the ESXi host (e.g. into /tmp).
  • Install the vib package
esxcli software vib install -v /tmp/vmware-esx-storcli-1.20.15.vib --no-sig-check
  • Note: It seems Broadcom/Avago are not signing their VIBs, so I had to bypass the signature check (a quick way to confirm the VIB installed correctly is sketched at the end of this section).
  • Once installed (no need to reboot) you can navigate to the /opt/lsi/storcli directory to find the executable. We will need to retrieve the controller ID by typing in:
./storcli show all
Highlighted in red is the controller ID
  • Then we will need to list the physical disks attached to the controller
./storcli /c0 show all
the "/c[Controller ID] show all" will give you a list of the physical disks with their Enclosure and Slot IDs
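For completeness: getting the .vib onto the host and confirming the install can both be done over SSH. A minimal sketch (the hostname is a placeholder, and the exact VIB name may differ by StorCLI release):

# From your workstation, copy the VIB to the host once SSH is enabled
scp vmware-esx-storcli-1.20.15.vib root@esxi-host:/tmp/

# On the ESXi host, confirm the StorCLI package is listed after installation
esxcli software vib list | grep -i storcli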

2.2 Virtual Drive for Cache Drive

So for each disk we want to add to vSAN that appears in the physical disk list (PD), we now need to create a RAID 0 virtual drive (VD). Below we will create our first RAID 0 for the cache SSD.

  • At the shell, simply type the command below, making sure the enclosure ID and slot ID match the drive of your cache device (e.g. drive=252:0)
./storcli /c[Controller ID] add vd type=raid0 name=[Name of Virtual Disk] drive=[Enclosure ID:Slot ID] nora wt direct strip=256
Creating the first RAID 0

With the above command we are creating a new virtual drive using the disk in slot 0. We are also telling the virtual drive not to use any cache for either read or write operations (nora = no read ahead, wt = write through, direct = direct I/O).
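As a concrete illustration, assuming the controller ID is 0 and the cache SSD sits at enclosure 252, slot 0 (substitute the values from your own "show all" output), the command and a quick sanity check would look something like this:

# Create the RAID 0 virtual drive for the cache SSD (assumed IDs: c0, 252:0)
./storcli /c0 add vd type=raid0 name=CACHE-SSD drive=252:0 nora wt direct strip=256

# List all virtual drives on the controller to confirm it was created
./storcli /c0 /vall show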

The virtual drive will now be visible in vSphere; it usually takes about a minute to appear. If not, try doing a rescan.
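The rescan can also be triggered from the same SSH session; this is a standard esxcli call rather than anything specific to this setup:

# Rescan all storage adapters so the new virtual drive shows up
esxcli storage core adapter rescan --all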

The disk will automatically be named by the ESXi host, in this case "Local LSI Disk" followed by the disk ID.

Important: Read the next section before creating the second virtual drive

2.3 Renaming the disk in vSphere (Optional)

Using vSAN with RAID 0 also adds a layer of complexity from an operations perspective. As ESXi cannot see the physical disks, good documentation becomes critical when disk failures occur so you can quickly and easily identify and locate the failed disk. When the disk is added to vSphere you will only see a generic name (e.g. Local LSI Disk) and the disk ID, and associating the ESXi disk ID with the virtual disk on the RAID controller can be time consuming. Although this step is not required, it does make things a lot more manageable from an ESXi perspective.

One approach to simplifying the identification of the disk in vSphere is to rename the local disk to the serial number of the physical drive. I have also appended the name of the virtual drive in my example, but please use a naming convention that works for your environment.

  • We can retrieve the serial number of the physical drive by using StorCLI command below.
./storcli /c[Controller ID] /e[Enclosure ID] /s[Slot ID] show all | grep 'SN =\|Model Number'
Retrieving the serial number of a physical disk
  • Then open the vSphere web client and go to the ESXi Host -> Manage -> Storage -> Storage Devices.
  • Select the disk then click on the rename icon pictured below
  • In the format I used, I simply replaced "Local LSI Disk" with "[Virtual Drive Name]-[Drive Serial Number]".
Retrieving the serial number and renaming the disk
  • Repeat this process every time you create a new virtual drive. Although time consuming, you will be able to quickly identify from vSphere which virtual drive and serial number is having issues (a shell-side sketch for cross-referencing the identifiers follows this list).
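If you prefer to stay in the shell for the cross-referencing, here is a small sketch assuming controller 0, enclosure 252 and slot 0 (adjust to your own IDs); the esxcli call simply lists the display names ESXi has assigned:

# Pull the serial number and model of the physical drive behind the virtual drive
./storcli /c0 /e252 /s0 show all | grep 'SN =\|Model Number'

# On the ESXi side, list the display names of the local devices you will be renaming
esxcli storage core device list | grep -i "Display Name: Local"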

2.4 Virtual Drive for the Data Drive (HDD in Hybrid Mode)

  • The command for adding a magnetic drive to vSAN is slightly different. To add a magnetic disk to vSAN:
./storcli /c[Controller ID] add vd type=raid0 name=[Name of Virtual Disk] drive=[Enclosure ID:Slot ID] ra wt direct strip=256
  • Once completed follow the steps above to retrieve the serial number and rename the disk in vSphere.

As you can see, the read cache setting is now set to "Read Ahead" (ra) to improve read performance of the magnetic disks, but write cache is still disabled.
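In a hybrid disk group you will usually repeat this for several capacity HDDs, so a tiny loop saves some typing. A sketch only, assuming controller 0, enclosure 252 and HDDs in slots 1 through 7 (match the slot numbers to your own physical disk list):

# Create one RAID 0 virtual drive per capacity HDD in slots 1-7 (assumed layout)
for SLOT in 1 2 3 4 5 6 7; do
  ./storcli /c0 add vd type=raid0 name=DATA-HDD-${SLOT} drive=252:${SLOT} ra wt direct strip=256
done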

2.5 Set the Cache drive to SSD

vSphere will not be able to identify which drive is the SSD because of the RAID encapsulation, and all the virtual drives created will default to HDD. To change the cache drive to an SSD in vSphere, follow the steps below:

  • Open the vSphere web client and go to the ESXi Host -> Manage -> Storage -> Storage Devices.
  • Select the disk that is your cache device and press the “[F]” icon then select “yes”.
Setting the cache drive as a SSD in vSphere
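If you would rather do this from the shell, VMware's documented approach is to add a SATP claim rule with the enable_ssd option and then reclaim the device. A sketch only, with naa.xxxxxxxx standing in for your cache device's identifier:

# Tag the device as SSD via a SATP claim rule (naa.xxxxxxxx is a placeholder)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxx --option="enable_ssd"

# Reclaim the device so the new rule takes effect, then verify the flag
esxcli storage core claiming reclaim -d naa.xxxxxxxx
esxcli storage core device list -d naa.xxxxxxxx | grep "Is SSD"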

3. Conclusion

I guess after reading this you're probably wondering how to do this at scale? I'm sure you can rename those disks using esxcli, but the truth of the matter is: don't. vSAN works best with pass-through controllers where ESXi has full visibility and control of the disks. If you are planning a vSAN deployment on UCS, try to purchase the M4 series as the 12G card supports JBOD mode.

Well, that should be it! You're now ready to create a cluster, add your hosts, and enable vSAN in hybrid mode. I hope this article was of use for your deployment.

Happy vSANing everyone!
