HPE to acquire Nimble Storage

Nimble Storage Signs Agreement to Be Acquired by Hewlett Packard Enterprise Company, to accelerate the global adoption of Nimble Storage’s Innovative Predictive Analytics and leading next-generation flash storage platform

By Suresh Vasudevan
CEO, Nimble Storage

Today is a significant milestone in our company history, as we have signed a definitive agreement to be acquired by Hewlett Packard Enterprise Company (“HPE”).

When we were founded in 2007, our vision was that we could leverage flash and cloud-based predictive analytics to eliminate infrastructure constraints and accelerate applications.  We believed that we could build a thriving customer community through products that exceeded expectations, integrity in every customer interaction, and by delivering an unmatched support experience. Perhaps most significantly, our aspiration was to bring together an incredibly talented pool of people under one roof working collaboratively as a team to take on audacious goals.

As we look back, we are proud of our accomplishments.  We have built a base of 10,000 customers in under 7 years of shipping products.  "Six 9s" of measured availability and predictive support driven by InfoSight have resulted in an extremely happy customer base, as reflected by our Net Promoter Score (NPS) of 85.  Numerous industry awards reflect the long list of innovations and "industry firsts" as part of our Predictive Cloud Platform. We were named a Leader in Gartner's Magic Quadrant for General Purpose Storage Arrays for the second consecutive year.

As proud as we are of what we have accomplished, we face a challenge of scale and significant exposure as a standalone storage company. Our aspiration has always been to be an innovation leader, and see our technology deployed in organizations around the globe.  But, as we weighed the opportunities and risks, we concluded that an acquisition makes sense at the right price with the right partner. We believe we’ve found both.

Through numerous discussions with HPE’s leadership, it became clear that HPE is a great partner. We believe that the combination of Nimble Storage and HPE creates an industry leader in the fast growing flash storage market, with predictive analytics providing an unmatched operational and support experience.

  • Predictive Analytics. We are excited about extending the power of InfoSight beyond the Nimble Storage product line, to span the entire storage portfolio of the combined company.
  • Global distribution reach. With HPE’s massive scale and global distribution, we see an opportunity to increase our customer base by an order of magnitude within a few years. Our enterprise business outpaced our overall growth last year, driven by the superior value proposition of our platform.  With HPE’s large enterprise relationships, we anticipate further acceleration in this segment.
  • Continued storage innovation. We have an exciting roadmap that goes well beyond flash arrays and our recently announced Nimble Cloud Volumes. The ability to invest behind that roadmap is significantly strengthened by HPE’s scale and financial strength.
  • Continued commitment to customer support. Our reputation for support has been key to our success, and the combined company remains committed to that support experience. The expansion of InfoSight Predictive Analytics adoption, along with a maniacal focus on customer success, will preserve the current proactive support experience.

I would personally like to thank our more than 10,000 devoted customers, strategic alliance partners, and channel ecosystem partners – all of whom have rallied behind our technology and differentiated support experience.  As we look ahead, we are confident that by combining Nimble Storage’s technology leadership with HPE’s global distribution strength, strong brand, and enterprise relationships, we’re creating expansion opportunities for the combined company.

We remain steadfastly focused on our mission of enabling applications to perform without disruption.

Additional Information and Where to Find It

The tender offer for the outstanding shares of Nimble Storage common stock (the “Offer”) has not yet commenced.   This communication  is for informational purposes only and is neither an offer to purchase nor a solicitation of an offer to sell shares, nor is it a substitute for the tender offer materials that Hewlett Packard Enterprise Company (“HPE”) and a subsidiary of HPE (“Merger Sub”) will file with the U.S. Securities and Exchange Commission (the “SEC”).   At the time the tender offer is commenced, HPE and Merger Sub will file tender offer materials on Schedule TO, and thereafter Nimble Storage will file a Solicitation/Recommendation Statement on Schedule 14D-9, with the SEC with respect to the Offer.   THE TENDER OFFER MATERIALS (INCLUDING AN OFFER TO PURCHASE, A RELATED LETTER OF TRANSMITTAL AND CERTAIN OTHER TENDER OFFER DOCUMENTS) AND THE SOLICITATION/RECOMMENDATION STATEMENT WILL CONTAIN IMPORTANT INFORMATION.   HOLDERS OF SHARES OF NIMBLE STORAGE COMMON STOCK ARE URGED TO READ THESE DOCUMENTS CAREFULLY WHEN THEY BECOME AVAILABLE (AS EACH MAY BE AMENDED OR SUPPLEMENTED FROM TIME TO TIME) BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION THAT HOLDERS OF SHARES OF NIMBLE STORAGE COMMON STOCK SHOULD CONSIDER BEFORE MAKING ANY DECISION REGARDING TENDERING THEIR SHARES.   The Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, will be made available to all holders of shares of Nimble Storage’s common stock at no expense to them.  The tender offer materials and the Solicitation/Recommendation Statement will be made available for free at the SEC’s website at http://www.sec.gov.   Additional copies of the tender offer materials may be obtained for free by directing a written request to Nimble Storage, Inc., 211 River Oaks Parkway, San Jose, California 95134, Attn: Investor Relations, or by telephone at (408) 514-3475.

In addition to the Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, HPE and Nimble Storage file annual, quarterly and current reports and other information with the SEC.  You may read and copy any reports or other information filed by HPE or Nimble Storage at the SEC public reference room at 100 F Street, N.E., Washington, D.C. 20549.  Please call the SEC at 1-800-SEC-0330 for further information on the public reference room.  HPE’s and Nimble Storage’s filings with the SEC are also available to the public from commercial document-retrieval services and at the SEC’s website at http://www.sec.gov.

Forward-Looking Statements

This document contains forward-looking statements within the meaning of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995.  Such statements involve risks, uncertainties and assumptions.  If such risks or uncertainties materialize or such assumptions prove incorrect, the results of Nimble Storage and its consolidated subsidiaries could differ materially from those expressed or implied by such forward-looking statements and assumptions.  All statements other than statements of historical fact are statements that could be deemed forward-looking statements, including any statements regarding the expected benefits and costs of the Offer, the merger and the other transactions contemplated by the definitive agreement relating to the acquisition of Nimble Storage by HPE; the expected timing of the completion of the Offer and the merger; the ability of HPE, Merger Sub and Nimble Storage to complete the Offer and the merger considering the various conditions to the Offer and the merger, some of which are outside the parties’ control, including those conditions related to regulatory approvals; any statements of expectation or belief; and any statements of assumptions underlying any of the foregoing.  Risks, uncertainties and assumptions include the possibility that expected benefits may not materialize as expected; that the Offer and the merger may not be timely completed, if at all; that, prior to the completion of the transaction, Nimble Storage’s business may not perform as expected due to transaction-related uncertainty or other factors; that the parties are unable to successfully implement integration strategies; and other risks that are described in Nimble Storage’s SEC reports, including but not limited to the risks described in Nimble Storage’s Annual Report on Form 10-K for its fiscal year ended January 31, 2016.  Nimble Storage assumes no obligation and does not intend to update these forward-looking statements.

Nimble PowerShell Toolkit

I was working on an internal project to test the performance of a converged system solution.  The storage component is a Nimble AF7000 from which we're presenting almost 30 LUNs, and I've had to create, delete and re-provision them several times throughout the project.  It became extremely tedious to do this through the WebUI, so I decided to see if it could be scripted.

I know you can log into the Nimble via SSH and do most of this there, and I did test that with success.  However, I recently had a customer who wanted to use PowerShell to perform some daily snapshot/clone operations for an Oracle database running on Windows (don't ask).  We decided to leverage the Nimble PowerShell Toolkit to perform the operations right from the Windows server.  The script was fairly straightforward, although we had to learn a little about PowerShell syntax along the way.  I've included a sanitized script below that does what I need.

$arrayname = "IP address or FQDN of array management address"
$nm_uid = "admin"
$nm_password = ConvertTo-SecureString -String "admin" -AsPlainText -Force
$nm_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $nm_uid,$nm_password
$initiatorID = Get-NSInitiatorGroup -name {name of initiator group} | select -expandproperty id

# Import Nimble Tool Kit for PowerShell
import-module NimblePowerShellToolKit

# Connect to the array
Connect-NSGroup -group $arrayname -credential $nm_cred

# Create 10 DATA Disks
for ($i=1; $i -le 10; $i++) {
    New-NSVolume -Name DATADISK$i -Size 1048576 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name DATADISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}

# Create 10 RECO Disks
for ($i=1; $i -le 10; $i++) {
    New-NSVolume -Name RECODISK$i -Size 1048576 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name RECODISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}

# Create 3 GRID Disks
for ($i=1; $i -le 3; $i++) {
    New-NSVolume -Name GRIDDISK$i -Size 2048 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name GRIDDISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}
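
After the volumes are created, a quick sanity check is nice to have. Below is a small sketch using the same toolkit cmdlets (adjust the name pattern if your volume names differ):

# List the volumes we just created and confirm they came online
Get-NSVolume | Where-Object { $_.name -match '^(DATA|RECO|GRID)DISK' } | Select-Object name, size, online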

I also wrote a script, shown below, to delete the LUNs:

$arrayname = "IP address or FQDN of array management address"  
$nm_uid = "admin"
$nm_password = ConvertTo-SecureString -String "admin" -AsPlainText -Force
$nm_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $nm_uid,$nm_password
$initiatorID = Get-NSInitiatorGroup -name {name of initiator group} | select -expandproperty id

# Import Nimble Tool Kit for PowerShell
import-module NimblePowerShellToolKit

# Connect to the array 
Connect-NSGroup -group $arrayname -credential $nm_cred


# Delete 10 DATA Disks
for ($i=1; $i -le 10; $i++) {
    Set-NSVolume -name DATADISK$i -online $false
    Remove-NSVolume -name DATADISK$i
}

# Delete 10 RECO Disks
for ($i=1; $i -le 10; $i++) {
    Set-NSVolume -name RECODISK$i -online $false
    Remove-NSVolume -name RECODISK$i 
}

# Delete 3 GRID Disks
for ($i=1; $i -le 3; $i++) {
    Set-NSVolume -name GRIDDISK$i -online $false
    Remove-NSVolume -name GRIDDISK$i 
}

Obviously you'll have to substitute your own values for $arrayname, $nm_uid, $nm_password and the initiator group name (make sure you remove the {}'s when you put your value in). Hard-coding the password like this is very insecure, but it was a quick and dirty solution at the time. There are ways to store an encrypted copy of the password in a protected text file and read it back into a variable. Or, if you don't mind being interactive, you can skip providing the credentials and you'll be prompted for them every time the script runs.
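
For example, one common approach (a minimal sketch; the file path is just a placeholder) is to save an encrypted copy of the password once, then have the provisioning script read it back into a credential:

# One-time step: prompt for the array password and save it encrypted
# (protected with DPAPI, so only this user on this machine can decrypt it)
Read-Host -Prompt "Array password" -AsSecureString | ConvertFrom-SecureString | Out-File "C:\scripts\nimble_pass.txt"

# In the provisioning script: rebuild the credential from the encrypted file
$nm_uid = "admin"
$nm_password = Get-Content "C:\scripts\nimble_pass.txt" | ConvertTo-SecureString
$nm_cred = New-Object System.Management.Automation.PSCredential($nm_uid,$nm_password)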

This made the project go a lot faster, and hopefully you can use it as a model for scripts that do other things. The toolkit exposes essentially the entire command set of the Nimble array, so there's not much you can do in the WebUI that you can't do here. When you download the toolkit, there is a README PDF that goes through all the commands. From within PowerShell, you can also get help for each of the commands. For example:

PS C:\Users\esteed> help New-NSVolume

NAME
    New-NSvolume

SYNOPSIS
    Create operation is used to create or clone a volume. Creating volumes requires name and size attributes. Cloning
    volumes requires clone, name and base_snap_id attributes where clone is set to true. Newly created volume will not
    have any access control records, they can be added to the volume by create operation on access_control_records
    object set. Cloned volume inherits access control records from the parent volume.


SYNTAX
    New-NSvolume [-name] <String> [-size] <UInt64> [[-description] <String>] [[-perfpolicy_id] <String>] [[-reserve]
    <UInt64>] [[-warn_level] <UInt64>] [[-limit] <UInt64>] [[-snap_reserve] <UInt64>] [[-snap_warn_level] <UInt64>]
    [[-snap_limit] <UInt64>] [[-online] <Boolean>] [[-multi_initiator] <Boolean>] [[-pool_id] <String>] [[-read_only]
    <Boolean>] [[-block_size] <UInt64>] [[-clone] <Boolean>] [[-base_snap_id] <String>] [[-agent_type] <String>]
    [[-dest_pool_id] <String>] [[-cache_pinned] <Boolean>] [[-encryption_cipher] <String>] [<CommonParameters>]


DESCRIPTION
    Create operation is used to create or clone a volume. Creating volumes requires name and size attributes. Cloning
    volumes requires clone, name and base_snap_id attributes where clone is set to true. Newly created volume will not
    have any access control records, they can be added to the volume by create operation on access_control_records
    object set. Cloned volume inherits access control records from the parent volume.


RELATED LINKS

REMARKS
    To see the examples, type: "get-help New-NSvolume -examples".
    For more information, type: "get-help New-NSvolume -detailed".
    For technical information, type: "get-help New-NSvolume -full".

You can also add the -detailed parameter at the end to get a more complete description of each option, or -examples to see the commands used in real-world situations. Have fun!

Using VVOLs with vSphere 6 and Nimble

VMware Virtual Volumes (VVOLs) represent a major paradigm shift from the way storage is consumed in VMware environments today.

Below is a short 5 minute video that explains the basic concept of VVOLs.

 

Additionally, knowing the difference between communicating with LUNs as in the old world and communicating with PEs (Protocol Endpoints) is crucial to understanding what VVOLs bring to the table and why.

In short, a PE is a special LUN on the storage array that the ESXi server uses to communicate with the array.  It's not a LUN in the traditional sense, but more of a logical gateway for talking to the array; in some ways it's similar in function to a gatekeeper LUN on an EMC array.  That PE in turn maps to the multiple sub-volumes that make up a VM's individual storage components (.vmdk, .vswp, .vmsd, .vmsn and so on).  When the host wants to talk to one of those volumes, it sends the request to the address of the PE "LUN" along with an offset address for the actual volume on the storage array.  Two things immediately came to mind once I understood this concept:

  1. Since all communication related to the sub-volumes is a VASA function, what happens when vCenter craps the bed?
  2. If I only have 1 PE, isn’t that going to be a huge bottleneck for storage I/O?

The answers to these and other questions are handily dealt with in a post by VMware vExpert Paul Meehan.  Again, the short version is that vCenter is not needed after the host boots and gets its information on PEs and address offsets; it IS needed, however, during a host boot.  Secondly, I/O traffic actually goes through the individual volumes, not the PE.  Remember, the PE is a logical LUN that serves as a map to the actual volumes underneath.

This brings me to the next video: understanding PEs.  The link starts about 12 minutes into an hour-long presentation, where PEs are discussed.  Feel free to watch the entire video if you want to learn more!

 

Finally, let's walk through how to set up VVOLs on your Nimble array.  There are a few prerequisites before you can start:

  • NOS version 3.x or newer
  • vSphere 6.x or newer

Here’s the step by step process:

  1. Connect to web interface of local array
  2. Click on Administration -> VMware integration
  3. Fill in the following information
    • vCenter Name (this can be a vanity name; it doesn't have to be the address of the host)
    • Choose the proper subnet on your Nimble array to communicate with vCenter
    • vCenter Host (FQDN or IP address)
    • Credentials
    • Check Web Client and VASA Provider
    • Click Save (This registers vCenter with the storage array and installs the VASA 2.0 provider)
  4. Navigate to Manage -> Storage Pools
  5. Select the Pool in which you want to create the VVOLs (for most sites this will be default)
  6. Click New Folder
  7. Change the Management Type to VMware Virtual Volumes
  8. Give the folder a Name and Description
  9. Set the size of the folder
  10. Choose the vCenter that you registered above, then click Create

Now you have a storage container on the Nimble array that you can use to create VVOLs.  Let’s look at the VMware side now:

  1. Connect to the vSphere web client for your vCenter 6 instance (this will not work with the thick client)
  2. Navigate to Storage and highlight your ESX server
  3. Click on Datastores on the right side of the window
  4. Click on the icon to create a new datastore
  5. Select your location (datacenter) then click next
  6. Select type VVOL then click next
  7. You should see at least one container; click next.  If not, try rescanning your HBAs in the web client and start again
  8. Assign which host(s) will need access to the VVOL and click next
  9. On the summary screen, click finish
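
If you'd rather double-check from the command line instead of the web client, a quick PowerCLI query along these lines should also show the new container (a sketch; the vCenter address is a placeholder, and the exact value reported in the Type column can vary between PowerCLI versions):

# Connect to vCenter and list datastores; the new VVOL container should appear in the output
Connect-VIServer -Server vcenter.example.local
Get-Datastore | Select-Object Name, Type, CapacityGB, FreeSpaceGB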

You should now see a new datastore.  Now let’s create a VM in the datastore and see what it looks like in the Nimble web interface!

  1. In vCenter, navigate to hosts and clusters
  2. Right click on your host to create a new virtual machine
  3. Click next under creation type to create a new virtual machine
  4. Give the VM a name, select the location where it should be created and click next
  5. Select the VVOL no requirements policy under VM storage policy
  6. Select the VVOL datastore that is compatible and click next
  7. Select ESXi 6.0 and later under the VM compatibility drop-down and click next
  8. Choose the appropriate guest OS family and version then click next
  9. Adjust the virtual hardware to meet your needs and click next
  10. At the summary screen, verify all settings are correct and click Finish

Now if you navigate to Manage -> Volumes in your Nimble web interface, you will see multiple volumes for each VM you created.  Instead of putting all the .vmdk, .vmx, .vswp and other files inside a single datastore on a single LUN, each object is its own volume.  This is what allows you to set performance policies on a per-VM basis, because each volume can be treated differently; you can set a high-performance policy on your production VMs and a low-performance policy on dev/test, for example.  Normally you would have to split your VMs into separate datastores and manage the performance policies at the datastore level, and even then you still have no visibility into each VM in that datastore at the storage layer.  With VVOLs, you can see latency, throughput and even noisy-neighbor information on a per-VM basis in the Nimble web interface!

 

Nimble CASL filesystem overview


As a Nimble partner, Collier IT has worked with a number of customers to install, configure and deploy Nimble arrays.  We also have a CS-300 and a CS-500 in our lab, so I get to use them on a daily basis.  I wanted to take some time to give a quick overview of how Nimble's CASL filesystem works and how it differentiates itself from other storage vendors' technologies.

 

These days, you can't throw a rock in the storage industry without hitting a dozen new vendors a month touting their flashy new storage array and how it's better than all the others out there.  I've used a lot of them, and they all seem to share some common pitfalls:

 

 


  • Performance drops as the array fills up, sometimes at only 80% of capacity or less!

Nobody in their right mind would intentionally push their SAN up to 100% utilization, but it's not that hard to lose track of thin provisioning and how much actual storage you have left.  As most storage arrays get close to full capacity, they start to slow down.  The reason comes down to how the filesystem architecture handles free space and how it writes to disk.  Consider NetApp's WAFL filesystem, for example.  It is based on the write-in-free-space (WIFS) concept: as new blocks are written, instead of overwriting blocks in place (which adds a TON of overhead), the write is redirected to a new location on disk and laid down as a full stripe across the disk group.  This allows for fast sequential reads because the blocks are laid out in a very contiguous manner.  Once the amount of free space starts to diminish, though, what's left is scattered across different locations on the array; it's not contiguous anymore, significantly more time is spent seeking for reads and writes, and the array slows down.

One of the big benefits of the CASL filesystem is that, even though it is also a WIFS filesystem, it does not fill holes.  Instead, it uses a lightweight background sweeping process to consolidate the holes into contiguous free space for full-stripe writes.  The filesystem is designed from the ground up to run these sweeps very efficiently, and it also utilizes flash to speed the process up even more.  This allows the array to ALWAYS write in full stripes.  What's more, CASL is also able to combine blocks of different sizes into a full stripe, which gives you a very low-overhead way of performing compression inline without slowing the array down.

 

 


  • Write performance is inconsistent, especially when dealing with a lot of random write patterns

The CASL filesystem was designed from the ground up to deal with one of the Achilles' heels of most arrays: the cache design.  Typical arrays employ flash or DRAM as a cache mechanism for fast write acknowledgement.  This is all well and good until you hit a lot of random reads and writes over a sustained period of time at a high rate of throughput, and that describes most I/O workloads now that virtualization and storage consolidation are the norm.  We aren't just streaming video anymore, folks; we're updating dozens of storage profiles simultaneously, each with its own read and write characteristics.  The problem with other storage arrays' caching mechanisms is that once this sustained load reaches the point where the controller(s) can't flush the cache to spinning disk as fast as the data is coming in, you get throttled.

Nimble has a different approach to caching that was designed from the ground up to be not only scalable but media agnostic.  It doesn’t matter if you’re writing to spinning disk or SSD.  Here’s a quick breakdown:

  1. Write is received at storage array and stored in the active controller’s NVRAM cache
  2. Write is mirrored to standby controller’s NVRAM
  3. At this point, the write is acknowledged
  4. Write is shadow copied to DRAM
  5. While the data is in DRAM, it is analyzed for compression.  If the data is a good candidate for compression, the array determines the best compression algorithm to use for that type of data and it is compressed.  If it’s not a good candidate for compression (JPG for example) then it will not be compressed at all.
  6. Data is grouped into stripes of 4.5 MB and then written to disk in a full RAID stripe write operation

The big performance benefits here come mainly from the reduction in I/O to spinning disk and from targeted inline compression.  This is achieved because the array isn't blindly flushing data to disk as the cache fills up; instead it analyzes the writes in memory, compresses them inline and writes them out to disk in a much more efficient manner.  The compression leverages the processing power on the array, which is capable of compressing at 300 MB/sec per core or faster.  As a result, you see orders of magnitude fewer IOPS from the controller to the disks, thanks both to the compression and to the way data is written: what would have been maybe 1,000 IOPS can be reduced to as little as 10 IOPS in some cases!  This is why Nimble doesn't have to spend a lot of money on 15K or even 10K SAS drives on the back end.
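
To put rough numbers on that (purely illustrative, and assuming a typical 4 KB write size): a 4.5 MB stripe holds roughly 1,150 blocks of 4 KB each, so a thousand-plus small random writes that would each have hit the back end on their own can be coalesced into a single full-stripe write, and inline compression shrinks the stripe further before it ever reaches disk.  That kind of coalescing is what's behind the 1,000-to-10 IOPS figure above.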

To protect against data loss before the data is written to disk, both controllers have supercapacitors that keep the contents of NVRAM safe until power is restored, at which point the data is written to disk.  Redundant controllers also guard against data corruption or loss in the event of a primary controller failure.


  • Poor SSD write life

A common problem ever since SSDs came about is that they eventually "wear out": the NAND flash substrate can only sustain a finite number of erase cycles before it becomes unusable.  Without getting into all the details of write amplification, garbage collection and flash cell degradation, understand that the less you write to an SSD, the better off it generally will be.  And given how typical arrays use SSD as a cache layer, there is inherently a lot of writing.

As I described earlier when talking about write performance, Nimble designed its filesystem from the ground up to minimize the amount of writes to SSD, and to disk in general.  A side benefit is that they also don't need to use more expensive SLC SSDs in their arrays, since far fewer writes land on the flash.


  • My read performance sucks, especially with random reads

Typical storage arrays employ multiple caching layers to help boost read performance.  The worst-case scenario is having to read all data from slow spinning disk; even the fastest 15K SAS drives can only sustain about 150-170 IOPS per drive.  So the standard drill is this: when a read request comes in, the cache layer is queried for the data, and if it exists there and hasn't been modified, it is sent to the client.  This is the fastest read operation.  Next you go to the secondary cache, typically SSD(s); the same thing happens, and if the data is there it's read from the slower SSD and served up to the client.  Finally, if the data isn't cached, or has changed since it was cached, you experience a "cache miss" and the data is read from slow spinning disk.

Nimble is smarter about how it handles caching.  First NVRAM is checked for the data, then DRAM.  Flash cache is the next step; if the data is found there, it is checksummed and uncompressed on the fly, then returned.  Finally, spinning disk serves up any data that isn't in cache at all.  The beautiful thing about CASL is that it keeps track of read patterns and decides whether the data that was just served up from disk should be held in a higher-level cache.

I haven’t talked about all of the technologies that CASL employs let alone some of the other benefits of owning Nimble storage.  Suffice it to say I’m excited about the future of Nimble.

OVM Disaster Recovery In A Box (Part 4 of 5)

Now that you've touched a file inside the VM, we have a way to prove that the VM that gets replicated to the other side is actually the one we created.  Apparently in my case, faith is overrated.

 

Now that I've fire-hosed a TON of information at you on how to set up your virtual PROD and DR sites, this is a good breaking point to talk a little about how the network looks from a 10,000-foot view.  Here's a really simple diagram that should explain how things work.  And when I say simple, we're talking crayon art here, folks.  Really, does anyone have a link to any resources on the web or in a book that could help a guy draw better network diagrams?  OK, I digress... here's the diagram:

OVM DR Network Diagram

 

One of the biggest takeaways from this diagram highlights something that a LOT of people get confused about: in OVM DR, you do NOT replicate OVM Manager, the pool filesystem or the OVM servers to the DR side.  In other words, you don't replicate the operating environment, only the contents therein (i.e. the VMs, via their storage repositories).  You basically have a complete implementation of OVM at each location, just as if each were a standalone site; the only difference is that some of the repositories are replicated.  The only other potential difference (and I don't show it or deal with it in my simulation) is raw LUNs presented to the VMs, which would have to be replicated at the storage layer as well.

 

I haven't cluttered the diagram with the VM or storage networks; you know they're there and serving their purpose.  You can see that replication is configured between the PROD repo LUN and a LUN in DR.  This would be considered an Active/Passive DR solution.  I don't show it in this scenario, but you could also have some workloads running at the DR site that aren't replicated back to PROD.  Some companies might have a problem shelling out all that money for infrastructure at the DR site only to have it sit unused until a DR event occurs; those companies might decide to run some of their workload at the DR site and have PROD serve as its DR.  In this Active/Active scenario your workflow would be pretty much the same, there are just more VMs and repositories at each site, so you need to be careful and plan well.  Here is what an Active/Active configuration would look like:

OVM DR Network Diagram active active

 

Again, my article doesn't touch on Active/Active, but you could easily apply what you learn in these five articles to accommodate an Active/Active configuration.  As a reminder, we'll be focusing on Active/Passive.  We now have a virtual machine running in PROD to facilitate our replication testing.  Make sure the VM runs and can ping the outside network so we know we have a viable machine.  Don't expect lightning performance either; we're running a VM inside a VM which is inside of a VM, which is not exactly recommended for production use.  OK, to be clear: DO NOT use this as your production environment.  There, all the folks who ignore the warnings on hair dryers about using them in the shower should be covered now.

 

Below are the high-level steps used to fail over to your DR site.  Once you've accomplished this, remember failback: most people are so excited about getting the failover to work that they forget they'll have to fail back at some point once things have been fixed in PROD.

 

FAILOVER (this works whether you're doing a controlled failover or a real failure has occurred at PROD):

  • Ensure all PROD resources are nominal and functioning properly
  • Ensure all DR resources are nominal and functioning properly
  • Ensure replication between PROD and DR ZFS appliances is in place and replicating
  • on ZFSDR1, Stop replication of PROD_REPO
  • on ZFSDR1, Clone PROD_REPO project to new project DRFAIL
  • Rescan physical disk on ovmdr1 (may have to reboot to see new LUN)
  • Verify new physical disk appears
  • Rename physical disk to PROD_REPO_FAILOVER
  • Take ownership of replicated repository in DR OVM Manager
  • Scan for VMs in the unassigned VMs folder
  • Migrate the VM to the DR pool
  • Start the VM
  • Check /var/tmp/ and make sure you see the ovmprd1 file that you touched when it was running in PROD.  This proves that it’s the same VM
  • Ping something on your network to establish network access
  • Ping or connect to something on the internet to establish external network access

 

FAILBACK:

  • Ensure all PROD resources are nominal and functioning properly
  • Ensure all DR resources are nominal and functioning properly
  • Restart replication in the opposite direction from ZFSDR1 to ZFSPRD1
  • Ensure replication finishes successfully
  • Rescan physical disks on ovmprd1
  • Verify your PROD Repo LUN is still visible and in good health
  • Browse the PROD Repo and ensure your VM(s) are there
  • Power on your VMs in PROD and ensure that whatever data was modified while in DR has been replicated back to PROD successfully.
  • Ping something on your network to establish network access
  • Ping or connect to something on the internet to establish external network access

 

Now that we’ve shown you how all this works, I’ll summarize in part 5.

Resize OVM Repository on the fly

A quick three-step process for resizing your repositories (increase only; no shrinking).

 

  1. resize LUN on your storage array
  2. rescan physical disks on each OVM server and verify the new LUN size is reflected
  3. log into each OVM server and run the following command against your repository LUN

# tunefs.ocfs2 -S /dev/mapper/{UUID of repository LUN}

NOTE: you can get the path needed for the command above from OVM Manager.  Highlight the repository and select the info perspective.  It will show you the path right on that screen!

Once you run the command above, go back into OVM Manager and verify that your repository has resized.  I've tested this process in my sandbox lab running OVM 3.3.2; however, be careful and test in your own environment before doing this in production.

The maximum OCFS2 volume size is 64TB.  I don't know whether that is also the maximum repository size, but I don't see anything to the contrary so far, so I'm going with it for now.

Map Physical Disks in OVM to their Page83 id

Here's a quick awk script that maps the output of "list PhysicalDisk" in the OVM CLI to show each physical disk's name and its Page83 ID.  Run /tmp/doit on your OVM Manager server (remember to make the script executable first).

/tmp/doit:

#!/bin/bash
output=`ssh -l admin localhost -p 10000 "list physicaldisk"`

# take output of OVM CLI command "list physicaldisk", normalize the field separators
# and generate a table showing the physical disk to page83 mappings in a table
#
# field 3 = page83 id
# field 5 = Disk name
#

echo "$output" | grep id: | sed -e 's/  /:/g' | awk -F: -f /tmp/doit.awk

/tmp/doit.awk:

BEGIN { print "\nPhysical Disk Name   | Page83 ID                   "
for(c=0;c<56;c++) printf "-"
printf "\n" }
{ printf "%-20s | %-32s\n", $5, $3 }
END { for(c=0;c<56;c++) printf "-"
printf "\n" }

ZS3-2, IPMP and DirectNFS

I've been working with a customer over the last few days to help them set up their ZS3-2.  They have dual controllers and two storage trays with one storage pool, in an active/passive configuration with two dual-port 10GbE fiber adapters in each controller for a total of four active paths at a time.  They plan on running Oracle Database 12c and storing the database files on the ZS3-2, and they will be using Oracle's DirectNFS (dNFS).

Knowing a little about how DirectNFS works helps here: it acts like a regular NFS client, but runs inside the Oracle database kernel.  Unlike the standard NFS client, dNFS makes direct calls for only what it needs.  It supports concurrent direct I/O, which bypasses the OS buffer cache entirely and takes a lot of overhead out of the picture.  Its biggest advantage is the ability to spread network load across multiple NICs without having to set up load balancing at layer 2 or 3 in the OS.

Knowing that dNFS can automatically load balance across multiple NICs, we can use IPMP on the ZS3-2 to take full advantage of it.  This is basically what the network config looks like on the ZFS appliance (click to enlarge):

ZS3-2 IPMP network configuration

 

With this configuration, traffic is spread across the 4 10GbE adapters on the active controller.  In the event of a path failure, IPMP continues to pass traffic across the remaining paths, as does dNFS on the client side.  If the ZFS controller fails, all resources (including IP addresses) fail over to the second controller and traffic resumes from that point.
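
On the database server side, dNFS multipathing is driven by the oranfstab file, with one path entry per network channel you want it to use.  Here's a rough sketch; the server name, IP addresses, export and mount point are all made-up example values and would be replaced with the interfaces and shares in your own environment:

server: zs3-2
local: 10.1.1.21
path: 10.1.1.101
local: 10.1.1.22
path: 10.1.1.102
export: /export/oradata mount: /u02/oradata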

 

I've also been meaning to play around with vNICs on the ZS3-2 but haven't had a chance just yet.  They let you stack multiple virtual NICs on top of a physical one to get more use out of each port.  One benefit relates to the management ports: in the past you had to dedicate at least two NICs on each controller to management.  Both ports on each controller had to be cabled, but only port 0 on controller 1 and port 1 on controller 2 would be used at any one time; the other ports had to remain unused so resources could fail over to the remaining node in the event of a hardware issue.  Now I can just put two vNICs on top of the first onboard NIC and leave the other three available for data traffic.  For some customers these onboard ports are the only data ports they have, so each one counts!