Create VM in Oracle VM for x86 using NFS share

I’m using OVM Manager 3.4.2 and OVM Server 3.3.2 to test an upgrade for one of our customers.  I’m using Starwind iSCSI server to present the shared storage to the cluster, but in production you should use enterprise-grade hardware for this.  There is an easier way to do this: create an HVM VM, install from an ISO stored in a repository, then power the VM off, change the type to PVM and power it back on.  That approach may not work with every operating system, however, so here I’ll go over how to create a new PVM VM from an ISO image shared from an NFS server.

* Download ISO (I'm using Oracle Linux 6.5 64bit for this example)
* Copy ISO image to OVM Manager (any NFS server is fine)
* Mount ISO on the loopback device
# mount -o loop /var/tmp/V41362-01.iso /mnt

* Share the folder via NFS
# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

# exportfs *:/mnt/

# showmount -e
Export list for ovmm:
/mnt *

* Create new VM in OVM Manager
* Edit VM properties and configure as PVM
* Set additional properties such as memory, CPU and network
* On the Boot Order tab, enter the network boot path formatted like this:
  nfs:{ip address or FQDN of NFS host}:/{path to ISO image top level directory}

For example, our NFS server is 10.2.3.4 and the path where I mounted the ISO is /mnt.  Leave the {}'s off, of course:

  nfs:10.2.3.4:/mnt 

You should be able to boot your VM at this point and perform the install of the OS.

DevOps – What is it?

My company is becoming more and more involved in using some of the tools that make up the DevOps stack.  Part of this work involves using tools like Docker natively, in combination with hardware solutions such as Nimble Storage, to provide customers with a solution that integrates well with their adoption of agile principles and workflows.  Other engagements will be designed to educate our customers on the benefits and disciplines of an Agile workflow and to help them achieve better synergy between the development and operations teams.


To that end, I’m going to be writing a series of posts on the different components that make up DevOps from a tooling perspective as we gain more exposure to them.  A website called The Agile Admin has done a great job of defining just what DevOps is and what it means.  Below is a snippet from their site that covers it well.


DevOps is a new term emerging from the collision of two major related trends. The first was also called “agile system administration” or “agile operations”; it sprang from applying newer Agile and Lean approaches to operations work.  The second is a much expanded understanding of the value of collaboration between development and operations staff throughout all stages of the development lifecycle when creating and operating a service, and how important operations has become in our increasingly service-oriented world.


Having been in the IT industry for over a quarter century, I’ve seen the division between Development and Operations very clearly defined.  They are distinct pillars of job functions that fall into categories like Storage, Server Ops, Networking and Development/Coding.  These pillars have very distinct borders that are enforced by things like policies, request forms, response SLAs and standardized methods of deployment.  It isn’t uncommon for projects to enter the planning, architecture and implementation phases and take an inordinate amount of time to complete.  More concerning is that each pillar, responsible for its own part, has very little knowledge or understanding of the other pillars’ functions.  This dramatically reduces the effectiveness of the whole project because it eliminates many touch points that could potentially identify a critical flaw in the final product.  It also discards most of the perspectives of the team as a whole, perspectives that almost always bring value when considered throughout the process.


This is somewhat of a learning process for me, but it’s based on observation of and involvement in different aspects of the operations side.  I’ve seen the trend towards a more DevOps-style model being driven by the very nature of new converged and hyperconverged systems.  What used to be strictly an operational function, such as provisioning a LUN and presenting it to servers, is now being offered to the developer side of the house (think self-provisioning).  While this moves what used to be an operational function over to the developer, making their job much easier and quicker, it is by no means true DevOps.


I encourage spirited conversation on this topic below; there are many contrasting perspectives regarding DevOps and Agile workflows that I’m interested in hearing about.

Nimble PowerShell Toolkit

I was working on an internal project to test the performance of a converged system solution.  The storage component is a Nimble AF7000 from which we’re presenting almost 30 LUNs, and I’ve had to create, delete and re-provision them a number of times throughout the project.  It became extremely tedious to do this through the WebUI, so I decided to see if it could be scripted.

I know you can log into the Nimble via SSH and basically do what I’m trying to do, and I did test this successfully.  However, I recently had a customer who wanted to use PowerShell to perform daily snapshot/clone operations for an Oracle database running on Windows (don’t ask).  We decided to leverage the Nimble PowerShell Toolkit to perform the operations right from the Windows server.  The script was fairly straightforward, although we had to learn a little about PowerShell syntax along the way.  I’ve included a sanitized script below that basically does what I need.

$arrayname = "IP address or FQDN of array management address"
$nm_uid = "admin"
$nm_password = ConvertTo-SecureString -String "admin" -AsPlainText -Force
$nm_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $nm_uid,$nm_password

# Import Nimble Tool Kit for PowerShell
Import-Module NimblePowerShellToolKit

# Connect to the array
Connect-NSGroup -group $arrayname -credential $nm_cred

# Look up the initiator group ID (requires an active connection to the array)
$initiatorID = Get-NSInitiatorGroup -name {name of initiator group} | select -expandproperty id

# Create 10 DATA Disks
for ($i=1; $i -le 10; $i++) {
    New-NSVolume -Name DATADISK$i -Size 1048576 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name DATADISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}

# Create 10 RECO Disks
for ($i=1; $i -le 10; $i++) {
    New-NSVolume -Name RECODISK$i -Size 1048576 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name RECODISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}

# Create 3 GRID Disks
for ($i=1; $i -le 3; $i++) {
    New-NSVolume -Name GRIDDISK$i -Size 2048 -PerfPolicy_id 036462b75de9a4f69600000000000000000000000e -online $true
    $volumeID = Get-NSVolume -name GRIDDISK$i | select -expandproperty id
    New-NSAccessControlRecord -initiator_group_id $initiatorID -vol_id $volumeID
}
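As a quick sanity check after the loops finish, something like this will confirm the new volumes exist and are online (just an illustration; I’m assuming Get-NSVolume with no parameters lists every volume on the array and that the returned objects expose lowercase name, size and online properties matching the parameter names used above):

# List the volumes just created and confirm they are online
Get-NSVolume | Where-Object { $_.name -match '^(DATA|RECO|GRID)DISK\d+$' } | Select-Object name, size, online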

I also wrote a script, shown below, to delete the LUNs:

$arrayname = "IP address or FQDN of array management address"
$nm_uid = "admin"
$nm_password = ConvertTo-SecureString -String "admin" -AsPlainText -Force
$nm_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $nm_uid,$nm_password

# Import Nimble Tool Kit for PowerShell
Import-Module NimblePowerShellToolKit

# Connect to the array
Connect-NSGroup -group $arrayname -credential $nm_cred

# Look up the initiator group ID (requires an active connection; not actually used by the delete loops below)
$initiatorID = Get-NSInitiatorGroup -name {name of initiator group} | select -expandproperty id


# Delete 10 DATA Disks
for ($i=1; $i -le 10; $i++) {
    Set-NSVolume -name DATADISK$i -online $false
    Remove-NSVolume -name DATADISK$i
}

# Delete 10 RECO Disks
for ($i=1; $i -le 10; $i++) {
    Set-NSVolume -name RECODISK$i -online $false
    Remove-NSVolume -name RECODISK$i 
}

# Delete 3 GRID Disks
for ($i=1; $i -le 3; $i++) {
    Set-NSVolume -name GRIDDISK$i -online $false
    Remove-NSVolume -name GRIDDISK$i 
}

Obviously you’ll have to substitute some of the values, such as $arrayname, $nm_uid, $nm_password and $initiatorID (make sure you remove the {}’s when you put your value in); the PerfPolicy_id is also specific to our array. Hard-coding the password like this is a very insecure way of storing it, but it was a quick and dirty solution at the time. There are ways to store an encrypted copy of the password in a protected text file and read it back into a variable, as sketched below. Or, if you don’t mind being interactive, you can skip providing the credentials and a dialog box will pop up for you to enter them every time the script runs.
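For example, here is a minimal sketch of the file-based approach using standard PowerShell cmdlets (the C:\scripts\nimble_pw.txt path is just a placeholder; ConvertFrom-SecureString without a key uses Windows DPAPI, so the file can only be decrypted by the same user on the same machine):

# One-time setup: prompt for the password, encrypt it and save it to a file
Read-Host -AsSecureString -Prompt "Array password" | ConvertFrom-SecureString | Set-Content "C:\scripts\nimble_pw.txt"

# In the script: read the encrypted password back into a SecureString and build the credential
$nm_uid = "admin"
$nm_password = Get-Content "C:\scripts\nimble_pw.txt" | ConvertTo-SecureString
$nm_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $nm_uid,$nm_password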

It made the project go a lot faster; hopefully you can use this to model scripts that do other things. The entire command set of the Nimble array is basically exposed through the toolkit, so there’s not much you can do in the WebUI that you can’t do here. When you download the toolkit, there is a README PDF that goes through all of the commands. In PowerShell, you can also get help for each of the commands. For example:

PS C:\Users\esteed> help New-NSVolume

NAME
    New-NSvolume

SYNOPSIS
    Create operation is used to create or clone a volume. Creating volumes requires name and size attributes. Cloning
    volumes requires clone, name and base_snap_id attributes where clone is set to true. Newly created volume will not
    have any access control records, they can be added to the volume by create operation on access_control_records
    object set. Cloned volume inherits access control records from the parent volume.


SYNTAX
    New-NSvolume [-name] <String> [-size] <UInt64> [[-description] <String>] [[-perfpolicy_id] <String>] [[-reserve]
    <UInt64>] [[-warn_level] <UInt64>] [[-limit] <UInt64>] [[-snap_reserve] <UInt64>] [[-snap_warn_level] <UInt64>]
    [[-snap_limit] <UInt64>] [[-online] <Boolean>] [[-multi_initiator] <Boolean>] [[-pool_id] <String>] [[-read_only]
    <Boolean>] [[-block_size] <UInt64>] [[-clone] <Boolean>] [[-base_snap_id] <String>] [[-agent_type] <String>]
    [[-dest_pool_id] <String>] [[-cache_pinned] <Boolean>] [[-encryption_cipher] <String>] [<CommonParameters>]


DESCRIPTION
    Create operation is used to create or clone a volume. Creating volumes requires name and size attributes. Cloning
    volumes requires clone, name and base_snap_id attributes where clone is set to true. Newly created volume will not
    have any access control records, they can be added to the volume by create operation on access_control_records
    object set. Cloned volume inherits access control records from the parent volume.


RELATED LINKS

REMARKS
    To see the examples, type: "get-help New-NSvolume -examples".
    For more information, type: "get-help New-NSvolume -detailed".
    For technical information, type: "get-help New-NSvolume -full".

You can also add the -detailed parameter to get a more complete description of each option, or use -examples to see the commands used in real-world situations. Have fun!
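Finally, here is a rough sketch of the daily snapshot/clone workflow mentioned earlier. The -clone and -base_snap_id usage comes straight from the New-NSvolume help above, but New-NSSnapshot (and the assumption that it returns the newly created snapshot object), along with the DATADISK1/nightly/clone names, are my own placeholders, so verify them against the toolkit README before relying on this:

# Assumes the module is imported and Connect-NSGroup has already been run, as in the scripts above

# Snapshot the source volume (cmdlet name assumed; check the toolkit README)
$volumeID = Get-NSVolume -name DATADISK1 | select -expandproperty id
$snap = New-NSSnapshot -name DATADISK1-nightly -vol_id $volumeID

# Clone a new volume from that snapshot using the -clone / -base_snap_id syntax shown above
New-NSVolume -name DATADISK1-clone -size 1048576 -clone $true -base_snap_id $snap.id -online $true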