Oracle Announcements Coming

With Oracle OpenWorld coming up in a week or so, we’re used to Oracle using this event to make major announcements about new products, features or generally good stuff.  They’ve already quietly announced their next SPARC chip this year, so keep your ears peeled for some very impactful but not all that shocking news about one of their current products.  There’s one new product in particular that I’m interested in.  It’s called the SuperCluster M8.  From what I’ve heard, it should be able to run circles around the current Exadata.  Additionally, there have been announcements about the support life for Solaris as well.

 

I can’t say much more until it’s public knowledge, but I would enjoy some spirited discussion about it after it comes out.

 

 

Stay tuned!!


Oracle VM for x86: Hard Partitioning Hands On

As most of you likely know, Oracle has stringent licensing rules when it comes to running their software in a virtual environment.  With anything other than Oracle VM Server for x86 (VMware, Hyper-V, etc.), you basically have to license every core in the cluster.  With OVM, Oracle does accept a specific configuration that satisfies their definition of a “hard partition” where processor licensing is concerned.  This means that if you own 2 processor licenses for Oracle Database EE, for example, and are running on a platform that has a .5 license multiplier (such as x86), you are entitled to run that software on 4 cores.
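
To make that arithmetic concrete, here’s the calculation as a quick sketch (the .5 core factor comes from Oracle’s processor core factor table for x86; confirm against the current table for your hardware):

# licensable cores = processor licenses / core factor
# 2 EE processor licenses / 0.5 (x86 core factor) = 4 physical cores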

 

Here are the requirements to satisfy the hard partition I mentioned above (taken from a document that is linked in InfoDoc 1529408.1):

To conform to the Oracle hard partition licensing requirement, you must follow the instructions described in this white paper to bind vCPUs to physical CPU threads or cores.

Live migration of CPU pinned virtual machines to another Oracle VM Server is not permitted under the terms of the hard partitioning license. Consequently, for Oracle VM Release 3, any servers running CPU pinned guests must not be included in DRS (Distributed Resource Scheduler) and DPM (Distributed Power Management) policies.
When live migration is used in an Oracle VM server pool, hard partition licensing is not applicable. You must determine the number of virtual machines running the Oracle Software and then license the same number of physical servers (starting with the largest servers based on the CPU core count) up to the total number of the physical servers in the pool. For example, if a customer has a server pool with 32 servers and 20 virtual machines running Oracle Software within the server pool, the customer must license the 20 largest physical servers in the pool. If the customer is running 50 virtual machines with Oracle Software in a pool of 32 physical servers, they need only to license the 32 physical servers in the pool.

Live migration of other virtual machines with non-Oracle software within the server pool is not relevant to Oracle software hard partitioning or has no impact to how Oracle software license is calculated.

“Trusted Partitions” allow subset licensing without limitation on live migration, but only available on the approved Oracle Engineered Systems listed on Oracle licensing policies for partitioned environments.

 

There is more information in that document on how to actually perform the CPU pinning but we don’t need to get into that level of detail just yet.  To summarize- here are the key takeaways you should be aware of when considering using OVM for hard partitioning:

  • The use of hyperthreading or no hyperthreading is irrelevant to Oracle from a licensing perspective
  • vCPUs are bound or “pinned” to physical cores using an OVM Manager utility that must be downloaded and installed on your OVM Manager
  • Live Migration, DRS and DPM are not allowed for pinned VMs
  • You have to choose which vCPUs you want to pin your VM to.  Be careful that you don’t accidentally pin more than one VM to a given set of vCPUs- it’s a completely valid configuration as far as OVM is concerned, but your performance will go to hell due to contention in the CPU scheduler.
  • Get in the habit of pinning your secondary workloads (applications that don’t require hard partitions) to a set of unused vCPUs.  This way they can’t potentially run on the same vCPU that you just pinned your production database VM to.
  • Make sure when you bind vCPUs that you don’t accidentally cross core boundaries.  It only takes 1 vCPU running on a separate core to mess up your licensing costs.  See my blog post here to get an idea of what I mean.
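
One quick way to keep yourself honest about core boundaries is to look at the CPU topology on the OVM server itself.  The hard partitioning white paper leans on the Xen tooling for this; a minimal sketch (xenpm ships with Xen on OVM Server):

# show which CPU threads belong to which core and socket,
# so a vCPU range like 0-3 can be checked against core boundaries
xenpm get-cpu-topology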

 

The Real World

Now I want to show you a few things that they don’t talk about in the licensing documents that you are likely to run across in your life as an OVM administrator.

  • live migrate a pinned VM from one OVM Server to another

[Screenshot: OVM Manager showing the 4 VMs running in the cluster]

As you can see above, we have 4 VMs running in this cluster.  Below is an overview of prod_db1.  Take note of the ID field; we’ll use it later to identify the VM:

[Screenshot: prod_db1 overview, including the ID field]

We’re gonna use prod_db1 as our guinea pig for this experiment.  Currently prod_db1 is running on server OVM1 and is pinned to vCPUs 0-3 as noted in the vm.cfg snippet below:

[Screenshot: vm.cfg snippet for prod_db1 showing the vCPU pinning]
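
Since the screenshot doesn’t reproduce well here: the pinning lives in the VM’s vm.cfg as the standard Xen cpus directive.  A minimal sketch of what that snippet contains (values matching this example; all other lines omitted):

# prod_db1: two vCPUs, both restricted to physical CPU threads 0-3
vcpus = 2
cpus = '0-3'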

I also have a VM running on server ovm2 that is pinned to the very same vCPUs:

[Screenshot: vm.cfg snippet for prod_db3 on ovm2, pinned to the same vCPUs]

One would think you cannot live migrate the VM from ovm1 to ovm2, because prod_db3 is already pinned to the same vCPUs on ovm2, right?

[Screenshot: live migration dialog for prod_db1]

 

You certainly can perform the live migration.  Here’s what will happen:

  • The VM will successfully migrate to ovm2
  • prod_db1 will only run on vCPUs 0-3 on ovm2
  • prod_db3 will only run on vCPUs 0-3 on ovm2
  • your performance in both VMs will likely go down the drain
  • you will be out of compliance with Oracle hard partition licensing requirements

 

I’ve had a LOT of people ask me this question, so here’s your proof:

[root@ovm1 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0004fb00000600000632b8de1db5a014 3 0 20 -b- 3988.0 any cpu
0004fb00000600000632b8de1db5a014 3 1 21 -b- 133.8 any cpu
0004fb00000600008825773ba1661d01 2 0 0 -b- 3083.6 0-3
0004fb00000600008825773ba1661d01 2 1 3 -b- 308.1 0-3
Domain-0 0 0 0 r-- 63990.1 0
Domain-0 0 1 1 r-- 62421.0 1
Domain-0 0 2 2 -b- 16102.8 2
Domain-0 0 3 3 -b- 10355.7 3
Domain-0 0 4 4 -b- 2718.1 4
Domain-0 0 5 5 -b- 9427.4 5
Domain-0 0 6 6 -b- 5660.8 6
Domain-0 0 7 7 -b- 3932.0 7
Domain-0 0 8 8 -b- 2268.0 8
Domain-0 0 9 9 -b- 8477.9 9
Domain-0 0 10 10 -b- 4950.6 10
Domain-0 0 11 11 -b- 4304.6 11
Domain-0 0 12 12 -b- 2001.5 12
Domain-0 0 13 13 -b- 10321.1 13
Domain-0 0 14 14 -b- 5221.5 14
Domain-0 0 15 15 -b- 3515.0 15
Domain-0 0 16 16 -b- 2408.8 16
Domain-0 0 17 17 -b- 9905.2 17
Domain-0 0 18 18 -b- 6105.3 18
Domain-0 0 19 19 -b- 4504.2 19



[root@ovm2 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
Domain-0 0 0 0 -b- 54065.1 0
Domain-0 0 1 1 -b- 10110.4 1
Domain-0 0 2 2 -b- 4909.4 2
Domain-0 0 3 3 -b- 6344.0 3
Domain-0 0 4 4 -b- 1012.4 4
Domain-0 0 5 5 -b- 6506.3 5
Domain-0 0 6 6 -b- 4163.1 6
Domain-0 0 7 7 -b- 1564.5 7
Domain-0 0 8 8 -b- 1367.5 8
Domain-0 0 9 9 -b- 14307.2 9
Domain-0 0 10 10 -b- 4068.7 10
Domain-0 0 11 11 -b- 1799.4 11
Domain-0 0 12 12 -b- 1731.3 12
Domain-0 0 13 13 -b- 5478.0 13
Domain-0 0 14 14 -b- 6983.5 14
Domain-0 0 15 15 -b- 5781.6 15
Domain-0 0 16 16 -b- 723.4 16
Domain-0 0 17 17 r-- 4922.6 17
Domain-0 0 18 18 r-- 3585.3 18
Domain-0 0 19 19 -b- 1705.8 19
0004fb0000060000c9e5303a8dc2c675 3 0 0 -b- 5556.6 0-3
0004fb0000060000c9e5303a8dc2c675 3 1 3 -b- 144.4 0-3
  • Now I live migrate prod_db1 from ovm1 to ovm2

[Screenshots: live migration of prod_db1 from ovm1 to ovm2 in progress]

[Screenshot: prod_db1 now running on ovm2]

 

Here is the new vcpu-list post-migration:

[root@ovm1 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0004fb00000600000632b8de1db5a014 3 0 20 -b- 4007.2 any cpu
0004fb00000600000632b8de1db5a014 3 1 21 -b- 134.4 any cpu
Domain-0 0 0 0 r-- 64376.4 0
Domain-0 0 1 1 r-- 62793.1 1
Domain-0 0 2 2 -b- 16201.5 2
Domain-0 0 3 3 -b- 10418.6 3
Domain-0 0 4 4 -b- 2743.2 4
Domain-0 0 5 5 -b- 9486.1 5
Domain-0 0 6 6 -b- 5702.4 6
Domain-0 0 7 7 -b- 3955.7 7
Domain-0 0 8 8 -b- 2279.8 8
Domain-0 0 9 9 -b- 8530.4 9
Domain-0 0 10 10 -b- 4984.4 10
Domain-0 0 11 11 -b- 4328.3 11
Domain-0 0 12 12 -b- 2013.2 12
Domain-0 0 13 13 -b- 10390.7 13
Domain-0 0 14 14 -b- 5257.2 14
Domain-0 0 15 15 -b- 3542.0 15
Domain-0 0 16 16 -b- 2422.3 16
Domain-0 0 17 17 -b- 9969.5 17
Domain-0 0 18 18 -b- 6150.0 18
Domain-0 0 19 19 -b- 4532.5 19



[root@ovm2 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0004fb00000600008825773ba1661d01 5 0 2 -b- 1.9 0-3
0004fb00000600008825773ba1661d01 5 1 1 -b- 0.2 0-3
Domain-0 0 0 0 -b- 54418.2 0
Domain-0 0 1 1 -b- 10228.5 1
Domain-0 0 2 2 -b- 4939.8 2
Domain-0 0 3 3 -b- 6373.9 3
Domain-0 0 4 4 -b- 1024.7 4
Domain-0 0 5 5 -b- 6547.6 5
Domain-0 0 6 6 -b- 4218.0 6
Domain-0 0 7 7 -b- 1596.2 7
Domain-0 0 8 8 -b- 1374.9 8
Domain-0 0 9 9 -b- 14341.6 9
Domain-0 0 10 10 -b- 4099.5 10
Domain-0 0 11 11 -b- 1822.6 11
Domain-0 0 12 12 -b- 1737.6 12
Domain-0 0 13 13 r-- 5513.4 13
Domain-0 0 14 14 -b- 7016.8 14
Domain-0 0 15 15 -b- 5814.6 15
Domain-0 0 16 16 -b- 731.6 16
Domain-0 0 17 17 -b- 4960.6 17
Domain-0 0 18 18 -b- 3617.2 18
Domain-0 0 19 19 -b- 1714.2 19
0004fb0000060000c9e5303a8dc2c675 3 0 3 -b- 5590.3 0-3
0004fb0000060000c9e5303a8dc2c675 3 1 0 -b- 145.6 0-3

 

You can see that both VMs are pinned to the same vCPUs and they’re still running just fine.  Like I said- it will technically work, but you’re shooting yourself in the foot in multiple ways if you do this.  Also keep in mind- if you turn on HA for prod_db1 and ovm1 goes down, the VM will fail to start on ovm2 because of the CPU pinning.  Don’t say I didn’t warn you!

 

  • Apply CPU pinning to a VM online with no reboot

In OVM 3.2 and 3.3, you were able to apply CPU pinning to a VM live without having to restart it.  A bug emerged in OVM 3.4.1 and 3.4.2 that broke this; however, it was fixed in OVM 3.4.3.  So depending on which version of OVM you’re running, you may be able to pin your VMs without having to take a reboot.  Watch and be amazed!

 

Currently running OVM 3.3.3:

[root@ovm1 ~]# cat /etc/ovs-release
Oracle VM server release 3.3.3

 

ovm_vmcontrol utilities are installed:

[root@ovmm ovm_util]# pwd
/u01/app/oracle/ovm-manager-3/ovm_util
[root@ovmm ovm_util]# ls -la
total 44
drwxrwxr-x 5 root root 4096 Jul 2 2014 .
drwxr-xr-x 11 oracle dba 4096 Aug 29 13:04 ..
drwxrwxr-x 2 root root 4096 Jul 2 2014 class
drwxr-xr-x 2 root root 4096 Jul 2 2014 lib
drwxr-xr-x 3 root root 4096 Jul 2 2014 man
-rwxr-xr-x 1 root root 1229 Jul 2 2014 ovm_reporestore
-rwxr-xr-x 1 root root 1227 Jul 2 2014 ovm_vmcontrol
-rwxr-xr-x 1 root root 1245 Jul 2 2014 ovm_vmdisks
-rwxr-xr-x 1 root root 1245 Jul 2 2014 ovm_vmhostd
-rwxr-xr-x 1 root root 1246 Jul 2 2014 ovm_vmmessage
-rwxr-xr-x 1 root root 2854 Jul 2 2014 vm-dump-metrics

 

I have an existing VM that is currently allowed to run on any vCPU on the server:

[root@ovm1 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0004fb00000600000632b8de1db5a014 3 0 20 -b- 4012.8 any cpu
0004fb00000600000632b8de1db5a014 3 1 21 -b- 134.6 any cpu
Domain-0 0 0 0 -b- 64446.0 0
Domain-0 0 1 1 -b- 62820.1 1
Domain-0 0 2 2 -b- 16213.7 2
Domain-0 0 3 3 -b- 10426.0 3
Domain-0 0 4 4 -b- 2746.1 4
Domain-0 0 5 5 -b- 9499.3 5
Domain-0 0 6 6 -b- 5712.5 6
Domain-0 0 7 7 -b- 3960.2 7
Domain-0 0 8 8 -b- 2282.3 8
Domain-0 0 9 9 -b- 8541.0 9
Domain-0 0 10 10 -b- 4992.0 10
Domain-0 0 11 11 -b- 4334.6 11
Domain-0 0 12 12 -b- 2015.6 12
Domain-0 0 13 13 -b- 10404.4 13
Domain-0 0 14 14 -b- 5265.1 14
Domain-0 0 15 15 -b- 3546.7 15
Domain-0 0 16 16 -b- 2423.7 16
Domain-0 0 17 17 r-- 9983.8 17
Domain-0 0 18 18 -b- 6158.2 18
Domain-0 0 19 19 -b- 4536.8 19

 

Now let’s pin that VM to vCPUs 8-11:

[root@ovmm ovm_util]# ./ovm_vmcontrol -u admin -p ******** -h localhost -v prod_db2 -c vcpuset -s 8-11
Oracle VM VM Control utility 2.0.1.
Connecting with a secure connection.
Connected.
Command : vcpuset
Pinning virtual CPUs
Pinning of virtual CPUs to physical threads '8-11' 'prod_db2' completed.
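
As an aside, the same utility can read pinning back with the vcpuget command (it’s shown in the hard partitioning white paper), which is handy if you don’t have shell access to the OVM server to run xm vcpu-list:

[root@ovmm ovm_util]# ./ovm_vmcontrol -u admin -p ******** -h localhost -v prod_db2 -c vcpuget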

 

And here’s our proof that the pinning is applied immediately with no reboot:

[root@ovm1 ~]# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0004fb00000600000632b8de1db5a014 3 0 10 -b- 4013.6 8-11
0004fb00000600000632b8de1db5a014 3 1 8 -b- 134.6 8-11
Domain-0 0 0 0 -b- 64454.8 0
Domain-0 0 1 1 -b- 62823.2 1
Domain-0 0 2 2 -b- 16215.2 2
Domain-0 0 3 3 -b- 10427.0 3
Domain-0 0 4 4 -b- 2746.3 4
Domain-0 0 5 5 r-- 9500.6 5
Domain-0 0 6 6 -b- 5713.6 6
Domain-0 0 7 7 -b- 3960.6 7
Domain-0 0 8 8 -b- 2282.5 8
Domain-0 0 9 9 -b- 8542.9 9
Domain-0 0 10 10 -b- 4992.8 10
Domain-0 0 11 11 -b- 4335.0 11
Domain-0 0 12 12 -b- 2015.8 12
Domain-0 0 13 13 -b- 10406.7 13
Domain-0 0 14 14 -b- 5266.4 14
Domain-0 0 15 15 -b- 3547.2 15
Domain-0 0 16 16 -b- 2424.2 16
Domain-0 0 17 17 -b- 9984.8 17
Domain-0 0 18 18 -b- 6159.6 18
Domain-0 0 19 19 -b- 4537.6 19

 

You’ll just have to take my word that I didn’t reboot the VM in between the steps- which is backed up by the Time(s) column for that VM (note that it increased a little rather than resetting to 0).

 

 

Well- happy hunting for now!

OVM CPU Pinning


 

Oracle has published a few documents (2240035.1 and 2213691.1 for starters) about CPU pinning in relation to hard partitions for VMs running on OVM.  This is to avoid having to license every core on the server (like you have to with VMware) for Oracle products that are licensed per core or per user.

 

I’m going to provide an Excel spreadsheet at the end of this post that will help you visualize which VM is pinned to which CPU and whether there is any overlap.  When a VM is not pinned to a given CPU, it is allowed to run on any CPU, within the constraints of the Xen scheduler and wherever it decides the VM should run.  The scheduler takes into account things like NUMA and core boundaries to avoid scheduling a VM in a way that is inefficient.
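
If you’d rather check the live state on a server than maintain the spreadsheet by hand, here’s a rough sketch against xm vcpu-list (run on the OVM server; the awk field positions assume the output format shown in my hands-on post) that lists each pinned guest and its affinity so overlaps stand out:

# list pinned domains: skip the header, Domain-0 and 'any cpu' guests
xm vcpu-list | awk 'NR>1 && $1!="Domain-0" && $NF!="cpu" {print $1, $NF}' | sort -u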

 

You will need to modify this spreadsheet to fit your server configuration.  Use the information in the ovm-hardpart-168217 document to figure out what your system’s CPU topology looks like.

 

A couple things to keep in mind:

  • You cannot live migrate a VM that is pinned.  Technically it will work and the VM will migrate, but Oracle does not allow this based on the terms of their hard partitioning license.  See the attached document ovm-hardpart-168217 at the end of this post for more information.
  • When you pin a VM to a vCPU or range of vCPUs, that VM can only run on those vCPUs.  However, if you have other VMs that are not pinned, they can run on any vCPU on the system- including the ones that you just pinned your production database to!  If you have a combination of pinned and unpinned VMs, pin all the other VMs to the range of vCPUs that you want to lock them to.  This way, they can’t run on any vCPUs that you’ve already pinned VMs to.
  • Remember that DOM0 has to be scheduled to run just like the other resources.  Based on how big your system is, OVM will run DOM0 on the first few vCPUs.  This shouldn’t be a problem unless your DOM0 is extremely busy doing work such as processing I/O for the VMs that are running and handling interrupts.  In that case, if you have VMs that are pinned to the same vCPUs as DOM0 you might have some performance problems.  I’ve outlined where DOM0 runs by default on a system of this size in the example.
  • Realize that you can pin more than one VM to a vCPU.  I wouldn’t recommend this for obvious performance reasons but it’s possible to do.  This is where the spreadsheet comes in handy.
  • If you’re installing the ovm utilities which provide ovm_vmcontrol, you may need to enable remote connections first.  If you get an error message stating that there is an error connecting to localhost, perform the steps below.  Pay attention to the version of the ovm utilities that you install; the readme will show you which of the three (currently) versions to install based on the version of OVM you’re running.
  • Below are the steps to enable remote connections (this was taken from Douglas Hawthorne’s blog here).  Note that the steps below should be performed as the root user, not oracle:
[root@melbourne ~]# cd /u01/app/oracle/ovm-manager-3/bin
[root@melbourne bin]# ./secureOvmmTcpGenKeyStore.sh
Generate OVMM TCP over SSH key store by following steps:
Enter keystore password:
Re-enter new password:
What is your first and last name?
 [Unknown]: OVM
What is the name of your organizational unit?
 [Unknown]: melbourne
What is the name of your organization?
 [Unknown]: YAOCM
What is the name of your City or Locality?
 [Unknown]: Melbourne
What is the name of your State or Province?
 [Unknown]: Victoria
What is the two-letter country code for this unit?
 [Unknown]: AU
Is CN=OVM, OU=melbourne, O=YAOCM, L=Melbourne, ST=Victoria, C=AU correct?
 [no]: yes

Enter key password for <ovmm>
 (RETURN if same as keystore password):
Re-enter new password:
[root@melbourne bin]# ./secureOvmmTcp.sh
Enabling OVMM TCP over SSH service

Please enter the Oracle VM manager user name: admin

Please enter the Oracle VM manager user password:

Please enter the password for TCPS key store :

The job of enabling OVMM TCPS service is committed, please restart OVMM to take effect.





[root@melbourne ~]# service ovmm restart
Stopping Oracle VM Manager [ OK ]
Starting Oracle VM Manager [ OK ]

 

If you have any questions- feel free to post them here.  Good luck!

 

 

CPU pinning example

ovm-hardpart-168217

OVM Manager Cipher Mismatch fix

I was installing a virtual OVM 3.3.3 test environment the other day and when I got to logging into OVM Manager for the first time I got this error:

[Screenshot: browser error caused by the cipher mismatch]

This has to do with the fact that most modern browsers have dropped support for the older RC4 encryption cipher, which is what OVM Manager uses.  There is a “fix” until you update to a newer version that has this bug patched.  See InfoDoc 2099148.1 for all the details, but here’s the meat of it:

 

  • Make a backup of the Weblogic config file
# cd /u01/app/oracle/ovm-manager-3/domains/ovm_domain/config
# cp config.xml config.xml.bak

 

  • Add the following line to the ciphersuite section (search for ciphersuite)
<ciphersuite>TLS_RSA_WITH_AES_128_CBC_SHA</ciphersuite>
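
If you want to confirm you’re editing the right spot first, a quick grep works (a sketch; the backup step above already cd’d into the config directory, so the relative path is fine):

# grep -n ciphersuite config.xml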

 

  • Restart the ovm manager service and all is well
# service ovmm restart
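
Once the manager is back up, you can sanity-check that the new cipher is offered with an openssl probe from any client (a sketch; 7002 is the usual OVM Manager https port, and AES128-SHA is OpenSSL’s name for TLS_RSA_WITH_AES_128_CBC_SHA):

# openssl s_client -connect ovmm-host:7002 -cipher AES128-SHA < /dev/null | grep -i cipher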

Virtualized ODA X6-2HA – working with VMs

It’s been a while since I built a virtualized ODA with VMs on a shared repo, so I thought I’d go through the basic steps.

  1. install the OS
    1. install Virtual ISO image
    2. configure networking
    3. install ODA_BASE patch
    4. deploy ODA_BASE
    5. configure networking in ODA_BASE
    6. deploy ODA_BASE with configurator
  2. create shared repository.  This is where your specific situation plays out.  Depending on your hardware you may have less or more space in DATA or RECO.  Your DBA will be able to tell you how much they need for each and where you can borrow a few terabytes (or however much you need) for your VMs
  3. (optionally) create a separate shared repository to store your templates.  This all depends on how many of the same kind of VM you’ll be deploying.  If it makes no sense to keep the templates around once you create your VMs then don’t bother with this step
  4. import template into repository (see the command sketch after this list)
    1. download the assembly file from Oracle (it will unzip into an .ova archive file)
    2. ***CRITICAL*** copy the .ova to /OVS on either node’s DOM0, not into ODA_BASE
    3. import the assembly (point it to the file sitting in DOM0 /OVS)
  5. modify template config as needed (# of vCPUs, Memory, etc)
  6. clone the template to a VM
  7. add network to VM (usually net1 for first public network, net2 for second and net3+ for any VLANs you’ve created)
  8. boot VM and start console (easiest way is to VNC into ODA_BASE and launch it from there)
  9. set up your hostname, networking, etc the way you want it
  10. reboot VM to ensure changes persist
  11. rinse and repeat as needed
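
Here’s a hedged sketch of what steps 2, 4, 5, 6 and 7 above look like from the ODA_BASE command line.  Exact flags vary by oakcli version, and the repo size, template name, assembly file name and VM name below are all placeholders, so check oakcli help on your system first:

# create a 2TB shared repo carved out of the DATA diskgroup
oakcli create repo vmrepo1 -dg DATA -size 2048
# import the assembly you copied to /OVS on DOM0 (file name is hypothetical)
oakcli import vmtemplate ol6tmpl -assembly /OVS/OVM_OL6U5_x86_64.ova -repo vmrepo1 -node 0
# adjust the template before cloning, then clone, add a network and boot
oakcli configure vmtemplate ol6tmpl -vcpu 4 -memory 8192M
oakcli clone vm proddb_vm -vmtemplate ol6tmpl -repo vmrepo1 -node 0
oakcli modify vm proddb_vm -addnetwork net1
oakcli start vm proddb_vm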

If you need to configure HA, preferred node or any other things, this is the time to do it.

 

ODA Software – Closed for Business!

I’ve deployed a number of these appliances over the last couple years both virtualized and bare metal.  When people realize that Oracle Linux is running under the hood they sometimes think it’s ok to throw rpmforge up in there and have at it.  What’s worse is a customer actually tried to do a yum update on the OS itself from the Oracle public YUM repo!   Ack….

 

I guess I can see wanting to stay patched to the latest available kernel or version of tools, but it needs to be understood that this appliance is a closed ecosystem.  The beauty of patching the ODA is the fact that I don’t have to chase down all the firmware updates for HDD/SSD/NVM disks, ILOM, BIOS, etc…  That legwork has already been done for me.  Plus the fact that all the patches are tested as a unit together on each platform makes me able to sleep better at night.  Sure- the patches take about 4-5 hours all said and done, but when you’re done, you’re done!  I’m actually wondering if Oracle will eventually implement busybox or something like it for the command line interface to hide the OS layer from end users.  With their move to a web interface for provisioning of the ODA X6-2S/M/L it seems they’ve taken a step in that direction.

 

If you decide to add repositories to your ODA in order to install system utilities like sysstat and such, it’s generally ok, but I need to say this:  the Oracle hard line states that no additional software should be installed on the ODA at all.  In support of that statement, I will say that I’ve had problems patching when the Oracle public YUM repo is configured and I also ran into the expired RHN key error that started rearing its ugly head at the beginning of 2017.  Both of these are easily fixed, but why put yourself in that position in the first place?

 

Also, in closing, I’d like to recommend to all my customers/readers that you make it a priority to patch your ODA at least once a year.  There are actual ramifications to being out of date that have bitten folks.  I can think of one case where the customer’s ODA hadn’t been updated in 3-4 years.  The customer experienced multiple hard drive failures within a week or two, and because they had their ODA loaded to the hilt, the ASM rebuild was impacting performance dramatically.  The reason the drives failed so close to each other, and more importantly the way they failed, was outdated disk firmware.  Newer firmware was available that changed the way disk failure was handled: it was more sensitive to “blips” and failed out the disk instead of letting it continue to stay in service.  As a result, the disks were dying for a while and causing degraded performance.  Another reason the disks probably failed early-ish is the amount of load being placed on the system.  Anywho… just remember to patch, ok?

 

 

Create VM in Oracle VM for x86 using NFS share

I’m using OVM Manager 3.4.2 and OVM Server 3.3.2 to test an upgrade for one of our customers.  I am using Starwind iSCSI server to present the shared storage to the cluster, but in production you should use enterprise grade hardware to do this.  There’s an easier way to do this: create an HVM VM and install from an ISO stored in a repository, then power the VM off, change the type to PVM and power back on.  This may not work with all operating systems, however, so I’m going over how to create a new PVM VM from an ISO image shared from an NFS server.

* Download ISO (I'm using Oracle Linux 6.5 64bit for this example)
* Copy ISO image to OVM Manager (any NFS server is fine)
* Mount ISO on the loopback device
# mount -o loop /var/tmp/V41362-01.iso /mnt

* Share the folder via NFS
# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

# exportfs *:/mnt/

# showmount -e
Export list for ovmm:
/mnt *

* Create new VM in OVM Manager
* Edit VM properties and configure as PVM
* Set additional properties such as memory, cpu and network
* At the boot order tab, enter the network boot path formatted like this:
  nfs:{ip address or FQDN of NFS host}:/{path to ISO image top level directory}

For example, our NFS server is 10.2.3.4 and the path where I mounted the ISO is at /mnt.  Leave the {}'s off of course:

  nfs:10.2.3.4:/mnt 

You should be able to boot your VM at this point and perform the install of the OS.

Putting the Oracle SPARC M7 Chip through its paces

From time to time I get an opportunity to dive under the hood of some pretty cool technologies in my line of work.  Being an Oracle Platinum Partner, Collier IT specializes in Oracle based hardware and software solutions.  On the hardware side we work with Exadata, Oracle Database Appliance and the Oracle ZFS Appliance just to name a few.  We have a pretty nice lab that includes our own Exadata and ODA, and just recently a T7-2.

 

Featuring the new SPARC M7 chip released in October of 2015 with Software in Silicon technology, the M7-x and T7-x server line represents a huge leap forward in Oracle Database performance.  The difference between the M7 and T7 servers is basically size and power.  The chip itself is called M7, not to be confused with the server model M7-x.  The T7-x servers also use the same M7 processor.  Hopefully that clears up any confusion on this going forward.  Here’s a link to a datasheet that outlines the server line in more detail.

 

In addition to faster on-chip encryption and real time data integrity checking, SQL query acceleration provides an extremely compelling use case for consolidation while maintaining a high level of performance and security with virtually no overhead.  The SPARC line of processors has come a very long way indeed since its infancy.  Released in late 1987, it was designed from the start to provide a highly scalable architecture around which to build a compute package that ranged from embedded processors all the way up to large server based CPUs while utilizing the same core instruction set.  The name SPARC itself stands for Scalable Processor ARChitecture.  Based on the RISC (Reduced Instruction Set Computer) architecture, operations are designed to be as simple as possible.  This helps achieve nearly one instruction per CPU cycle, which allows for greater speed and simplicity of hardware.  Furthermore, this helps promote consolidation of other functions such as memory management or floating point operations on the same chip.

 

Some of what the M7 chip is doing has actually been done in principle for decades.  Applications such as hardware video acceleration or cryptographic acceleration leverage instruction sets hard coded into the processor itself, yielding incredible performance.  Think of it as a CPU that has only one job in life: to do one thing and do it very fast.  Modern CPUs such as the Intel x86 CPU have many, many jobs to perform, and they have to juggle all of them at once.  They are very powerful, but because of the sheer number of jobs they are asked to perform, they don’t really excel at any one thing.  Call them a jack of all trades and master of none.  What a dedicated hardware accelerator does for video playback, for example, is what Oracle is doing with database instructions such as SQL in the M7 chip.  The M7 processor is still a general purpose CPU, however it has the ability to perform database related instructions in hardware at machine level speeds with little to no overhead.  Because of this, the SPARC M7 is able to outperform other general purpose processors that have to timeshare those types of instructions along with all the other workloads they’re being asked to perform.

 

A great analogy would be comparing an athlete who competes in a decathlon to a sprint runner.  The decathlete is very good at running fast, however he needs to be proficient in 9 other areas of competition.  Because of this, the decathlete cannot possibly be as good at running fast as the sprinter, because the sprinter is focusing on doing just one thing and being the best at it.  In the same vein, the M7 chip performs SQL instructions like a sprinter.  The same applies to encryption and real time data compression.

 

Having explained this concept, we can now get into practical application.  The most common use case will be for accelerating Oracle Database workloads.  I’ll spend some time digging into that in my next article.  Bear in mind that there are also other applications such as crypto acceleration and hardware data compression that are accelerated as well.

 

Over the past few weeks, we’ve been doing some benchmark comparisons between 3 very different Oracle Database hardware configurations.  The Exadata (x5), the Oracle Database Appliance (x5) and an Oracle T7-2 are the three platforms that were chosen.  There is a white paper that Collier IT is in the process of developing which I will be a part of.  Because the data is not yet fully analyzed, I can’t go into specifics on the results.  What I can say is that the T7-2 performed amazingly well from a price/performance perspective compared to the other two platforms.

 

Stay tuned for more details on a new test with the S7 and a Nimble CS-500 array as well as a more in depth look at how the onboard acceleration works including some practical examples.

 


OVM Server for x86 version 3.4.2 released!

Oracle has just released the latest version of Oracle VM for x86 and announced it at OpenWorld.  There are some really cool additions that enhance the stability and usability of Oracle VM.  Here are some of the new features:

 

Installation and Upgrades

Oracle VM Manager support for previous Oracle VM Server releases
As of Oracle VM Release 3.4.2, Oracle VM Manager supports current and previous Oracle VM Server releases. For more information, see Chapter 6, Oracle VM Manager Support for Previous Oracle VM Server releases.

Infrastructure

Support for NVM Express (NVMe) devices
Oracle VM Server now discovers NVMe devices and presents them to Oracle VM Manager, where the NVMe device is available as a local disk that you can use to store virtual machine disks or create storage repositories.

The following rules apply to NVMe devices:

Oracle VM Server for x86
  • To use the entire NVMe device as a storage repository or for a single virtual machine physical disk, you should not partition the NVMe device.
  • To provision the NVMe device into multiple physical disks, you should partition it on the Oracle VM Server where the device is installed. If an NVMe device is partitioned then Oracle VM Manager displays each partition as a physical disk, not the entire device.

    You must partition the NVMe device outside of the Oracle VM environment. Oracle VM Manager does not provide any facility for partitioning NVMe devices.

  • NVMe devices can be discovered if no partitions exist on the device.
  • If Oracle VM Server is installed on an NVMe device, then Oracle VM Server does not discover any other partitions on that NVMe device.
Oracle VM Server for SPARC
  • Oracle VM Manager does not display individual partitions on an NVMe device but only a single device.

    Oracle recommends that you create a storage repository on the NVMe device if you are using Oracle VM Server for SPARC. You can then create as many virtual disks as required in the storage repository. However, if you plan to create logical storage volumes for virtual machine disks, you must manually create ZFS volumes on the NVMe device. See Creating ZFS Volumes on NVMe Devices in the Oracle VM Administration Guide.

Using Oracle Ksplice to update the dom0 kernel
Oracle Ksplice capabilities are now available that allow you to update the dom0 kernel for Oracle VM Server without requiring a reboot. Your systems remain up to date with their OS vulnerability patches and downtime is minimized. A Ksplice update takes effect immediately when it is applied. It is not an on-disk change that only takes effect after a subsequent reboot.

Note

This does not impact the underlying Xen hypervisor.

Depending on your level of support, contact your Oracle support representative for assistance before using Oracle Ksplice to update the dom0 kernel for Oracle VM Server. For more information, see Oracle VM: Using Ksplice Uptrack Document ID 2115501.1, on My Oracle Support at: https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=2115501.1.

Extended SCSI functionality available for virtual machines
Oracle VM now provides additional support for SCSI functionality to virtual machines:

  • Linux guests can now retrieve vital product data (VPD) page 0x84 information from physical disks if the device itself makes it available.
  • Microsoft Windows Server guests can use SCSI-3 persistent reservation to form a Microsoft Failover Cluster in an upcoming Oracle VM Paravirtual Drivers for Microsoft Windows release. See the Oracle VM Paravirtual Drivers for Microsoft Windows documentation for information about the availability of failover cluster capabilities on specific Microsoft Operating System versions.
Dom0 kernel upgraded
The dom0 kernel for Oracle VM Server is updated to Oracle Unbreakable Enterprise Kernel Release 4 Quarterly Update 2 in this release.

Package additions and updates
  • The ovmport-1.0-1.el6.4.src.rpm package is added to the Oracle VM Server ISO to support Microsoft Clustering and enable communication between Dom0 and DomU processes using the libxenstore API.
  • The Perl package is updated to perl-5.10.1-141.el6_7.1.src.rpm.
  • The Netscape Portable Runtime (NSPR) package is updated to nspr-4.11.0-1.el6.x86_64.rpm.
  • The openSCAP package is updated to openscap-1.2.8-2.0.1.el6.rpm.
  • The Linux-firmware package is updated to linux-firmware-20160616-44.git43e96a1e.0.12.el6.src.rpm.

Performance and Scalability

Oracle VM Manager performance enhancements
This release enhances the performance of Oracle VM Manager by reducing the number of non-critical events that Oracle VM Server sends to Oracle VM Manager when a system goes down.

Note

If you are running a large Oracle VM environment, it is recommended to increase the amount of memory allocated to the Oracle WebLogic Server. This ensures that adequate memory is available when required. See Increasing the Memory Allocated to Oracle WebLogic Server in the Oracle VM Administration Guide for more information.

Oracle VM Server for x86 performance optimization
For information on performance optimization goals and techniques for Oracle VM Server for x86, see Optimizing Oracle VM Server for x86 Performance, on Oracle Technology Network at: http://www.oracle.com/technetwork/server-storage/vm/ovm-performance-2995164.pdf.

Xen 4.4.4 performance and scalability updates
  • Improved memory allocation: Host system performance is improved by releasing memory more efficiently when tearing down domains, for example, migrating a virtual machine from one Oracle VM Server to another or deleting a virtual machine. This ensures that the host system can manage other guest systems more effectively without experiencing issues with performance.
  • Improved aggregate performance: Oracle VM Server now uses ticket locks for spinlocks, which improves aggregate performance on large scale machines with more than four sockets.
  • Improved performance for Windows and Solaris guests: Microsoft Windows and Oracle Solaris guests with the HVM or PVHVM domain type can now specify local APIC vectors to use as upcall notifications for specific vCPUs. As a result, the guests can more efficiently bind event channels to vCPUs.
  • Improved workload performance: Changes to the Linux scheduler ensure that workload performance is optimized in this release.
  • Improved grant locking: Xen-netback multi-queue improvements take advantage of the grant locking enhancements that are now available in Oracle VM Server Release 3.4.2.
  • Guest disk I/O performance improvements: Block scalability is improved through the implementation of the Xen block multi-queue layer.

Usability

Oracle VM Manager Rule for Live Migration
To prevent failure of live migration, and subsequent issues with the virtual machine environment, a rule has been added to Oracle VM Manager, as follows:

Oracle VM Manager does not allow you to perform a live migration of a virtual machine to or from any instance of Oracle VM Server with a Xen release earlier than xen-4.3.0-55.el6.22.18. This rule applies to any guest OS.

Table 3.1 Live Migration Paths between Oracle VM Server Releases using Oracle VM Manager Release 3.4.2

 

Where the live migration path depends on the Xen release, you should review the following details:

Xen Release (from)                         Xen Release (to)                Live Migration Available?
xen-4.3.0-55.el6.x86_64                    xen-4.3.0-55.el6.0.17.x86_64    No
xen-4.3.0-55.el6.22.18.x86_64 and newer    xen-4.3.0-55                    Yes

For example, as a result of this live migration rule, all virtual machines in an Oracle VM server pool running Oracle VM Server Release 3.3.2 with Xen version xen-4.3.0-55.el6.22.9.x86_64 must be stopped before migrating to Oracle VM Server Release 3.4.2.

Tip

Run the following command on Oracle VM Server to find the Xen version:

# rpm -qa | grep "xen"
PVHVM hot memory modification
As of this release, it is possible to modify the memory allocated to running PVHVM guests without a reboot. Additionally, Oracle VM Manager now allows you to set the allocated memory to a value that is different to the maximum memory available.

Note
  • Hot memory modification is supported on x86-based PVHVM guests running on Linux OS and guests running on Oracle VM Server for SPARC. For x86-based PVHVM guests running on Oracle Solaris OS, you cannot change the memory if the virtual machine is running.
  • See the Oracle VM Paravirtual Drivers for Microsoft Windows documentation for information about the availability of hot memory modification on PVHVM guests that are running a Microsoft Windows OS. You must use a Windows PV Driver that supports hot memory modification or you must stop the guest before you modify the memory.
  • Oracle VM supports hot memory modification through Oracle VM Manager only. If you have manually created unsupported configurations, such as device passthrough, hot memory modification is not supported.

Security

  • Oracle MySQL patch update: This release of Oracle VM includes the July 2016 Critical Patch Update for MySQL. (23087189)
  • Oracle WebLogic patch update: This release of Oracle VM includes the July 2016 Critical Patch Update for WebLogic. (23087185)
  • Oracle Java patch update: This release of Oracle VM includes the July 2016 Critical Patch Update for Java. (23087198).
  • Xen security advisories: The following Xen security advisories are included in this release:
    • XSA-154 (CVE-2016-2270)
    • XSA-170 (CVE-2016-2271)
    • XSA-172 (CVE-2016-3158 and CVE-2016-3159)
    • XSA-173 (CVE-2016-3960)
    • XSA-175 (CVE-2016-4962)
    • XSA-176 (CVE-2016-4480)
    • XSA-178 (CVE-2016-4963)
    • XSA-179 (CVE-2016-3710 and CVE-2016-3712)
    • XSA-180 (CVE-2014-3672)
    • XSA-182 (CVE-2016-6258)
    • XSA-185 (CVE-2016-7092)
    • XSA-187 (CVE-2016-7094)
    • XSA-188 (CVE-2016-7154)

 

 

ODA Patching – get ahead of yourself?

I was at a customer site deploying an X5-2 ODA.  They are standardizing on the 12.1.2.6.0 patch level.  Even though 12.1.2.7.0 is currently the latest, they don’t want to be on the bleeding edge.  Recall that the 12.1.2.6.0 patch doesn’t include infrastructure patches (mostly firmware), so you have to install 12.1.2.5.0 first, run the --infra patch to get the firmware and then update to 12.1.2.6.0.

 

We unpacked the 12.1.2.5.0 patch on both systems and then had an epiphany.  Why don’t we just unpack the 12.1.2.6.0 patch as well and save some time later?  What could possibly go wrong?  Needless to say, when we went to install or even verify the 12.1.2.5.0 patch it complained as follows:

ERROR: Patch version must be 12.1.2.6.0

 

Ok, so there has to be a way to clean that patch off the system so I can use 12.1.2.5.0 right?  I stumbled across the oakcli manage cleanrepo command and thought for sure that would fix things up nicely.  Ran it and I got this output:

 


[root@CITX-5ODA-ODABASE-NODE0 tmp]# oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /DOM0OAK/12.1.2.6.0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x005d/4.230.40-3739/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0097/06.00.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/Mellanox/0x1003/2.11.1280/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /DOM0HMP/2.3.4.0.1/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /DOM0IPMI/1.8.12.4/Base
Deleting the files under /JDK/1.7.0_91/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/12.1.0.1.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/12.0.0.0.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/11.2.0.4.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/GI/12.1.0.2.160119/Patches/21948354
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OVM/3.2.9/Patches/3.2.9.1
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base

 

So I assumed that this fixed the problem.  Nope…

 


[root@CITX-5ODA-ODABASE-NODE0 tmp]# oakcli update -patch 12.1.2.5.0 --verify

ERROR: Patch version must be 12.1.2.6.0

 

 

Ok, so more searching of the CLI manual and the oakcli help pages came up with bupkiss.  So I decided to do an strace of the oakcli command I had just run.  As usual, there was a LOT of garbage I didn’t care about or didn’t know what it was doing.  I did find, however, that it was reading the contents of a file that looked interesting to me:

 


[pid 5509] stat("/opt/oracle/oak/pkgrepos/System/VERSION", {st_mode=S_IFREG|0777, st_size=19, ...}) = 0
[pid 5509] open("/opt/oracle/oak/pkgrepos/System/VERSION", O_RDONLY) = 3
[pid 5509] read(3, "version=12.1.2.6.0\n", 8191) = 19
[pid 5509] read(3, "", 8191) = 0
[pid 5509] close(3) = 0
[pid 5509] fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
[pid 5509] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f159799d000
[pid 5509] write(1, "\n", 1
) = 1
[pid 5509] write(1, "ERROR: Patch version must be 12."..., 40ERROR: Patch version must be 12.1.2.6.0
) = 40
[pid 5509] exit_group(0) = ?

 

There were a dozen or so lines after that, but I had what I needed.  Apparently /opt/oracle/oak/pkgrepos/System/VERSION contains the version of the latest patch that has been unpacked.  The system software version is kept somewhere else, because after I unpacked the 12.1.2.6.0 patch, I ran an oakcli show version and it reported 12.1.2.5.0.  But the VERSION file referenced earlier said 12.1.2.6.0.  I assume when I unpacked the 12.1.2.6.0 patch, it updated this file.  So what I wound up doing was changing the VERSION file back to 12.1.2.5.0 as well as deleting the folder /opt/oracle/oak/pkgrepos/System/12.1.2.6.0.  Once I did this, everything worked as I expected.  I was able to verify and install the --infra portion of 12.1.2.5.0 and continue on my merry way.
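
For reference, the manual workaround boils down to this sketch of exactly what I described above (obviously unsupported, so use at your own risk and only if you’re stuck in the same spot):

# roll the unpacked-patch marker back and remove the 12.1.2.6.0 repo directory
echo "version=12.1.2.5.0" > /opt/oracle/oak/pkgrepos/System/VERSION
rm -rf /opt/oracle/oak/pkgrepos/System/12.1.2.6.0
# this should now pass instead of throwing the version error
oakcli update -patch 12.1.2.5.0 --verify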

 

This highlights the fact that there isn’t a known way (to me at least) to delete an unpacked patch via oakcli or any python scripts I’ve been able to find yet.  Also, as an aside, I tried just deleting the VERSION file assuming it would be rebuilt by oakcli, and it wasn’t.  I got this:

 


[root@CITX-5ODA-ODABASE-NODE0 System]# oakcli update -patch 12.1.2.5.0 --verify
ERROR : Couldn't find the VERSION file to extract the current allowed version

 

So I just recreated the file and all was good.  I was hoping that the oak software didn’t maintain some sort of binary formatted database that kept track of all this information- I think I got lucky in this case.  Hope this helps someone out in a pinch!

New Oracle ODA X6 configurations officially released today!

Today, Oracle has announced the release of two new ODA configurations squarely targeted at the S in SMB.  I blogged about this back on June 9th here.  A few differences to note:

  • Two new commands replace oakcli (oakcli is gone).
    • odacli – perform “lifecycle” activities for the ODA appliance (provisioning and configuring)
    • odaadmcli – administer and configure the running appliance attributes
  • An all-new web based user interface is used to deploy the appliance.  The command line is obviously still available but no longer required to deploy.
  • No more virtualization or shared storage on the Small and Medium configuration

 

I’m not sure if I’ll have a chance to lay hands on the new hardware any time soon but if I do I’ll definitely give first impressions here!

ODA X6-2 in the wild!


It looks like Oracle has deployed their newest server (the X6-2) into the ODA appliance lineup now.  It’s already an option on the Exadata, BDA and ZDLRA.  There are now 3 different configurations available, 2 of which don’t include shared storage and have a much lower price point.  You can also run Oracle Database SE2 or EE on the two smaller configurations; however, neither one offers the virtualization option that’s been around since the original V1 ODA.

 

Here are the 3 options:

Oracle Database Appliance X6-2S ($18k):
One E5-2630 v4 2.2GHz 10 core CPU
6.4 TB (2 x 3.2 TB) NVMe SSDs *
128 GB (4 x 32 GB) DDR4-2400 Main Memory **
Two 480 GB SATA SSDs (mirrored) for OS
Two onboard 10GBase-T Ethernet ports
Dual-port 10GbE SFP+ PCIe

Notes: 
* You can add up to 2 more NVMe SSDs for a total of 4
** An optional memory expansion kit is available that brings this configuration up to 384GB

 

Oracle Database Appliance X6-2M ($24k):
Two E5-2630 v4 2.2GHz 10 core CPUs
6.4 TB (2 x 3.2 TB) NVMe SSDs *
256 GB (8 x 32 GB) DDR4-2400 Main Memory **
Two 480 GB SATA SSDs (mirrored) for OS
Four onboard 10GBase-T Ethernet ports
Dual-port 10GbE SFP+ PCIe

Notes:
* You can add up to 2 more NVMe SSDs for a total of 4
** An optional memory expansion kit is available that brings this configuration up to 768GB

 

Oracle Database Appliance X6-2HA (?):
TBD – information about this configuration isn’t available yet.  More info coming soon!

X5-2 ODA upgrade from 12.1.2.5.0 to 12.1.2.6.0 observations


More fun with patching!  So this time I’m doing a fresh virtualized install and I decided to take my own sage advice of installing 12.1.2.5.0 first to get the firmware patches.  I ran into a bunch of other issues which will be the topic of a different post but I digress.  I got 12.1.2.5.0 fully installed, ODA_BASE deployed, everything was happy.

 

Remember that starting with version 12.1.2.6.0, you have to patch each node separately with the –local option for the infra patches.  So I started the patch on node 0 and it got almost all the way to the end at step 12 where oakd is being patched.  I ran into the “known issue” in 888888.1 item 9:

9.  During the infra patching, after step 12 completed, IPMI, HMP done, if it appeared to be hang during Patching OAK with the following two lines
                               INIT: Sending processes the TERM signal
                               INIT: no more processes left in this runlevel
JDK is not patched, the infra patching is not complete to the end.  
Workaround:  To reboot the appeared hang node manually, then run 
# oakcli update -patch 12.1.2.6 --clean

# oakcli update -patch 12.1.2.6.0 --infra --local
To let it complete the infra patch cleanly.  

I waited about 30 minutes at this step before I started to wonder, and sure enough, after checking some log files in /opt/oracle/oak/onecmd/tmp/, it thought oakd was fully patched.  What I found is that oakd gets whacked because the patch doesn’t fully complete.  After doing the reboot that’s recommended in the workaround above, sure enough oakd is not running.  What’s more- now when I boot ODA_BASE the console doesn’t get to the login prompt and you can’t do anything, even though you can ssh in just fine.  So I ran the --clean option, then kicked off the patch again.  This time it complained that oakd wasn’t running on the remote node.  It was in fact running on node1, but node0 oakd was not.  I suspect that when the ODA communicates with oakd between nodes, it’s using the local oakd to do so.

 

So I manually restarted oakd by running /etc/init.d/init.oak restart, and then oakd was running.  I rebooted ODA_BASE on node0 just to be sure everything was clean, then kicked off the infra patch again.  This time it went all the way through and finished.  The problem now is that the ODA_BASE console is non-responsive no matter what I do, so I’ll be opening a case with Oracle support to get a WTF.  I’ll update this post with their answer/solution.  If I were a betting man, I’d say they’ll tell me to update to 12.1.2.7.0 to fix it.  We’ll see…

 

As an aside- one of the things that 12.1.2.6.0 does is do an in-place upgrade of Oracle Linux 5.11 to version 6.7 for ODA_BASE.  I’ve never done a successful update that way and in fact, Red Hat doesn’t support it.  I guess I can see why they would want to do an update rather than a fresh install but it still feels very risky to me.

ODA Software v12.1.2.6.0 possible bug

I’ve been updating some X5-2 ODA’s for a customer of mine to version 12.1.2.6.0 in preparation for deployment.  I came across a stubborn bug that proved to be a little tricky to solve.  I was having a problem with ODA_BASE not fully completing the boot cycle after initial deployment and as a result I couldn’t get into the ODA_BASE console to configure firstnet.

 

The customer has some strict firewall rules for the network that these ODA’s sit in, so I also couldn’t connect to the VNC console on port 5900.  If you’re gonna implement 12.1.2.6.0 on an X5-2 ODA, I’d recommend installing 12.1.2.5.0 first, then updating to 12.1.2.6.0.  I’ve not been able to determine for sure what the problem was.  I originally thought it had something to do with firmware, because 12.1.2.6.0 doesn’t update any of the firmware due to a big ODA_BASE OS version update from 5.11 to 6.7.  Apparently the thought was that the update would either be too big or take too long to download/install, so they skip firmware in this release.  Here is the readme for the 12.1.2.6.0 update:

 

This Patch bundle consists of the Jan 2016 12.1.0.2.160119 GI Infrastructure and RDBMS – 12.1.0.2.160119, 11.2.0.4.160119, and 11.2.0.3.15.  The Grid Infrastructure release 12.1.0.2.160119 upgrade is included in this patch bundle.  The database patches 12.1.0.2.160119, 11.2.0.4.160119, 11.2.0.3.15 and 11.2.0.2.12 are included in this patch bundle. Depending on the current version of the system being patched, usually all other infrastructure components like Controller, ILOM, BIOS, and disk firmware etc will also be patched; due to this release focus on the major OS update from OL5 to OL6.7; all other infrastructure components will not be patches.  In a virtualized environment, usually all other infrastructure components on dom0 will also be patched; in this release, we skip them.  To avoid all other infrastructure components version too far behind, the minimum version required is 12.1.2.5.0 for infra and GI.  As part of the Appliance Manager 12.1.2.6, a new parameter has been introduced to control the rolling of ODA patching from one node to another.  This is the first release to provide this functionality to allow you to control when the second node to be patched.

 

I wound up having to re-image to 12.1.2.5.0 and then upgraded as I stated above.  That fixed the problem.  I’m not sure- it may have been a bad download or a glitch in the ODA_BASE bundle because I checked against our own X5-2 ODA and it has the same problem with a fresh install of 12.1.2.6.0 and all of the firmware is up to date.  In hindsight, I probably should have given more credence to this message but it would have added hours onto the install process.  As it is, it more than doubled the time because of the troubleshooting needed.  Lesson learned…

Troubleshooting ODA Network connectivity

Setting up an ODA in a customer’s environment can either go very well or give you lots of trouble.  It all depends on having your install checklist completed, reviewed by the customer and any questions answered ahead of time.

 

I've installed dozens of ODAs in a variety of configurations, ranging from a simple bare metal install to a complex virtualized install with multiple VMs and networks.  Now understand that I'm not a network engineer, nor do I play one on TV, but I know enough about networking to have a civil conversation with a 2nd level network admin without getting too far out of my comfort zone.  Knowing this, I can certainly appreciate the level of complexity involved in configuring and supporting an enterprise grade network.

 

Having said that, I find that when there are issues with a deployment, whether it's an ODA, ZFS appliance, Exadata or other device, at least 80% of the time a network misconfiguration is the culprit.  I can't tell you how many times the network admin swore up and down that the settings were correct when in fact they weren't.  It usually takes checking, re-checking and checking yet again to finally uncover the problem.  Below, I'll outline some of the snafus I've been involved with and the troubleshooting that can help resolve them.

 


  • Cabling: Are you sure the cables are all plugged into the right place?

If you didn't personally cable the ODA and you're having network issues, don't go too long without validating the cable configuration yourself.  In this case, the fancy setup charts are a lifesaver!  On the X5-2 ODAs, for example, the InfiniBand private interconnect is replaced by the 10Gb fiber ethernet option if the customer needs 10Gb ethernet over fiber.  There is only one expansion slot available, so unfortunately it's either/or.  In that configuration, the private interconnect is instead carried by net0 and net1 with crossover cables (green and yellow) between the two compute nodes rather than the InfiniBand cables.  This can be missed very easily.  Also make sure the storage cables are all connected to the proper ports for your configuration, whether it's one storage shelf or two.  Mistakes here are typically caught shortly after deploying the OS image, virtualized or bare metal: there's a storagetopology check that runs during the install process and will catch most cabling mistakes, but best not to chance it; you can also run it by hand, as sketched below.
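If memory serves, the same check can be invoked manually through oakcli; treat the exact component name as an assumption and list the available checks first if it doesn't match:

    # List the validation checks oakcli knows about (to confirm the name)
    oakcli validate -l

    # Run the storage cabling/topology validation on its own
    oakcli validate -c storagetopology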

  • Switch configuration: Trunk port vs. Access port

When you configure a switch port, you need to tell the switch what kind of traffic will pass through that port.  One of the important items is which network(s) the server attached to that port needs to talk on.  If you're configuring a standalone physical server, chances are you won't need to talk on more than one VLAN.  In this case, it's usually appropriate to configure the switch port as an access port.  You can still put the server on a non-default VLAN (a VLAN other than 1), but the VLAN "tags" get stripped off at the switch and the server never sees them.

If, however, you're setting up a VMware server or another machine that uses virtualization technology, it's likely that the VMs running on that server will need to talk on more than one VLAN through the same network adapter(s).  In this case, you need to set the port mode to trunk.  You then need to assign all the VLANs the server will communicate on to that trunk port.  The server is then responsible for examining the VLAN tags and passing the traffic to the appropriate destination on the server.  This is one of the areas where the switch is most often configured incorrectly: the network engineer fails to put the port in trunk mode, forgets to assign the proper VLANs to it, or neglects to set the intended native VLAN.

There is a difference between the default VLAN and a native VLAN.  The default VLAN is always present and is typically needed for intra-network device communication to take place; things like Cisco's CDP protocol use this VLAN.  The native VLAN, if configured, is treated like an access port from the perspective of the server's network adapter: the server NIC does not need a VLAN interface configured on top of it to talk on the native VLAN.  To talk on any other VLAN on that port, however, you need to configure a VLAN interface on the server to receive those tagged packets (see the sketch below).  I've not seen the native VLAN used in many configurations where more than one VLAN is needed, but it is most certainly a valid configuration.  Have the network team check these settings and make sure you understand how they should apply to your device.
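As a minimal sketch of the server side, here's what a tagged interface looks like with iproute2 on a reasonably current Linux; the interface name, VLAN ID, and address are all made up for illustration:

    # Hypothetical: net1 is cabled to a trunk port and we need VLAN 20
    ip link add link net1 name net1.20 type vlan id 20
    ip addr add 10.0.20.15/24 dev net1.20
    ip link set net1.20 up

    # Untagged (native VLAN) traffic still arrives on plain net1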

  • Switch configuration: Aggregated ports vs. regular ports

Most switches have the ability to cobble together two to as many as eight ports to provide higher throughput/utilization and redundancy at the same time.  This is referred to in different ways depending on your switch vendor: Cisco calls it EtherChannel, HP calls it Dynamic LACP trunking, and Extreme Networks refers to it as sharing (LAG).  However you refer to it, it's an implementation of a portion of the IEEE 802.3 standard (802.3ad, later republished as 802.1AX) commonly referred to as Link Aggregation or LACP (Link Aggregation Control Protocol).  Normally when you configure a pair of network interfaces on a server together, it's to provide redundancy and avoid a SPOF (Single Point Of Failure).  I'll refer to the standard Linux implementation mainly because I'm familiar with the different load balancing methods it typically employs.  This isn't to say that other OSes don't have this capability (almost all do); I'm just not very experienced with all of them.

Active-Backup (Linux bonding driver mode=1) is a very simple implementation in which a primary interface carries all traffic until that interface fails.  The traffic then moves over to the backup interface and communication is restored almost seamlessly.  There are other load balancing modes besides this one that don't require any special configuration on the switch; each has its strengths and weaknesses.

LACP, which does require a specific configuration on the switch ports involved in order to work, tends to be more performant while still maintaining redundancy.  The main reason is that there is out-of-band communication between the network driver on the server and the switch, via the multicast group MAC address 01:80:c2:00:00:02, to keep both partners up to date on the status of the link.  This allows all ports in the group to be utilized at once, with traffic distributed across the NICs in the LACP group, effectively doubling (or better) the available throughput.

The reason I bring this up is the configuration that needs to be in place on the switch if you're going to use LACP.  If you configure your network driver for Active-Backup mode but the switch ports are set to LACP, you likely won't see any packets at all on the server.  Likewise, if you have LACP configured on the server but the switch isn't properly set up to handle it, you'll get the same result.  This is another setting that commonly gets misconfigured, and other parameters such as STP (Spanning Tree Protocol), lacp_rate, and passive vs. active LACP are frequent culprits as well.  Sometimes the configuration also has to be split between two switches (again, no SPOF), in which case an MLAG configuration needs to be properly set up for LACP to work across them.  Effectively, MLAG is one way of making two switches appear as one from a network protocol perspective, and it is required to span a LACP port group across multiple switches.  The takeaway here is to have the network admin verify their configuration on the switch(es) and ports involved; a server-side sketch follows below.
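For the server side, here's a minimal sketch of an 802.3ad bond using iproute2.  Interface names are assumptions; on RHEL/OL you'd normally persist this in ifcfg files with something like BONDING_OPTS="mode=4 miimon=100 lacp_rate=1" rather than running it ad hoc:

    # Create the bond in LACP (802.3ad) mode with link monitoring
    ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast

    # Slaves must be down before they can be enslaved
    ip link set net0 down
    ip link set net1 down
    ip link set net0 master bond0
    ip link set net1 master bond0
    ip link set bond0 up

    # Verify that the switch actually negotiated LACP with us
    cat /proc/net/bonding/bond0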

  • Link speed: how fast can the server talk on the network?

Sometimes a server is capable of communicating at 10Gb/s rather than the more common 1Gb/s, over either copper or fiber media.  It used to be that you had to force switch ports to 1Gb/s in order for the server to negotiate that speed; that was back when 1Gb/s was newer and the handshake protocol between the NIC and the switch port at connection time was not as mature as it is now.  However, as a holdover from those halcyon days of yore, some network admins are still prone to setting port speeds manually rather than letting them auto-negotiate like a good network admin should.  Thus you get servers connecting at 1Gb/s when they should be running at 10Gb/s.  Just something to keep in mind if you're having speed issues; checking what actually negotiated is quick, as sketched below.
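A minimal check from the Linux side, with the interface name assumed:

    # What did this link actually negotiate?
    ethtool net1      # look at Speed, Duplex and Auto-negotiation

    # If someone hard-set the switch port, fix both ends; re-enabling
    # autonegotiation on the server side looks like this
    ethtool -s net1 autoneg on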

  • Cable Quality: what speed is your cable rated at?

There are currently four common ratings for copper ethernet cables.  They are by no means the only ones, but they are the most commonly used in datacenters, and they all come down to how fast you can safely push data through the cable.  Cat 5 is capable of transmitting up to 1Gb/s.  Cat 5e improved on Cat 5 with enhancements that limit crosstalk (interference) between the 8 strands of a standard ethernet cable.  Cat 6 and 6a are further improvements on those standards, allowing speeds of up to 10Gb/s (Cat 6 over shorter runs, Cat 6a over a full-length run).  Basically, the newer the Cat number/letter, the faster you can safely transmit data without loss or corruption.  The reason I mention this is that I've been burned on more than one occasion using Cat 5 for 1Gb/s: too much crosstalk severely limited throughput and produced a lot of errors on the interface.  Replacing the suspect cable with a new Cat 5e or better cable almost always fixed the problem.  If you're having communication problems, rule this out early on (the counters below make it easy to spot) so you're not chasing your tail in other areas.
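A bad cable usually betrays itself in the interface counters.  A quick sketch, with the interface name assumed and the exact counter names varying by NIC driver:

    # Watch for rx/tx errors and drops climbing over time
    ip -s link show net1

    # Driver-level detail; grep for the usual suspects
    ethtool -S net1 | grep -iE 'err|crc|drop'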

  • IP Networking: Ensuring you have accurate network configurations

I've had a lot of problems in this area.  The biggest issue seems to be that not all customers take the time to review and fill out the pre-install checklist, which prompts for all the networking information you'll need to do the install.  If you've been given IP information, make sure it's correct before you tear your hair out.  I've been given multiple configurations at the same customer for the same appliance, and each time something critical was wrong that kept the appliance off the network.  Configuring VLANs can be especially trying, because if you have it wrong you just won't see any traffic.  With regular non-VLAN configurations, if you put yourself on the wrong physical switch port or network, you can always sniff the network (tcpdump is now installed as part of the ODA software); that doesn't help for VLANs you aren't receiving, though tcpdump will at least show you which tags are arriving, as sketched below.  Other things to verify are your subnet mask and default gateway; if either of those is misconfigured, you're gonna have problems.  Also, as I mentioned earlier, don't assume you have to create a VLAN interface on the ODA just because you're connected to a trunked port.  Remember that native VLAN traffic is passed to the server with the VLAN tags stripped off, so it uses a regular network interface (i.e. net1).
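A few quick checks I lean on, as a sketch; the interface name and gateway address are placeholders you'd pull from your own checklist:

    # Is anything arriving at all?  -e prints link-level headers, so
    # 802.1Q tags are visible if the port is really trunked to you
    # (note: some NICs strip tags in hardware, so absence isn't proof)
    tcpdump -i net1 -nn -e

    # Show only tagged frames, to see which VLAN IDs are present
    tcpdump -i net1 -nn -e vlan

    # Sanity-check the basics from the checklist
    ip addr show net1
    ip route show
    ping 10.0.20.1    # hypothetical default gateway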

These are just some of the pitfalls you may encounter.  I hope some of this has helped!