Virtualized ODA X6-2HA – working with VMs

It’s been a while since I built a virtualized ODA with VMs on a shared repo, so I thought I’d go through the basic steps.

  1. install the OS
    1. install Virtual ISO image
    2. configure networking
    3. install ODA_BASE patch
    4. deploy ODA_BASE
    5. configure networking in ODA_BASE
    6. deploy ODA_BASE with configurator
  2. create a shared repository.  This is where your specific situation comes into play.  Depending on your hardware you may have more or less space in DATA or RECO.  Your DBA will be able to tell you how much they need for each, and where you can borrow a few terabytes (or however much you need) for your VMs
  3. (optionally) create a separate shared repository to store your templates.  This depends on how many of the same kind of VM you’ll be deploying.  If it makes no sense to keep the templates around once you create your VMs, then don’t bother with this step
  4. import template into repository
    1. download the assembly file from Oracle (it will unzip into an .ova archive file)
    2. ***CRITICAL*** copy the .ova to /OVS on either node’s DOM0, not into ODA_BASE
    3. import the assembly (point it to the file sitting in DOM0 /OVS)
  5. modify template config as needed (# of vCPUs, Memory, etc)
  6. clone the template to a VM
  7. add networks to the VM (usually net1 for the first public network, net2 for the second, and net3+ for any VLANs you’ve created)
  8. boot VM and start console (easiest way is to VNC into ODA_BASE and launch it from there)
  9. set up your hostname, networking, etc. the way you want it
  10. reboot VM to ensure changes persist
  11. rinse and repeat as needed
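To make the repo/template/VM steps concrete, here’s roughly what they look like in oakcli.  The repo name, sizes, template name and assembly filename below are all made-up examples, and flag syntax can shift between ODA releases, so verify against `oakcli -h` on your own system:

```shell
# Step 2: create a shared repository (name, diskgroup and size are examples)
oakcli create repo vmrepo1 -dg DATA -size 2000G

# Step 4: import the assembly after copying the .ova to /OVS on a node's DOM0
oakcli import vmtemplate ol7tmpl -assembly /OVS/my_template.ova -repo vmrepo1 -node 0

# Step 5: adjust the template's resources before cloning
oakcli configure vmtemplate ol7tmpl -vcpu 4 -memory 16G

# Step 6: clone the template to a VM
oakcli clone vm vm01 -vmtemplate ol7tmpl -repo vmrepo1 -node 0

# Step 7: attach the first public network
oakcli modify vm vm01 -addnetwork net1

# Step 8: boot the VM and open its console (run from within ODA_BASE over VNC)
oakcli start vm vm01
oakcli show vmconsole vm01
```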

If you need to configure HA, preferred node or any other things, this is the time to do it.
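Preferred node, failover and autostart behavior are all set through `oakcli configure vm`; the flags below are from memory, so double-check them against your version’s docs:

```shell
# Prefer node 0, allow failover within the ODA, start the VM with the appliance
oakcli configure vm vm01 -prefnode 0 -failover oda -autostart always
```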


ODA Software – Closed for Business!

I’ve deployed a number of these appliances over the last couple of years, both virtualized and bare metal.  When people realize that Oracle Linux is running under the hood, they sometimes think it’s ok to throw rpmforge up in there and have at it.  What’s worse, one customer actually tried to do a yum update on the OS itself from the Oracle public YUM repo!  Ack….


I guess I can see wanting to stay patched to the latest available kernel or version of tools, but it needs to be understood that this appliance is a closed ecosystem.  The beauty of patching the ODA is that I don’t have to chase down all the firmware updates for HDD/SSD/NVMe disks, ILOM, BIOS, etc.  That legwork has already been done for me.  Plus, the fact that all the patches are tested as a unit on each platform lets me sleep better at night.  Sure, the patches take about 4-5 hours all said and done, but when you’re done, you’re done!  I’m actually wondering if Oracle will eventually implement busybox or something like it for the command-line interface to hide the OS layer from end users.  With their move to a web interface for provisioning the ODA X6-2S/M/L, it seems they’ve taken a step in that direction.
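For anyone who hasn’t run one, the bundle patch workflow itself is short.  The version string below is just a placeholder for whatever bundle you’ve actually staged, and the exact steps belong to your release’s patch README:

```shell
# See what's currently installed (component and firmware versions)
oakcli show version -detail

# Stage the downloaded bundle, then patch each layer (placeholder version)
oakcli unpack -package /tmp/oda_patch_bundle.zip
oakcli update -patch 12.1.2.10.0 --server
oakcli update -patch 12.1.2.10.0 --storage
oakcli update -patch 12.1.2.10.0 --database
```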


If you decide to add repositories to your ODA in order to install system utilities like sysstat and such, it’s generally ok, but I need to say this: the Oracle hard line states that no additional software should be installed on the ODA at all.  In support of that statement, I’ll say that I’ve had problems patching when the Oracle public YUM repo is configured, and I also ran into the expired RHN key error that started rearing its ugly head at the beginning of 2017.  Both are easily fixed, but why put yourself in that position in the first place?
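If you do add repos, cheap insurance before any patch window is to disable them all first.  A sketch, assuming the stock yum-utils package is present:

```shell
# Show which repos are enabled, then disable them all before patching
yum repolist enabled
yum-config-manager --disable \*

# (Or simply set enabled=0 in the relevant /etc/yum.repos.d/*.repo files)
```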


Also, in closing, I’d like to recommend to all my customers/readers that you make it a priority to patch your ODA at least once a year.  There are real ramifications to being out of date that have bitten folks.  I can think of one case where a customer’s ODA hadn’t been updated in 3-4 years.  The customer experienced multiple hard drive failures within a week or two, and because they had their ODA loaded to the hilt, the ASM rebuild impacted performance dramatically.  The reason the drives failed so close to each other (and, more importantly, the way they failed) was outdated disk firmware.  Newer firmware was available that changed how disk failure was handled: it was more sensitive to “blips” and failed the disk out instead of letting it stay in service.  As a result, the disk was dying for a while and causing degraded performance.  Another reason the disks probably failed early-ish is the amount of load being placed on the system.  Anywho… just remember to patch, ok?


Putting the Oracle SPARC M7 Chip through its paces

From time to time I get an opportunity to dive under the hood of some pretty cool technologies in my line of work.  Being an Oracle Platinum Partner, Collier IT specializes in Oracle-based hardware and software solutions.  On the hardware side we work with Exadata, the Oracle Database Appliance and the Oracle ZFS Appliance, just to name a few.  We have a pretty nice lab that includes our own Exadata and ODA, and just recently a T7-2.


Featuring the new SPARC M7 chip, released in October of 2015 with Software in Silicon technology, the M7-x and T7-x server lines represent a huge leap forward in Oracle Database performance.  The difference between the M7 and T7 servers is basically size and power.  The chip itself is called M7, not to be confused with the server model M7-x; the T7-x servers use the same M7 processor.  Hopefully that clears up any confusion going forward.  Here’s a link to a datasheet that outlines the server line in more detail.


In addition to faster on-chip encryption and real-time data integrity checking, SQL query acceleration provides an extremely compelling use case for consolidation while maintaining a high level of performance and security with virtually no overhead.  The SPARC line of processors has come a very long way indeed since its infancy.  Released in late 1987, it was designed from the start to provide a highly scalable architecture around which to build compute packages ranging from embedded processors all the way up to large server-based CPUs, all using the same core instruction set.  The name SPARC stands for Scalable Processor ARChitecture.  Based on the RISC (Reduced Instruction Set Computer) architecture, operations are designed to be as simple as possible.  This helps achieve nearly one instruction per CPU cycle, which allows for greater speed and simplicity of hardware.  It also promotes consolidation of other functions, such as memory management or floating-point operations, onto the same chip.


Some of what the M7 chip is doing has actually been done in principle for decades.  Applications such as hardware video acceleration or cryptographic acceleration leverage instruction sets hard-coded into the processor itself, yielding incredible performance.  Think of it as a CPU that has only one job in life: to do one thing and do it very fast.  Modern CPUs such as Intel’s x86 line have many, many jobs to perform, and they have to juggle all of them at once.  They are very powerful, but because of the sheer number of jobs they’re asked to perform, they don’t really excel at any one thing.  Call them a jack of all trades and master of none.  What a dedicated hardware accelerator does for video playback, for example, is what Oracle is doing with database operations such as SQL in the M7 chip.  The M7 is still a general-purpose CPU, but one with the ability to execute database-related instructions in hardware at machine-level speeds with little to no overhead.  Because of this, the SPARC M7 is able to outperform other general-purpose processors that have to timeshare those types of instructions along with all the other workloads they’re being asked to perform.


A great analogy is comparing an athlete who competes in a decathlon to a sprinter.  The decathlete is very good at running fast, but needs to be proficient in nine other areas of competition.  Because of this, the decathlete cannot possibly be as fast as the sprinter, who focuses on doing just one thing and being the best at it.  In the same vein, the M7 chip performs SQL instructions like a sprinter.  The same applies to encryption and real-time data compression.


Having explained this concept, we can now get into practical application.  The most common use case will be for accelerating Oracle Database workloads.  I’ll spend some time digging into that in my next article.  Bear in mind that there are also other applications such as crypto acceleration and hardware data compression that are accelerated as well.


Over the past few weeks, we’ve been doing some benchmark comparisons between three very different Oracle Database hardware configurations: the Exadata (X5), the Oracle Database Appliance (X5) and an Oracle T7-2.  There is a white paper that Collier IT is in the process of developing, which I will be a part of.  Because the data is not yet fully analyzed, I can’t go into specifics on the results.  What I can say is that the T7-2 performed amazingly well from a price/performance perspective compared to the other two platforms.


Stay tuned for more details on a new test with the S7 and a Nimble CS-500 array, as well as a more in-depth look at how the onboard acceleration works, including some practical examples.
