Virtualized ODA X6-2HA – working with VMs

It’s been a while since I built a virtualized ODA with VMs on a shared repo, so I thought I’d go through the basic steps.

  1. install the OS
    1. install Virtual ISO image
    2. configure networking
    3. install ODA_BASE patch
    4. deploy ODA_BASE
    5. configure networking in ODA_BASE
    6. deploy ODA_BASE with configurator
  2. create a shared repository.  This is where your specific situation comes into play.  Depending on your hardware you may have more or less space in DATA or RECO.  Your DBA will be able to tell you how much they need for each, and where you can borrow a few terabytes (or however much you need) for your VMs
  3. (optionally) create a separate shared repository to store your templates.  This all depends on how many of the same kind of VM you’ll be deploying.  If it makes no sense to keep the templates around once you create your VMs then don’t bother with this step
  4. import template into repository
    1. download the assembly file from Oracle (it will unzip into an .ova archive file)
    2. ***CRITICAL*** copy the .ova to /OVS on either node’s DOM0, not into ODA_BASE
    3. import the assembly (point it to the file sitting in DOM0 /OVS)
  5. modify template config as needed (# of vCPUs, Memory, etc)
  6. clone the template to a VM
  7. add network to VM (usually net1 for the first public network, net2 for the second, and net3+ for any VLANs you’ve created)
  8. boot VM and start console (easiest way is to VNC into ODA_BASE and launch it from there)
  9. set up your hostname, networking, etc the way you want it
  10. reboot VM to ensure changes persist
  11. rinse and repeat as needed

If you need to configure HA, preferred node or any other things, this is the time to do it.
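The repository and VM steps above can be sketched as oakcli commands run from ODA_BASE.  The repo name, size, template name, VM name and exact flag spellings below are illustrative, not taken from a real deployment; check `oakcli -h` on your software version before running anything:

```shell
# Illustrative oakcli sequence for steps 2-8 above; names and the size
# are made up for this example -- substitute your own values.
oakcli create repo vmrepo1 -dg DATA -size 4096                     # step 2: shared repo (GB)
oakcli import assembly /OVS/OVM_OL6U5.ova -repo vmrepo1 -node 0    # step 4: from DOM0 /OVS
oakcli configure vmtemplate OL6U5 -vcpu 4 -memory 8G               # step 5: resize template
oakcli clone vm testvm1 -vmtemplate OL6U5 -repo vmrepo1 -node 0    # step 6: clone to a VM
oakcli modify vm testvm1 -addnetwork net1                          # step 7: attach network
oakcli start vm testvm1                                            # step 8: boot it
```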

 

ODA Software – Closed for Business!

I’ve deployed a number of these appliances over the last couple years both virtualized and bare metal.  When people realize that Oracle Linux is running under the hood they sometimes think it’s ok to throw rpmforge up in there and have at it.  What’s worse is a customer actually tried to do a yum update on the OS itself from the Oracle public YUM repo!   Ack….

 

I guess I can see wanting to stay patched to the latest available kernel or version of tools, but it needs to be understood that this appliance is a closed ecosystem.  The beauty of patching the ODA is the fact that I don’t have to chase down all the firmware updates for HDD/SSD/NVM disks, ILOM, BIOS, etc…  That legwork has already been done for me.  Plus the fact that all the patches are tested as a unit together on each platform makes me able to sleep better at night.  Sure- the patches take about 4-5 hours all said and done, but when you’re done, you’re done!  I’m actually wondering if Oracle will eventually implement busybox or something like it for the command line interface to hide the OS layer from end users.  With their move to a web interface for provisioning of the ODA X6-2S/M/L it seems they’ve taken a step in that direction.

 

If you decide to add repositories to your ODA in order to install system utilities like sysstat and such, it’s generally ok, but I need to say this:  the Oracle hard line states that no additional software should be installed on the ODA at all.  In support of that statement, I will say that I’ve had problems patching when the Oracle public YUM repo is configured and I also ran into the expired RHN key error that started rearing its ugly head at the beginning of 2017.  Both of these are easily fixed, but why put yourself in that position in the first place?

 

Also, in closing, I’d like to recommend to all my customers/readers that you make it a priority to patch your ODA at least once a year.  There are real ramifications to being out of date that have bitten folks.  I can think of one case where the customer’s ODA hadn’t been updated in 3-4 years.  The customer experienced multiple hard drive failures within a week or two, and because they had their ODA loaded to the hilt, the ASM rebuild impacted performance dramatically.  The reason the drives failed so close to each other, and more importantly the way they failed, was outdated disk firmware.  Newer firmware was available that changed the way disk failure was handled: it was more sensitive to “blips” and failed the disk out instead of letting it stay in service.  As a result, the disk had been dying for a while and causing degraded performance.  Another reason the disks probably failed early-ish is the amount of load being placed on the system.  Anywho… just remember to patch, ok?

 

 

Create VM in Oracle VM for x86 using NFS share

I’m using OVM Manager 3.4.2 and OVM Server 3.3.2 to test an upgrade for one of our customers.  I am using StarWind iSCSI server to present the shared storage to the cluster, but in production you should use enterprise-grade hardware for this.  There’s an easier way to do this: create an HVM VM, install from an ISO stored in a repository, power the VM off, change the type to PVM, then power it back on.  That may not work with all operating systems, however, so I’m going over how to create a new PVM VM from an ISO image shared from an NFS server.

* Download ISO (I'm using Oracle Linux 6.5 64bit for this example)
* Copy ISO image to OVM Manager (any NFS server is fine)
* Mount ISO on the loopback device
# mount -o loop /var/tmp/V41362-01.iso /mnt

* Share the folder via NFS
# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

# exportfs *:/mnt/

# showmount -e
Export list for ovmm:
/mnt *

* Create new VM in OVM Manager
* Edit VM properties and configure as PVM
* Set additional properties such as memory, cpu and network
* At the boot order tab, enter the network boot path formatted like this:
  nfs:{ip address or FQDN of NFS host}:/{path to ISO image top level directory}

For example, our NFS server is 10.2.3.4 and the path where I mounted the ISO is at /mnt.  Leave the {}'s off of course:

  nfs:10.2.3.4:/mnt 

You should be able to boot your VM at this point and perform the install of the OS.
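As an aside, the ad-hoc `exportfs` above doesn’t survive a reboot.  If you expect to reuse the share, a persistent entry in /etc/exports does the same job; the options shown are a reasonable guess for a read-only ISO share, not something the original steps specified:

```shell
# Persistent version of the export above (read-only is enough for an ISO)
echo '/mnt *(ro,no_root_squash)' >> /etc/exports
exportfs -ra      # re-read /etc/exports and re-export everything
showmount -e      # /mnt should appear in the export list
```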

SSH Tunneling with PuTTY

From time to time I have a need to connect to a system inside another remote network (usually my work).  Normally I just ssh in and then jump to the machine I need to be on.  That’s all fine and dandy if you don’t need a GUI.  What if you need to be on the GUI console of the target machine inside the firewall and the firewall doesn’t allow the port you need to use?

 

Enter VNC and PuTTY.  You aren’t limited to doing this with PuTTY or VNC.  It’s just that a majority of my work is done from a Windows machine and I refuse to install the bloated Cygwin app on my machine just to get an ssh command-line session.  Bah… that’s a story for another day.  Anyway, SSH tunnels can be a bit confusing to the layperson, so I thought I’d do a graphical illustration to help.

 

In this scenario, I will be using my laptop at home to connect into a landing pad UNIX machine at work.  I will then open a tunnel to another machine inside the remote network that will establish a connection to the VNC server running on that machine.  I won’t go into how to set up a VNC server on Linux as there are plenty of tutorials out there that cover it.  The one thing I will say is: make sure you use a password when you start it up.  This is a visual example of what the connection looks like:

 

[Diagram: laptop → landing pad → remote server connection overview]

 

Here are some enlarged views so you can see what’s going on.  First we start PuTTY on the laptop.  I’ll show an example of what options you need to select inside the PuTTY connection later.  Once the tunnel is in place, fire up your favorite VNC client and point it to 127.0.0.1 or localhost on port 59001:

[Screenshot: VNC client pointed at 127.0.0.1:59001 on the laptop]

We pointed our VNC client to the address and port of the tunnel we just created, so the traffic is sent through the tunnel into the external Landing Pad and being forwarded on into the remote network:

[Diagram: VNC traffic forwarded through the tunnel to the landing pad]

Finally, the tunnel terminates on the server inside the remote network and connects the tunnel to port 5901 on that machine:

[Diagram: tunnel terminating at port 5901 on the internal server]

 

It may seem odd to connect your VNC client to the laptop’s localhost address in order to reach the target machine.  This is because you’re sending that traffic through the SSH tunnel that we set up rather than pointing it directly to the server you want to reach.

 

Now I’ll show you how to configure PuTTY to create the tunnel.  First, fire up PuTTY and populate the username and IP address of the landing pad server in our example (substitute yours, of course).  Leave the port at 22:

[Screenshot: PuTTY Session screen with the landing pad’s IP and port 22]

 

Next, scroll down on the left hand side in the Category window and select Tunnels.  Here, populate the source port (59001 in my example), the IP address of the final destination server along with the port you want to connect to on that machine (5901 in my example).  Remember, you aren’t putting the IP address of the landing pad here- we want the target server in the Destination field. Once you have the Source port and Destination fields filled in, click Add and it will pop into the window as seen below:

[Screenshot: PuTTY Tunnels screen with source port 59001 and the destination added]

 

To establish the tunnel, click Open. This will launch the PuTTY terminal and prompt you for your password.  In this screenshot, I’m using root to log in however generally it’s a good idea to use a non-privileged user to log into any machine:

 

[Screenshot: PuTTY terminal login prompt on the landing pad]

Once you see the user prompt and you’re logged in, the tunnel is now in place.  Keep in mind that this SSH session you have open is the only thing keeping that tunnel open.  If you log out of the shell, it also tears down the tunnel so keep this window open while you’re using the tunnel.

 

The next step is to launch a VNC Viewer on your laptop and point it to your local machine on port 59001:

[Screenshot: VNC Viewer connecting to localhost:59001]

Click the connect button and you should see the next window prompting you for the password you set up earlier:

[Screenshot: VNC password prompt]

Finally, once you click OK you will be brought to your VNC Desktop on the machine inside the remote network!

[Screenshot: VNC desktop of the machine inside the remote network]

 

So let’s take a step back and review what we’ve effectively done here:

 

Start VNC server:

We have to start a VNC server on the target computer, along with configuring a password to keep everyone else out.  This would have to be done separately.

 

Establish Tunnel:

We first establish the tunnel from the laptop, through the landing pad and finally to the remote server.  I’m making the obvious assumption here that you have the landing pad accessible to the internet on port 22 and that you have an SSH server running that will accept such connections.  You’re effectively logging into the landing pad just like you would on any other day.  The difference here is that we’re also telling PuTTY to set up a tunnel for us pointing to the remote server as well.  Aside from that- your login session will look and feel just the same.

 

Launch VNC Client:

We then start the VNC client on our laptop.  Normally, we would point it directly to the server we want to VNC into.  In our case, we created a tunnel that terminates on your laptop at port 59001.  So we connect our VNC client to the laptop (localhost or 127.0.0.1 should work) and point it to port 59001 instead of the standard port 5901.  The VNC client doesn’t care how the traffic is getting to the VNC server, it just does its job.
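If you’d rather not use PuTTY, the same tunnel can be built with a stock OpenSSH client in one line.  The host name and the 10.0.0.50 target address here are hypothetical stand-ins for the landing pad and internal server in this example:

```shell
# -L <local_port>:<final_target>:<target_port>, authenticated to the landing pad.
# Keep this session open; closing it tears down the tunnel, just like PuTTY.
ssh -L 59001:10.0.0.50:5901 user@landingpad.example.com
```

Then point the VNC client at localhost:59001 exactly as described above.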

Think of this SSH tunnel as kind of a wormhole if that type of thing were to actually exist.  The traditional method of connecting to your remote endpoint would be similar to pointing our space shuttle towards the Andromeda galaxy which is about 2.5 million light years away.  It’s essentially not possible to get there- similar to a firewall that is blocking us.  But what if there were a wormhole that terminated near Earth that ended in the Andromeda galaxy?  If we were to point our space shuttle into the wormhole, theoretically we would pop out the other side at our target.

 

If you do plan on doing something like this, make sure your network administrator is ok with it.  They may flag the traffic as malicious if they’re not sure where it’s coming from, and you may wind up in trouble.  I hope this helps give a basic understanding of how SSH tunnels work.

 

 

 

 

 

Linux Exploit: How Making Your System More Secure Can Hurt You

With all the security exploits in the wild these days, it pays to protect your data.  One would think that encrypting your filesystems would be a good step in the right direction.  Normally it would, but in this case not so much.  Unlike Iron Maiden’s nod to the doomsday clock in the song 2 Minutes to Midnight, now it doesn’t even take that long to compromise a system.  No- watch out, here comes Cryptsetup.  We’ll do it for you in 30 seconds!  Ok, maybe too many references to British heavy metal bands and cult classic movies, but you get the point.

 


CVE-2016-4484: Cryptsetup Initrd root Shell

This one was first revealed to “the public” about a week ago at the DeepSec security conference held in Austria, and a few days later on the web at large.  The kicker is that it only applies if you have encrypted your system partition.  There is a much more detailed writeup here on how it works, how to tell if you’re vulnerable and how to fix it.  The good news is that apparently not many people are encrypting their system partitions, because I can’t believe nobody had run across this until now, even by accident (who hasn’t left something like a book pressing the Enter key?).

Dirty COW Linux Vulnerability – CVE-2016-5195

[Image: Dirty COW logo]

A newly reported exploit in the memory mapping section of the Kernel has been reported.  It’s actually been in the kernel for years but just recently became much more dangerous due to recent changes in the kernel structure.  Here’s the alert from Red Hat’s website:

 

Red Hat Product Security has been made aware of a vulnerability in the Linux kernel that has been assigned CVE-2016-5195. This issue was publicly disclosed on October 19, 2016 and has been rated as Important.

Background Information

A race condition was found in the way the Linux kernel’s memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.

This could be abused by an attacker to modify existing setuid files with instructions to elevate privileges. An exploit using this technique has been found in the wild.

 

Here’s a great description of how the exploit works in a 12 minute youtube video

 

Patch patch patch!!

ODA Patching – get ahead of yourself?

I was at a customer site deploying an X5-2 ODA.  They are standardizing on the 12.1.2.6.0 patch level.  Even though 12.1.2.7.0 is currently the latest, they don’t want to be on the bleeding edge.  Recall that the 12.1.2.6.0 patch doesn’t include infrastructure patches (mostly firmware), so you have to install 12.1.2.5.0 first, run the --infra patch to get the firmware, and then update to 12.1.2.6.0.
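In other words, the intended sequence looks like this.  The patch zip file names are placeholders, and the flag spellings follow the oakcli usage shown later in this post; verify them against your patch README:

```shell
# Two-hop update path for an X5-2 standardizing on 12.1.2.6.0
oakcli unpack -package /tmp/oda_patch_12.1.2.5.0.zip   # placeholder file name
oakcli update -patch 12.1.2.5.0 --infra                # picks up the firmware
oakcli unpack -package /tmp/oda_patch_12.1.2.6.0.zip   # only AFTER 12.1.2.5.0 is installed!
oakcli update -patch 12.1.2.6.0 --infra --local        # per-node from 12.1.2.6.0 on
```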

 

We unpacked the 12.1.2.5.0 patch on both systems and then had an epiphany.  Why don’t we just unpack the 12.1.2.6.0 patch as well and save some time later?  What could possibly go wrong?  Needless to say, when we went to install or even verify the 12.1.2.5.0 patch it complained as follows:

ERROR: Patch version must be 12.1.2.6.0

 

Ok, so there has to be a way to clean that patch off the system so I can use 12.1.2.5.0 right?  I stumbled across the oakcli manage cleanrepo command and thought for sure that would fix things up nicely.  Ran it and I got this output:

 


[root@CITX-5ODA-ODABASE-NODE0 tmp]# oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /DOM0OAK/12.1.2.6.0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.03.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x005d/4.230.40-3739/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0097/06.00.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/Mellanox/0x1003/2.11.1280/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /DOM0HMP/2.3.4.0.1/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /DOM0IPMI/1.8.12.4/Base
Deleting the files under /JDK/1.7.0_91/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/12.1.0.1.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/12.0.0.0.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OPATCH/11.2.0.4.0/Patches/6880880
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/GI/12.1.0.2.160119/Patches/21948354
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OVM/3.2.9/Patches/3.2.9.1
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/thirdpartypkgs/Firmware/Controller/LSI-es-Logic/0x0072/11.05.02.00/Base
Deleting the files under $OAK_REPOS_HOME/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base

 

So I assumed that this fixed the problem.  Nope…

 


[root@CITX-5ODA-ODABASE-NODE0 tmp]# oakcli update -patch 12.1.2.5.0 --verify

ERROR: Patch version must be 12.1.2.6.0

 

 

Ok, so more searching of the CLI manual and the oakcli help pages came up with bupkis.  So I decided to do an strace of the oakcli command I had just run.  As usual, there was a LOT of garbage I didn’t care about or didn’t understand.  I did find, however, that it was reading the contents of a file that looked interesting to me:

 


[pid 5509] stat("/opt/oracle/oak/pkgrepos/System/VERSION", {st_mode=S_IFREG|0777, st_size=19, ...}) = 0
[pid 5509] open("/opt/oracle/oak/pkgrepos/System/VERSION", O_RDONLY) = 3
[pid 5509] read(3, "version=12.1.2.6.0\n", 8191) = 19
[pid 5509] read(3, "", 8191) = 0
[pid 5509] close(3) = 0
[pid 5509] fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
[pid 5509] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f159799d000
[pid 5509] write(1, "\n", 1
) = 1
[pid 5509] write(1, "ERROR: Patch version must be 12."..., 40ERROR: Patch version must be 12.1.2.6.0
) = 40
[pid 5509] exit_group(0) = ?

 

There were a dozen or so lines after that, but I had what I needed.  Apparently /opt/oracle/oak/pkgrepos/System/VERSION contains the version of the latest patch that has been unpacked.  The installed software version is kept somewhere else, because after I unpacked the 12.1.2.6.0 patch, oakcli show version still reported 12.1.2.5.0 while the VERSION file said 12.1.2.6.0.  I assume unpacking the 12.1.2.6.0 patch updates this file.  So what I wound up doing was changing the VERSION file back to 12.1.2.5.0 and deleting the folder /opt/oracle/oak/pkgrepos/System/12.1.2.6.0.  Once I did that, everything worked as I expected.  I was able to verify and install the --infra portion of 12.1.2.5.0 and continue on my merry way.
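Here’s that manual cleanup sketched as shell, demonstrated against a scratch directory so nothing real is touched.  On an actual ODA the path is /opt/oracle/oak/pkgrepos/System; swap REPO for that and back the VERSION file up first:

```shell
# Simulate the repo layout in a scratch dir; on a real ODA set
# REPO=/opt/oracle/oak/pkgrepos/System (and take a backup first).
REPO=$(mktemp -d)
mkdir -p "$REPO/12.1.2.6.0"
echo "version=12.1.2.6.0" > "$REPO/VERSION"

# The actual fix: drop the unpacked 12.1.2.6.0 tree and roll VERSION back.
rm -rf "$REPO/12.1.2.6.0"
echo "version=12.1.2.5.0" > "$REPO/VERSION"

cat "$REPO/VERSION"    # → version=12.1.2.5.0
```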

 

This highlights the fact that there isn’t a known way (to me at least) to delete an unpacked patch via oakcli or any python scripts I’ve been able to find yet.  Also- as an aside I tried just deleting the VERSION file assuming it would be rebuilt by oakcli and it didn’t.  I got this:

 


[root@CITX-5ODA-ODABASE-NODE0 System]# oakcli update -patch 12.1.2.5.0 --verify
ERROR : Couldn't find the VERSION file to extract the current allowed version

 

So I just recreated the file and all was good.  I was hoping that the oak software didn’t maintain some sort of binary formatted database that kept track of all this information- I think I got lucky in this case.  Hope this helps someone out in a pinch!

ODA X6-2 in the wild!

[Image: Oracle Database Appliance X6-2]

It looks like Oracle has deployed their newest server (the X6-2) into the ODA appliance lineup now.  It’s already an option on the ExaData, BDA and ZDLRA.  There are now 3 different configurations available, 2 of which don’t include shared storage and have a much lower price point.  You can also run Oracle Database SE2 or EE on the two smaller configurations however neither one offers the virtualization option that’s been around since the original V1 ODA.

 

Here are the 3 options:

Oracle Database Appliance X6-2S ($18k):
One E5-2630 v4 2.2GHz 10 core CPU
6.4 TB (2 x 3.2 TB) NVMe SSDs *
128 GB (4 x 32 GB) DDR4-2400 Main Memory **
Two 480 GB SATA SSDs (mirrored) for OS
Two onboard 10GBase-T Ethernet ports
Dual-port 10GbE SFP+ PCIe

Notes: 
* You can add up to 2 more NVMe SSDs for a total of 4
** An optional memory expansion kit is available that brings this configuration up to 384GB

 

Oracle Database Appliance X6-2M ($24k):
Two E5-2630 v4 2.2GHz 10 core CPUs
6.4 TB (2 x 3.2 TB) NVMe SSDs *
256 GB (8 x 32 GB) DDR4-2400 Main Memory **
Two 480 GB SATA SSDs (mirrored) for OS
Four onboard 10GBase-T Ethernet ports
Dual-port 10GbE SFP+ PCIe

Notes:
* You can add up to 2 more NVMe SSDs for a total of 4
** An optional memory expansion kit is available that brings this configuration up to 768GB

 

Oracle Database Appliance X6-2HA (?):
TBD – information about this configuration isn’t available yet.  More info coming soon!

X5-2 ODA upgrade from 12.1.2.5.0 to 12.1.2.6.0 observations


More fun with patching!  So this time I’m doing a fresh virtualized install and I decided to take my own sage advice of installing 12.1.2.5.0 first to get the firmware patches.  I ran into a bunch of other issues which will be the topic of a different post but I digress.  I got 12.1.2.5.0 fully installed, ODA_BASE deployed, everything was happy.

 

Remember that starting with version 12.1.2.6.0, you have to patch each node separately with the --local option for the infra patches.  So I started the patch on node 0 and it got almost all the way to the end, at step 12 where oakd is being patched.  I ran into the “known issue” in 888888.1, item 9:

9.  During the infra patching, after step 12 completed, IPMI, HMP done, if it appeared to be hang during Patching OAK with the following two lines
                               INIT: Sending processes the TERM signal
                               INIT: no more processes left in this runlevel
JDK is not patched, the infra patching is not complete to the end.  
Workaround:  To reboot the appeared hang node manually, then run 
# oakcli update -patch 12.1.2.6 --clean

# oakcli update -patch 12.1.2.6.0 --infra --local
To let it complete the infra patch cleanly.  

I waited about 30 minutes at this step before I started to wonder, and sure enough after checking some log files in /opt/oracle/oak/onecmd/tmp/ it thought oakd was fully patched.  What I found is that oakd gets whacked because the patch doesn’t fully complete.  After doing the reboot that’s recommended in the workaround above, sure enough oakd is not running.  What’s more- now when I boot ODA_BASE the console doesn’t get to the login prompt and you can’t do anything even though you can ssh in just fine.  So I ran the –clean option then kicked off the patch again.  This time it complained that oakd wasn’t running on the remote node.  It was in fact running on node1 but node0 oakd was not.  I suspect that when the ODA communicates to oakd between nodes it’s using the local oakd to do so.

 

So I manually restarted oakd by running /etc/init.d/init.oak restart and then oakd was running.  I rebooted ODA_BASE on node0 just to be sure everything was clean then kicked off the infra patch again.  This time it went all the way through and finished.  The problem now is that the ODA_BASE console is non responsive no matter what I do so I’ll be opening a case with Oracle support to get a WTF.  I’ll update this post with their answer/solution.  If I were a betting man I’d say they’ll tell me to update to 12.1.2.7.0 to fix it.  We’ll see…
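For reference, here’s the oakd restart and the sanity checks from the troubleshooting above, run on the affected node (the grep/show-version checks are my additions, not from the MOS note):

```shell
/etc/init.d/init.oak restart   # restart the oak daemon, as described above
ps -ef | grep [o]akd           # confirm the oakd process came back
oakcli show version            # should respond now that oakd is up
```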

 

As an aside- one of the things that 12.1.2.6.0 does is do an in-place upgrade of Oracle Linux 5.11 to version 6.7 for ODA_BASE.  I’ve never done a successful update that way and in fact, Red Hat doesn’t support it.  I guess I can see why they would want to do an update rather than a fresh install but it still feels very risky to me.

ODA Software v12.1.2.6.0 possible bug

I’ve been updating some X5-2 ODA’s for a customer of mine to version 12.1.2.6.0 in preparation for deployment.  I came across a stubborn bug that proved to be a little tricky to solve.  I was having a problem with ODA_BASE not fully completing the boot cycle after initial deployment and as a result I couldn’t get into the ODA_BASE console to configure firstnet.

 

The customer has some strict firewall rules for the network these ODA’s sit in, so I also couldn’t connect to the VNC console on port 5900.  If you’re gonna implement 12.1.2.6.0 on an X5-2 ODA, I’d recommend installing 12.1.2.5.0 first, then updating to 12.1.2.6.0.  I’ve not been able to determine for sure what the problem was.  I originally thought it had something to do with firmware, because 12.1.2.6.0 doesn’t update any of the firmware due to the big ODA_BASE OS version update from 5.11 to 6.7.  Apparently the thought was that the update would either be too big or take too long to download/install, so they skip firmware in this release.  Here is the readme for the 12.1.2.6.0 update:

 

This Patch bundle consists of the Jan 2016 12.1.0.2.160119 GI Infrastructure and RDBMS – 12.1.0.2.160119, 11.2.0.4.160119, and 11.2.0.3.15.  The Grid Infrastructure release 12.1.0.2.160119 upgrade is included in this patch bundle.  The database patches 12.1.0.2.160119, 11.2.0.4.160119, 11.2.0.3.15 and 11.2.0.2.12 are included in this patch bundle. Depending on the current version of the system being patched, usually all other infrastructure components like Controller, ILOM, BIOS, and disk firmware etc will also be patched; due to this release focus on the major OS update from OL5 to OL6.7; all other infrastructure components will not be patches.  In a virtualized environment, usually all other infrastructure components on dom0 will also be patched; in this release, we skip them.  To avoid all other infrastructure components version too far behind, the minimum version required is 12.1.2.5.0 for infra and GI.  As part of the Appliance Manager 12.1.2.6, a new parameter has been introduced to control the rolling of ODA patching from one node to another.  This is the first release to provide this functionality to allow you to control when the second node to be patched.

 

I wound up having to re-image to 12.1.2.5.0 and then upgraded as I stated above.  That fixed the problem.  I’m not sure- it may have been a bad download or a glitch in the ODA_BASE bundle because I checked against our own X5-2 ODA and it has the same problem with a fresh install of 12.1.2.6.0 and all of the firmware is up to date.  In hindsight, I probably should have given more credence to this message but it would have added hours onto the install process.  As it is, it more than doubled the time because of the troubleshooting needed.  Lesson learned…

How to create VLANs in DOM0 on a virtualized ODA


I’ve been working with a local customer the last week or so to help them set up a pair of ODA’s in virtualized mode.  In one of the datacenters, they needed it to be on a VLAN- including DOM0.  Normally, I just configure net1 for the customer’s network and I’m off to the races.  In this case, there are a few additional steps we have to do.

First thing you’ll need to do is install the ODA software from the install media.  Once this is done, you need to log into the console since we don’t have any IP information configured yet.  Below is a high level checklist of the steps needed to complete this activity:

 

  • Determine which VLAN DOM0 needs to be on
  • Pick a name for the VLAN interface.  It doesn’t have to be eth2 or anything like that.  I usually go with “VLAN456” if my VLAN ID is 456 so it’s self descriptive.
  • Run the following command in DOM0 on node 0 (assuming your VLAN ID is 456)

# oakcli create vlan VLAN456 -vlanid 456 -if bond0
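On the OS side, DOM0 networking is driven by standard Oracle Linux network scripts, so the result of the command above can be sketched as ifcfg files like these.  This is illustrative only (the names follow the example above; exact contents vary by ODA release, so don’t copy these verbatim):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0.456  (VLAN interface on the bond pair)
DEVICE=bond0.456
VLAN=yes
BRIDGE=VLAN456
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-VLAN456  (bridge that DOM0 and the VMs attach to)
DEVICE=VLAN456
TYPE=Bridge
ONBOOT=yes
```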

 

At this point, you’ll have the following structures in place on each compute node:

[Diagram: bond0 (eth2 + eth3) with VLAN interfaces and VLAN bridges layered on top]

 

We now have networking set up so that eth2 and eth3 are bonded together (bond0).  On top of that bond pair sits a VLAN bond interface (bond0.456), and finally a VLAN bridge (VLAN456) that forwards that network into the VM and also allows DOM0 to talk on that VLAN.   I’ve shown in the example above what it looks like to connect more than one VLAN to a bond pair.  If you need access to both VLANs from within DOM0, then each VLAN interface on each node will need an IP address assigned to it, and you’ll need to rerun configure firstnet for each interface.  Note also that if you need to access more than one VLAN from a bond pair, the switch ports that eth2 and eth3 are connected to must be set to trunked mode so they can pass more than a single VLAN.  Your network administrator will know what this means.

 

 

After that’s in place, you can continue to deploy ODA_BASE, do a configure firstnet in ODA_BASE (remember to assign the VLAN interface to ODA_BASE), yadda yadda…

 

Then, as you configure ODA_BASE and create your VM(s), the NetBack and NetFront drivers responsible for plumbing the network into the VM are created.  Here’s a completed diagram with a VM that has access to both VLANs:

[Diagram: completed configuration with a VM attached to both VLANs]

 

Happy Hunting!

 

 

UPDATE: The way this customer wound up configuring their switches was to put the ODA and ODA_BASE on the native VLAN.  In this case, even though the switch port is trunked to carry one or more VLANs at a time, the native VLAN traffic is passed untagged down to the server.  This means you do not need a special VLAN interface on the ODA to talk on that network; just use the regular net1 or net2 interface.  If you want to talk on any other VLANs through that switch port, you will need to follow the procedure above and configure a VLAN interface for each of them.

OVM Disaster Recovery In A Box (Part 4 of 5)

Now that you’ve touched a file inside the VM, we have a way to prove that the VM replicated to the other side is actually the one we created.  Apparently in my case, faith is overrated.

 

Now that I’ve fire-hosed a TON of information at you on how to set up your virtual prod and dr sites, this would be a good breaking point to talk a little about how the network looks from a 10,000 foot view.  Here’s a really simple diagram that should explain how things work.  And when I say simple, we’re talking crayon art here folks.  Really- does anyone have a link to any resources on the web or in a book that could help a guy draw better network diagrams?  Ok- I digress.. here’s the diagram:

OVM DR Network Diagram

 

One of the biggest takeaways from this diagram is something that a LOT of people get confused about.  In OVM DR, you do NOT replicate OVM Manager, the pool filesystem or the OVM servers on the DR side.  In other words, you don’t replicate the operating environment, only the contents therein (i.e. the VMs via their storage repositories).  You basically have a complete implementation of OVM at each location, just as if it were a standalone site.  The only difference is that some of the repositories are replicated.  The only other potential difference (and I don’t show it or deal with it in my simulation) is raw LUNs presented to the VMs.  Those would have to be replicated at the storage layer as well.

 

I’ve not bothered to clutter the diagram with the VM or Storage networks; you know they’re there and that they’re serving their purpose.  You can see that replication is configured between the PROD Repo LUN and a LUN in DR.  This would be considered an Active/Passive DR solution.  Some companies might have a problem with shelling out all that money for the infrastructure at the DR site and having it sit unused until a DR event occurs.  Those companies might decide to run some of their workload in the DR site and have PROD be its DR, with those DR-site repositories replicated back to PROD in the same fashion.  In this Active/Active scenario, your workflow would be pretty much the same; there are just more VMs and repositories at each site, so you need to be careful and plan well.  Here is what an Active/Active configuration would look like:

OVM DR Network Diagram active active

 

Again- my article doesn’t touch on Active/Active, but you could easily apply the stuff you learn in these 5 articles to accommodate an Active/Active configuration.  We’ll be focusing on Active/Passive, just as a reminder.  We now have a virtual machine running in PROD to facilitate our replication testing.  Make sure the VM runs and can ping the outside network so we know we have a viable machine.  Don’t expect lightning performance either; we’re running a VM inside a VM which is inside of a VM.  Not exactly recommended for production use.  Ok- DO NOT use this as your production environment.  There- all the folks who ignore the warnings on hair dryers about using them in the shower should be covered now.

 

Below are the high level steps used to fail over to your DR site.  Once you’ve accomplished this, remember failback: most people are so excited about getting the failover to work that they forget they’ll have to fail back at some point once things have been fixed in PROD.

 

FAILOVER (this works whether you’re doing a controlled failover or a real failure occurs at PROD):

  • Ensure all PROD resources are nominal and functioning properly
  • Ensure all DR resources are nominal and functioning properly
  • Ensure replication between PROD and DR ZFS appliances is in place and replicating
  • on ZFSDR1, Stop replication of PROD_REPO
  • on ZFSDR1, Clone PROD_REPO project to new project DRFAIL
  • Rescan physical disk on ovmdr1 (may have to reboot to see new LUN)
  • Verify new physical disk appears
  • Rename physical disk to PROD_REPO_FAILOVER
  • Take ownership of replicated repository in DR OVM Manager
  • Scan for VM’s in the unassigned VM’s folder
  • Migrate the VM to the DR pool
  • Start the VM
  • Check /var/tmp/ and make sure you see the ovmprd1 file that you touched when it was running in PROD.  This proves that it’s the same VM
  • Ping something on your network to establish network access
  • Ping or connect to something on the internet to establish external network access
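The marker-file check near the end of that list is easy to script.  Here’s a minimal sketch (the path is the file you touched earlier; run it inside the failed-over VM):

```shell
# Verify the failed-over VM is the same one we marked while it ran in PROD.
check_marker() {
  if [ -e "$1" ]; then
    echo "marker present"
  else
    echo "marker MISSING"
  fi
}

check_marker /var/tmp/ovmprd1
```

If the marker is missing, you’re looking at the wrong repository or an un-replicated copy.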

 

FAILBACK:

  • Ensure all PROD resources are nominal and functioning properly
  • Ensure all DR resources are nominal and functioning properly
  • Restart replication in the opposite direction from ZFSDR1 to ZFSPRD1
  • Ensure replication finishes successfully
  • Rescan physical disks on ovmprd1
  • Verify your PROD Repo LUN is still visible and in good health
  • Browse the PROD Repo and ensure your VM(s) are there
  • Power on your VM’s in PROD and ensure that whatever data was modified while in DR has been replicated back to PROD successfully.
  • Ping something on your network to establish network access
  • Ping or connect to something on the internet to establish external network access

 

Now that we’ve shown you how all this works, I’ll summarize in part 5.

OVM Disaster Recovery In A Box (Part 3 of 5)

Continued from Part 2 of 5:

OVMMDR1

  • create a VM based on Oracle Linux 64bit OS
  • rename the VM to ovmmdr1
  • Give the VM 4GB of memory and 2 CPUs
  • Give the VM a 30GB hard drive
  • configure ovmmdr1 with the following network adapters:
    adapter 1: (Host Only) DR Management
    adapter 2: (NAT Network) DRPublic

  • boot the VM and install Oracle Linux 6.5 and select the Desktop server type.
    We do this so you have a GUI to log into- if that’s not a priority for you personally, then just pick Basic server

  • configure the VM with the following information:


Host Name: ovmmdr1
IP Address (eth0): 10.1.12.110
IP Netmask: 255.255.255.0
IP Address (eth1): 192.168.12.110
IP Netmask: 255.255.255.0
Default Router: 10.1.12.1
DNS Server: 127.0.0.1
Root Password: Way2secure

  • turn off iptables and selinux:


[root@ovmmdr1 ~]# service iptables stop ; chkconfig iptables off
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@ovmmdr1 ~]#

[root@ovmmdr1 ~]# setenforce 0
setenforce: SELinux is disabled
[root@ovmmdr1 ~]#

[root@ovmmdr1 ~]# vi /etc/selinux/config

NOTE: Set SELINUX=disabled in file below:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

  • add following line to /etc/hosts

    10.1.12.110 ovmmdr1

  • set parameters in /etc/sysconfig/network

    NETWORKING=yes
    HOSTNAME=ovmmdr1
    GATEWAY=192.168.12.1

  • reboot to make selinux disabled permanently

  • attach the OVM Manager 3.3.2 install ISO to the VM
  • run the createOracle.sh script to prep the VM for the installation of OVM Manager


[root@ovmmdr1 mnt]# ./createOracle.sh
Adding group 'oinstall' with gid '54323' ...
groupadd: group 'oinstall' already exists
Adding group 'dba'
groupadd: group 'dba' already exists
Adding user 'oracle' with user id '54322', initial login group 'dba',
supplementary group 'oinstall' and home directory '/home/oracle' ...
User 'oracle' already exists ...
uid=54321(oracle) gid=54322(dba) groups=54322(dba),54321(oinstall)
Creating user 'oracle' succeeded ...
For security reasons, no default password was set for user 'oracle'.
If you wish to login as the 'oracle' user, you will need to set a password for this account.

Verifying user ‘oracle’ OS prerequisites for Oracle VM Manager …
oracle soft nofile 8192
oracle hard nofile 65536
oracle soft nproc 2048
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle soft core unlimited
oracle hard core unlimited
Setting user ‘oracle’ OS limits for Oracle VM Manager …
Altered file /etc/security/limits.conf
Original file backed up at /etc/security/limits.conf.orabackup
Verifying & setting of user limits succeeded …
Changing ‘/u01’ permission to 755 …
Changing ‘/u01/app’ permission to 755 …
Changing ‘/u01/app/oracle’ permission to 755 …
Modifying iptables for OVM
Adding rules to enable access to:
7002 : Oracle VM Manager https
54322 : Oracle VM Manager core via SSL
123 : NTP
10000 : Oracle VM Manager CLI Tool
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
iptables: Applying firewall rules: [ OK ]
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
iptables: Applying firewall rules: [ OK ]
Rules added.
[root@ovmmdr1 mnt]#

NOTE: you will need to gather the UUID of OVM Manager in production and install this instance
with that UUID. Run the following command on OVMMPRD1:

grep UUID /u01/app/oracle/ovm-manager-3/.config

  • run the runInstaller.sh script to install OVM Manager


[root@ovmmdr1 mnt]# ./runInstaller.sh -u {UUID from previous step}

Oracle VM Manager Release 3.3.2 Installer

Oracle VM Manager Installer log file:
/var/log/ovmm/ovm-manager-3-install-2015-03-04-170449.log

Please select an installation type:
1: Install
2: Upgrade
3: Uninstall
4: Help

Select Number (1-4): 1

Starting production with local database installation …

Verifying installation prerequisites …
*** WARNING: Recommended memory for the Oracle VM Manager server installation using Local MySql DB is 7680 MB RAM

One password is used for all users created and used during the installation.
Enter a password for all logins used during the installation: Way2secure
Enter a password for all logins used during the installation (confirm): Way2secure

Please enter your fully qualified domain name, e.g. ovs123.us.oracle.com, (or IP address) of your management server for SSL certification generation, more than one IP address are detected: 10.1.12.110 192.168.12.110 [ovmmdr1]: ovmmdr1

Verifying configuration …

Start installing Oracle VM Manager:
1: Continue
2: Abort

Select Number (1-2): 1

Step 1 of 9 : Database Software…
Installing Database Software…
Retrieving MySQL Database 5.6 …
Unzipping MySQL RPM File …
Installing MySQL 5.6 RPM package …
Configuring MySQL Database 5.6 …
Installing MySQL backup RPM package …

Step 2 of 9 : Java …
Installing Java …

Step 3 of 9 : Database schema …
Creating database ‘ovs’ …
Creating database ‘appfw’
Creating user ‘ovs’ for database ‘ovs’…
Creating user ‘appfw’ for database ‘appfw’

Step 4 of 9 : WebLogic and ADF…
Retrieving Oracle WebLogic Server 12c and ADF …
Installing Oracle WebLogic Server 12c and ADF …
Applying patches to Weblogic …

Step 5 of 9 : Oracle VM …
Installing Oracle VM Manager Core …
Retrieving Oracle VM Manager Application …
Extracting Oracle VM Manager Application …

Retrieving Oracle VM Manager Upgrade tool …
Extracting Oracle VM Manager Upgrade tool …
Installing Oracle VM Manager Upgrade tool …

Step 6 of 9 : Domain creation …
Creating Oracle WebLogic Server domain …
Starting Oracle WebLogic Server 12c …
Creating Oracle VM Manager user ‘admin’ …

Retrieving Oracle VM Manager CLI tool …
Extracting Oracle VM Manager CLI tool…
Installing Oracle VM Manager CLI tool …

Step 7 of 9 : Deploy …
Configuring Https Identity and Trust…
Deploying Oracle VM Manager Core container …
Configuring Client Cert Login…
Deploying Oracle VM Manager UI Console …
Deploying Oracle VM Manager Help …
Disabling HTTP access …

Step 8 of 9 : Oracle VM Tools …

Retrieving Oracle VM Manager Shell & API …
Extracting Oracle VM Manager Shell & API …
Installing Oracle VM Manager Shell & API …

Retrieving Oracle VM Manager Wsh tool …
Extracting Oracle VM Manager Wsh tool …
Installing Oracle VM Manager Wsh tool …

Retrieving Oracle VM Manager Tools …
Extracting Oracle VM Manager Tools …
Installing Oracle VM Manager Tools …
Copying Oracle VM Manager shell to ‘/usr/bin/ovm_shell.sh’ …
Installing ovm_admin.sh in ‘/u01/app/oracle/ovm-manager-3/bin’ …
Installing ovm_upgrade.sh in ‘/u01/app/oracle/ovm-manager-3/bin’ …

Step 9 of 9 : Start OVM Manager …
Enabling Oracle VM Manager service …
Shutting down Oracle VM Manager instance …
Starting Oracle VM Manager instance …
Waiting for the application to initialize …
Oracle VM Manager is running …

Please wait while WebLogic configures the applications…
Oracle VM Manager installed.

Installation Summary

Database configuration:
Database type : MySQL
Database host name : localhost
Database name : ovs
Database listener port : 49500
Database user : ovs

Weblogic Server configuration:
Administration username : weblogic

Oracle VM Manager configuration:
Username : admin
Core management port : 54321
UUID : 0004fb00000100006231d80f2ca9856b

Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI:
https://ovmmdr1:7002/ovm/console
Log in with the user ‘admin’, and the password you set during the installation.

Note that you must install the latest ovmcore-console package for your Oracle Linux distribution to gain VNC and serial console access to your Virtual Machines (VMs).
Please refer to the documentation for more information about this package.

For more information about Oracle Virtualization, please visit:
http://www.oracle.com/virtualization/

Oracle VM Manager installation complete.

Please remove configuration file /tmp/ovm_configcKjMF_.
[root@ovmmdr1 mnt]#


  • Install ovmcore-console-1.0-41.el6.noarch.rpm on ovmmdr1

yum install -y /var/tmp/ovmcore-console-1.0-41.el6.noarch.rpm

OVMPRD1
==============================================

  • create a VM based on Oracle Linux 64bit OS
  • rename the VM to ovmprd1
  • Give the VM 2GB of memory and 2 CPUs
  • Give the VM a 6GB hard drive
  • configure ovmprd1 with the following network adapters:
    adapter 1: (Host Only) Prod Management
    adapter 2: (NAT Network) ProdPublic
    adapter 3: (Host Only) Prod Storage
    adapter 4: (Host Only) Prod Storage

  • boot the VM and install OVM Server 3.3.2

  • configure the VM with the following settings:


Host Name: ovmprd1
IP Address (eth0): 10.1.11.101
IP Netmask: 255.255.255.0
Default Router: 10.1.11.1
DNS Server: 192.168.11.110
OVS Agent Password: Way2secure
Root Password: Way2secure

  • Once the VM has booted fully and is at the splash screen you can continue to the next step


* log into PROD OVM Manager
* discover the PROD OVM server
* choose Servers and VM's tab
* click on PROD OVM server
* select the "Bond Ports" perspective
* create a new bond with the following parameters:

Interface Name: bond1
Addressing: static
IP Address: 172.16.11.101
Mask: 255.255.255.0
MTU: 1500
Description: (optional)
Bonding: Load Balanced
Selected Ports: eth2 and eth3

  • choose Networking tab
  • select the Network labeled 10.1.11.0 and configure with the following parameters:

** Configuration Tab **

Name: Management
Description: (optional)
Network Uses: Check Management, Live Migrate and Cluster Heartbeat

** Ports Tab **

Port Name: bond0

** VLAN Interfaces **

None

== NETWORK CONFIGURATION ==

* Create a new network
* select "Create a Network with Ports/Bond Ports/VLAN Interfaces" radio button and click next
* Give it a name of "Storage", select the "Storage" checkbox then click next
* add bond1 from ovmprd1 and click ok
* click next - there will not be any VLAN interfaces so click Finish
* Create a new network
* select "Create a Network with Ports/Bond Ports/VLAN Interfaces" radio button and click next
* Give it a name of "Public", select the "Virtual Machine" checkbox then click next
* add eth1 from ovmprd1 and click ok
* click next - there will not be any VLAN interfaces so click Finish

== STORAGE CONFIGURATION ==

* Click on the "Storage" tab
* Discover SAN Server
* Assign name of "PROD-ZFS"
* Make sure Storage Type says "iSCSI Storage Server"
* Make sure Storage Plug-in says "Oracle Generic SCSI Plugin"
* Click next
* Add an Access Host with IP address of 172.16.11.100
* Click next
* Add ovmprd1 to Selected Servers then click next
* Edit the default access group
* On the storage initiators tab, add ovmprd1's iqn to Selected Storage Initiators then click ok
* Click Finish
* Highlight the PROD-ZFS SAN server and click Refresh SAN Server
* Verify that two physical disks are visible
* Rename the 12GB LUN to PROD-PoolFS
* Rename the 30GB LUN to PROD-REPO

== SERVER POOL CREATION ==

* Click on the Servers and VM's tab
* Create a new Server Pool called PROD
* Give it a VIP of 10.1.11.102
* Select Physical Disk radio button
* Select Storage Location and choose the PROD-PoolFS LUN
* Click next
* move ovmprd1 to Selected servers then click finish

== STORAGE REPOSITORY CREATION ==

* Click on the Repositories tab
* Create a new Repository called PROD-REPO
* Select the Physical Disk radio button under Repository Location
* Click on the magnifying glass and choose PROD-REPO then click next
* move ovmprd1 to Present to Servers then click finish

== STORAGE REPOSITORY REPLICATION ==

* Log into zfsprd1
* Click on Configuration, then services
* Edit the replication service
* Add a target

Name: ovmdr1
Hostname: 172.16.10.101
Root password: Way2secure

  • Click on shares
  • Edit the PROD-REPO Project
  • Click on the Replication sub group
  • Add an action

Target: ovmdr1
Pool: DR
Update Frequency: Scheduled
Add a Schedule for every half hour at 00 minutes after
Leave the rest of the settings at the default

  • Click Add
  • Hover over the Target near the STATUS column and click the icon of the two circular
    arrows pointing to each other. This will kick off a manual replication.
  • Monitor replication status until completely replicated (should take about 5 minutes)

==============================================

OVMDR1

  • create a VM based on Oracle Linux 64bit OS
  • rename the VM to ovmdr1
  • Give the VM 2GB of memory and 2 CPUs
  • Give the VM a 6GB hard drive
  • configure ovmdr1 with the following network adapters:
    adapter 1: (Host Only) DR Management
    adapter 2: (NAT Network) DRPublic
    adapter 3: (Host Only) DR Storage
    adapter 4: (Host Only) DR Storage
  • boot the VM and install OVM Server 3.3.2
  • configure the VM with the following settings:

Host Name: ovmdr1
IP Address (eth0): 10.1.12.101
IP Netmask: 255.255.255.0
Default Router: 10.1.12.1
DNS Server: 192.168.12.110
OVS Agent Password: Way2secure
Root Password: Way2secure

  • Once the VM has booted fully and is at the splash screen you can continue to the next step
  • Copy following configuration files from ovmprd1 to ovmdr1. Note that in your installation,
    the bridge name referenced below will be different

/etc/sysconfig/network-scripts/meta-eth1
/etc/sysconfig/network-scripts/ifcfg-{bridge} (example /etc/sysconfig/network-scripts/ifcfg-1080940192)

  • Edit /etc/sysconfig/network-scripts/ifcfg-{bridge} on ovmdr1 to make the
    MAC address match that of eth1 on ovmdr1 but leave the bridge number intact

  • log into DR OVM Manager

  • discover the DR OVM server
  • choose Servers and VM’s tab
  • click on DR OVM server
  • select the “Bond Ports” perspective
  • create a new bond with the following parameters:

Interface Name: bond1
Addressing: static
IP Address: 172.16.12.101
Mask: 255.255.255.0
MTU: 1500
Description: (optional)
Bonding: Load Balanced
Selected Ports: eth2 and eth3

  • choose Networking tab
  • select the Network labeled 10.1.12.0 and configure with the following parameters:

** Configuration Tab **
Name: Management
Description: (optional)
Network Uses: Check Management, Live Migrate and Cluster Heartbeat

** Ports Tab **
Port Name: bond0

** VLAN Interfaces **
None

== NETWORK CONFIGURATION ==
* Create a new network
* select “Create a Network with Ports/Bond Ports/VLAN Interfaces” radio button and click next
* Give it a name of “Storage”, select the “Storage” checkbox then click next
* add bond1 from ovmdr1 and click ok
* click next – there will not be any VLAN interfaces so click Finish
* Observe that the Public network is already there, so there is no need to create it. This is the
result of copying the meta file and the bridge file from OVMPRD1

== STORAGE CONFIGURATION ==
* Click on the “Storage” tab
* Discover SAN Server
* Assign name of “DR-ZFS”
* Make sure Storage Type says “iSCSI Storage Server”
* Make sure Storage Plug-in says “Oracle Generic SCSI Plugin”
* Click next
* Add an Access Host with IP address of 172.16.12.100
* Click next
* Add ovmdr1 to Selected Servers then click next
* Edit the default access group
* On the storage initiators tab, add ovmdr1’s iqn to Selected Storage Initiators then click ok
* Click Finish
* Highlight the DR-ZFS SAN server and click Refresh SAN Server
* Verify that one physical disk is visible
* Rename the 12GB LUN to DR-PoolFS

== SERVER POOL CREATION ==
* Click on the Servers and VM’s tab
* Create a new Server Pool called DR
* Give it a VIP of 10.1.12.102
* Select Physical Disk radio button
* Select Storage Location and choose the DR-PoolFS LUN
* Click next
* move ovmdr1 to Selected servers then click finish

== TEMPLATE IMPORT ON PROD ==
* Download template to /var/tmp on ovmprd1 (should be .ova format to proceed- unzip if needed)
* Start Python web server on ovmprd1

python -m SimpleHTTPServer 8080    # Python 2; on Python 3 the equivalent is: python3 -m http.server 8080

  • Navigate to the Repositories tab in ovmmprd1
  • Expand the PROD-REPO repository and highlight the Assemblies folder
  • Click on the import VM Assembly button
    VM Template URLs: http://10.1.11.101:8080/OVM_OL6U6_x86_64_PVM.ova

  • Click on assembly that was just imported

  • Create template from assembly
    Assembly Virtual Machines: {select the assembly you just imported}
    VM Template Name: t_OL6.6

  • Click ok

  • Edit the t_OL6.6 template
    Add public network
    change sizing to 1GB memory and 1 vCPU

  • Clone the t_OL6.6 template to a virtual machine
    Clone Name: ol6.6
    Target Server Pool: PROD

  • Edit ol6.6.0 VM
    Change VM name to ol6.6
    If there is an extra VNIC, remove it

  • Start VM and connect to console

  • Configure VM with hostname and root password
  • Touch a file

touch /var/tmp/ovmprd1

 

Continued in part 4 of 5

What do a Subway sandwich and a computer have in common?

[Photo: Raspberry Pi Zero next to a deck of cards for scale]

 

That’s right, you can get either one of them for $5!  Introducing the newest member of the Raspberry Pi family, the Pi Zero.  As you can see in the picture above next to the deck of cards, it’s quite a bit smaller than a footlong sandwich, but that doesn’t stop it from packing quite a punch!  The tiny new SoC (System on a Chip) computer is just that: a full-fledged computer capable of running Linux with a desktop environment.  Granted, it’s not the snappiest performer in that capacity, but it’s still super cool that it can pull it off!

 

Here are the specs (gratuitously lifted from raspberry pi’s website):

  • A Broadcom BCM2835 application processor
    • 1GHz ARM11 core (40% faster than Raspberry Pi 1)
  • 512MB of LPDDR2 SDRAM
  • A micro-SD card slot
  • A mini-HDMI socket for 1080p60 video output
  • Micro-USB sockets for data and power
  • An unpopulated 40-pin GPIO header
    • Identical pinout to Model A+/B+/2B
  • An unpopulated composite video header

 

So- on to the gotchas.  With a $5 computer, you’re going to have to make some minor investments in additional hardware to make it functional.  Here’s the list of absolute must-haves to even get up and going:

  • MicroSD card (preferably 8GB or bigger and Class 10 or faster)
  • Micro USB power source capable of providing at least 1A at 5v
  • Micro USB to USB Type A Female converter
  • USB Wi-Fi or ethernet adapter (make sure it’s supported first)

You can technically get up and running with this much hardware, however you have no video out and would have to rely on pre-configuring the OS to somehow get on the network and allow SSH to get in.  Not very functional but a working minimal config once you have it set up the way you want.  Raspberry Pi has taken the Spirit Airlines approach to the Zero.  They give you only what you need to work (the Bare Fare), allowing you to decide what extras you want to pay for and which ones you skip.  In order to configure your Zero initially, you’ll need a couple more things:

  • Mini HDMI to HDMI cable (or Mini HDMI to HDMI converter with an HDMI cable coming out of it)
  • USB Hub for mouse, keyboard and wired or wireless network connectivity
  • Keyboard and Mouse

This will get you connected to a Monitor/TV that has HDMI inputs so you can see what you’re doing.  It also provides for an input method via the keyboard and mouse.  At the end of this article, I’ll post a list of some of the essential hardware, how much I paid for it and where I got it.

 


One of the reasons I bought one of these is its ability to serve as a very capable media center device.  In one of my earlier posts, I talked about something called OpenELEC.  It’s a Linux distribution that includes Kodi, an open source home theater software package.  The OpenELEC package combines the Linux OS and Kodi into an interface that’s very well suited to a TV and remote control.  Best of all, it runs on the entire line of Raspberry Pis!  I’ll be posting soon about one of the other alternatives to OpenELEC called OSMC.  The concept is the same, however OSMC includes a full Raspbian Linux OS that isn’t as hands-off as OpenELEC.  As a result, it’s much more easily configured and customized without having to learn all the ins and outs of the underlying OpenELEC OS components.

 


The reason the Zero can pull this off is mainly its built-in hardware video decoder.  The GPU (Graphics Processing Unit) has discrete hardware functions dedicated to video encoding and decoding (recording and playback).  This means that video playback such as 1080p at 60fps doesn’t rely on the processor to decode and display the video stream, slowing other operations down.  It’s all done in hardware, very similar to the PlayStation 4, Xbox One, or any other gaming platform with dedicated graphics hardware.  All that’s left for the diminutive Zero to do is render the on-screen menus, take care of assorted housekeeping and perform other OS-related work.

 

There are a number of “cases” for the Zero on the market now.  I use the term case rather loosely because, as you can see to the left, it’s mainly two layers of plastic sandwiching the Pi Zero between them.  There are also quite a few 3D designs that “makers” can download and print on their 3D printers. Others can be bought in brick-and-mortar stores like MicroCenter or ordered online from websites like Adafruit, Raspberry Pi’s swag store, or Pimoroni (a popular “maker” website based in the UK), to name just a few.  You don’t technically need a case, but it’s a good idea to keep shorts, static discharge or any other mishaps from befalling your sweet innocent little computer.  With the tiny form factor this device affords, you can easily slap a case on it, connect it to your living room TV and attach it to the back of the set with two-way adhesive tape- nobody would even know it’s there!

 

As of this writing, I’m not aware of any MicroSD cards bigger than 512GB.  Granted, that’s a LOT of storage, but it comes at a fairly steep price: about $400 on amazon.com.  I’m sure that price will fall as higher density chips come out, but the better bet would be to cobble together some 4TB hard drives in a desktop computer and use it for network storage of your multimedia files.  This is what I’m doing and it works perfectly!  I have multiple Raspberry Pis throughout the house on each TV that can play back my entire collection of movies, pictures, music and any other multimedia I choose to host on my media server.
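For what it’s worth, one common way to share that desktop machine’s storage with the Pis is a plain NFS export.  This is only a sketch of the idea; the path and subnet below are made up, not taken from my actual setup:

```
# /etc/exports on the media server (hypothetical path and subnet)
/srv/media  192.168.1.0/24(ro,all_squash)
```

Kodi can then mount or browse the export from every Pi in the house.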

 


There are also a number of other things that any of the Raspberry Pi family is capable of doing, including interacting with the physical world via the GPIO pins.  The Zero doesn’t come with the 40-pin header required to use the GPIO, but one is easily soldered onto the board.  I have a couple of Pi 2s with temperature sensors hooked up to them, and I track the temperature via MRTG graphs.  I also hope to set up an animated Christmas light display using a SainSmart 16-channel relay board, with the Pi turning each individual circuit on and off.  It could also be used for home automation in that regard.
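To make the temperature-sensor idea concrete, here’s a sketch assuming a DS18B20 on the 1-wire bus (the sensor ID below is invented, and this may differ from the sensors I actually use).  The kernel exposes each reading as a w1_slave file whose second line ends in t= followed by the temperature in thousandths of a degree Celsius:

```shell
# Convert a DS18B20 w1_slave reading to degrees Celsius.
# Usage: w1_to_celsius /sys/bus/w1/devices/28-000005e2fdc3/w1_slave
w1_to_celsius() {
  # Grab everything after "t=" and scale from millidegrees to degrees.
  awk -F 't=' '/t=/ { printf "%.1f\n", $2 / 1000 }' "$1"
}
```

Feed the number into MRTG (or anything else) on a cron schedule and you’ve got a temperature graph.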

 

I could go on and on about all the things these little buggers can do, but this article is focused on the Zero.  Below I’ll list some of the hardware (with prices and source) that you’ll need in order to put the Zero into service.  Add it all up and you’ll have to purchase at least another $16+ worth of hardware to really get some use out of it.  Granted, I went as cheap as I could find online and didn’t factor in any shipping or tax, so your total could very well be north of $20.  For that, you can almost get a Raspberry Pi B+ that has full-size HDMI, built-in Ethernet and 4 full-size Type A USB ports as well.  But c’mon- look how small this thing is- you can hide it in a tin of Altoids and have room to spare, for cripes sakes!

 

[Amazon product images: the additional hardware needed to put the Zero into service]

 

There are a number of remote control apps that let your phone or tablet serve as the remote for Kodi.  What’s really cool is that Kodi also supports the CEC (Consumer Electronics Control) standard, which lets compatible devices send control commands over the HDMI cable.  This means that in a lot of cases, you can navigate Kodi with the remote that came with your TV, no additional hardware needed!


Testing network throughput in Linux

I was at a customer site the other day doing a POC to compare performance between an ODA and an AIX system running Oracle Database.  The network didn’t seem to be very busy at all, and I wanted to rule out throughput as a bottleneck for the performance issues.  I wound up using nc (netcat) and dd to measure raw network throughput.  Here’s an example of what I did, on two different systems:

 

System 1:

[root@forge ~]# nc -vl 2222 >/dev/null

System 2:

[root@daryl ~]# dd if=/dev/zero bs=1024k count=256 | nc -v 10.10.155.10 2222
Connection to 10.10.155.10 2222 port [tcp/EtherNet/IP-1] succeeded!
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 22.7638 s, 11.8 MB/s
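It's worth remembering that dd reports in megabytes per second while link speeds are quoted in megabits.  A quick back-of-the-envelope check of the run above (figures taken from the dd output: 256 blocks of 1024 KiB at 11.8 MB/s):

```shell
# Sanity-check the dd run above.
awk 'BEGIN {
    printf "%d bytes transferred\n", 256 * 1024 * 1024   # matches dd: 268435456
    printf "%.1f Mb/s effective\n", 11.8 * 8             # ~94 Mb/s of payload
}'
```

About 94 Mb/s of payload is just what you'd expect from a saturated 100Mb link once protocol overhead is accounted for.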

This tells me that one of the two systems is probably connected at 100Mb rather than gigabit. Further investigation revealed that I was right:

System 1:

[root@forge ~]# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: Unknown
        Supports Wake-on: uag
        Wake-on: d
        Link detected: yes

System 2:

[root@daryl ~]# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
        Link partner advertised pause frame use: Symmetric
        Link partner advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

If you look at the output above, the line that starts with “Speed:” shows the currently negotiated link speed. Sure enough, daryl is stuck at 100Mb/s, which explains the slow transfer.
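If you don’t want to eyeball the full ethtool dump, the speed line is easy to pull out with awk. This is just a sketch run against a captured sample of the output above; on a live box you’d pipe `ethtool eth0` in instead:

```shell
# Extract just the negotiated speed from ethtool-style output.
# The sample text here is assumed from the output above.
sample='Settings for eth0:
        Speed: 100Mb/s
        Duplex: Full'
echo "$sample" | awk -F': ' '/Speed:/ { print $2 }'
```

Anything other than your expected line rate (1000Mb/s here) is a quick flag that a cable, switch port or NIC is dragging the link down.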