I’ve deployed a number of these appliances over the last couple of years, both virtualized and bare metal. When people realize that Oracle Linux is running under the hood, they sometimes think it’s OK to throw rpmforge up in there and have at it. What’s worse, a customer actually tried to do a yum update on the OS itself from the Oracle public YUM repo! Ack….
I guess I can see wanting to stay patched to the latest available kernel or version of tools, but it needs to be understood that this appliance is a closed ecosystem. The beauty of patching the ODA is that I don’t have to chase down all the firmware updates for HDD/SSD/NVMe disks, ILOM, BIOS, etc. That legwork has already been done for me. Plus, the fact that all the patches are tested as a unit together on each platform lets me sleep better at night. Sure, the patches take about 4-5 hours all told, but when you’re done, you’re done! I’m actually wondering if Oracle will eventually implement busybox or something like it for the command-line interface to hide the OS layer from end users. With their move to a web interface for provisioning on the ODA X6-2S/M/L, it seems they’ve taken a step in that direction.
If you decide to add repositories to your ODA in order to install system utilities like sysstat and such, it’s generally OK, but I need to say this: Oracle’s hard line is that no additional software should be installed on the ODA at all. In support of that statement, I’ll say that I’ve had problems patching when the Oracle public YUM repo is configured, and I also ran into the expired RHN key error that started rearing its ugly head at the beginning of 2017. Both of these are easily fixed, but why put yourself in that position in the first place?
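If you do go this route, at least keep the repo disabled by default so the patching tools never see it. Here’s a minimal sketch of what I mean (the repo ID and URL below are just illustrative, not pulled from an actual ODA build):

```ini
# /etc/yum.repos.d/ol_addons.repo -- hypothetical example
[ol_addons]
name=Oracle Linux Add-ons (example)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
# Disabled by default: yum update and the ODA patch bundle won't touch it
enabled=0
```

Then install your utility with a one-time explicit enable, e.g. `yum --enablerepo=ol_addons install sysstat`. The repo stays out of the picture the rest of the time, which is exactly where you want it when the next patch bundle rolls around.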
In closing, I’d like to recommend to all my customers/readers that you make it a priority to patch your ODA at least once a year. There are actual ramifications to being out of date that have bitten folks. I can think of one case where the customer’s ODA hadn’t been updated in 3-4 years. The customer experienced multiple hard drive failures within a week or two, and because they had their ODA loaded to the hilt, the ASM rebuild was impacting performance dramatically. The reason the drives failed so close to each other, and more importantly the way they failed, was outdated disk firmware. Newer firmware was available that changed how disk failures were handled: it was more sensitive to “blips” and failed the disk out instead of letting it continue to stay in service. As a result, a disk had been dying for a while and causing degraded performance. Another reason the disks probably failed early-ish is the amount of load being placed on the system. Anywho… just remember to patch, OK?