I’ve been working with a customer over the last few days to help them set up their ZS3-2. They have dual controllers and two storage trays backing a single storage pool. They are in an active/passive configuration with two dual-port 10GbE fiber adapters in each controller, for a total of four active paths at any one time. They plan on running Oracle Database 12c and storing the database files on the ZS3-2, and they will be using Oracle’s Direct NFS (dNFS).
Knowing a little about how dNFS works helps here: it acts as a regular NFS client, but one that runs inside the Oracle kernel. Unlike the standard OS NFS client, dNFS issues direct calls for only what it needs. It supports concurrent direct I/O, which bypasses the OS buffer cache entirely and takes a lot of overhead out of the picture. Its biggest advantage is the ability to spread network load across multiple NICs without setting up load balancing at layer 2 or 3 in the OS.
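dNFS discovers its paths through an oranfstab file on the database host. As a rough sketch (the server name, addresses, and mount points below are hypothetical, and you should check the dNFS documentation for your Oracle release), a four-path entry for a setup like this could look something like:

```
server: zs3-2
local: 192.168.10.1   path: 192.168.10.100
local: 192.168.11.1   path: 192.168.11.100
local: 192.168.12.1   path: 192.168.12.100
local: 192.168.13.1   path: 192.168.13.100
export: /export/oradata   mount: /u02/oradata
```

Each `local`/`path` pair names a client interface and the server address it reaches; dNFS spreads I/O across all listed paths and retries on the surviving ones if a path drops.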
Knowing that dNFS can automatically load balance among multiple NICs, we can use IPMP on the ZS3-2 to take full advantage of it. On the appliance side, the network config groups all four 10GbE interfaces into a single IPMP group on the active controller.
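Conceptually, the layout looks like the sketch below (the interface names and address are illustrative, not taken from the actual config). The data addresses live on the IPMP group rather than on any one interface, so they can migrate among the group’s members on failure:

```
ixgbe0 ──┐
ixgbe1 ──┼── ipmp0 ── 192.0.2.10 (+ additional data addresses)
ixgbe2 ──┤
ixgbe3 ──┘
```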
With this configuration, traffic is spread across the four 10GbE ports on the active controller. If a path fails, IPMP continues to pass traffic across the remaining paths, as does dNFS on the client side. If the active controller fails, all resources (including IP addresses) fail over to the second controller and traffic resumes from there.
I’ve also been meaning to play around with vNICs on the ZS3-2 but haven’t had a chance just yet. They let you stack multiple virtual NICs on top of a physical one to get more use out of each port. One benefit is related to management ports. In the past, you had to dedicate at least two NICs on each controller to management: both ports on each controller had to be cabled, but only port 0 on controller 1 and port 1 on controller 2 were in use at any one time. The other ports had to remain unused so that resources could fail over to the surviving node in the event of a hardware issue. Now I can just put two vNICs on top of the first onboard NIC and free up the other three for data traffic. For some customers, these onboard ports are the only data ports they have, so each one counts!
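On the appliance you would set this up through the BUI or CLI, but the idea maps to Solaris vNICs underneath. A hedged sketch in Solaris terms (the device name igb0 and the vNIC names are assumptions, not taken from an actual ZS3-2):

```
# Two management vNICs stacked on the first onboard port (igb0).
# Each controller owns one, so either can fail over cleanly
# while the remaining onboard ports carry data traffic.
dladm create-vnic -l igb0 mgmt0    # management vNIC for controller 1
dladm create-vnic -l igb0 mgmt1    # management vNIC for controller 2
```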