Recently we again found ourselves facing a space crunch, but this time it was on our DMX systems spinning 15k RPM FC disks. So we started digging into space allocation and soon found a lot of low-IOPS Oracle databases using this space; adding up their allocations came to 460 TB in total.
Wow, wasn’t that enough space to give us a few more months before placing new orders? Oh yes. So we decided to move them onto NetApp boxes using 7.2k RPM 1 TB SATA disks, and not over FC or iSCSI but over NFS, since I knew NetApp provides a VCS agent that works with their NFS exports and offers some cool features. Though I had never used it, I was confident enough that it would work, so I started implementing it in our test environment.
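On the database side the volumes simply show up as NFS mounts on the cluster nodes. As a rough illustration, an /etc/fstab entry for such a volume might look like the following; the filer and volume names are made up, and the mount options follow commonly cited recommendations for Oracle over NFS on NetApp, so verify them against NetApp's best-practice documentation for your Data ONTAP and Oracle versions:

```
# Hypothetical filer/volume/path names; options are typical for Oracle
# over NFSv3 on NetApp, not taken from this setup -- verify before use.
filer01:/vol/oradata  /u01/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0
```

Under VCS the mount would normally be managed by the agent rather than fstab; the entry above is shown only to illustrate the mount options.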
Here are the details of its features.
The NetApp NFS client agent for VCS on Red Hat Linux/Solaris/SUSE Linux monitors the mount points on NetApp storage systems. In this environment, the clustered nodes (or a single node) use the NFS protocol to access the shared volumes on NetApp storage systems, and the agent carries out commands from VCS to bring resources online, monitor their status, and take them offline as needed.
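As a sketch of what this looks like in practice, a service group using the agent might contain a resource along these lines in main.cf. The resource type name (NetAppNFS) and attribute names here are assumptions for illustration, not verified against the agent's actual type definition, and the group, node, filer, and path names are made up; consult the agent documentation for the real ones:

```
// Sketch of a VCS main.cf fragment; type and attribute names are
// placeholders -- check the NetApp agent's types file.
group ora_sg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )

    NetAppNFS ora_data (
        MountPoint = "/u01/oradata"
        FilerName = "filer01"
        VolumePath = "/vol/oradata"
        )
```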
Key features of version 5.0 of the agent are given below.
- Supports VCS 4.1 and 5.0*
- Supports exportfs persistence
- Supports IPMultiNIC and MultiNICA
- Supports Data ONTAP 7.1.x or later
- Supports fine granularity NFS lock clearing (requires Data ONTAP 7.1.1 or later)
- Supports communication with the storage system through SSH, in addition to RSH
- Multithreading (NumThreads >1) is supported (requires IPMultiNIC with MultiNICA)
- Supports automatic fencing of exports: as a resource moves from one node to another, the other cluster nodes are restricted to ro access
- Supports failover of a single resource group when multiple resource groups of the same type are active on the same cluster node
- Supported Linux kernels: 2.6.9-34.EL and 2.6.9-34.ELsmp for RHEL; 2.6.5-7.287.3-smp for SUSE
* VCS 4.1 is not supported for SUSE Linux
# With Solaris 10, local zones are also supported in addition to global zones.
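The export-fencing feature in the list above comes down to the agent rewriting the export permissions on the filer as the resource moves between nodes. On Data ONTAP (7-mode) exports are managed with the exportfs command on the storage system, so conceptually the agent does something like the following over SSH or RSH; the node and volume names are made up and the exact flags the agent uses may differ:

```
# Illustrative only: after failover to node2, re-export the volume
# rw to node2 and ro to node1. Names and flags are assumptions.
ssh filer01 exportfs -io rw=node2,ro=node1 /vol/oradata
```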
In the next part I will post how to implement it, which also requires some modification to the agent script.