Monday, September 21, 2009

NetApp NFS agent for VCS - Part 2

In the last post I explained why I needed this agent and what its features are; in this post I will describe how I implemented it on a 4-node RHEL 5.2 VCS cluster in our test environment. As this post is centred on NetApp NFS agent for VCS configuration, I will not talk about how to install and configure VCS on RHEL.

First I created an NFS volume on our filer testfiler1 and exported it with rw access to all 4 nodes of the cluster (lablincl1n1, lablincl1n2, lablincl1n3, lablincl1n4); to keep it simple I used sec=sys rather than Kerberos or anything else. The next step was to download the agent from the NOW site and install it on all the cluster nodes, which was pretty straightforward, well documented in the admin guide, and went without a hitch.
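For reference, the filer-side commands looked roughly like this (Data ONTAP 7-mode syntax; the volume name oravol, aggregate aggr1 and size are made-up examples, not from my actual setup):

    # create the NFS volume (volume, aggregate and size are example values)
    testfiler1> vol create oravol aggr1 100g

    # export it read-write to the four cluster nodes with sec=sys and
    # persist the rule in /etc/exports
    testfiler1> exportfs -p rw=lablincl1n1:lablincl1n2:lablincl1n3:lablincl1n4,sec=sys /vol/oravol

    # verify the current exports
    testfiler1> exportfs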

Once the agent installation and volume creation were done, I started configuring the NFS share in the agent through the GUI.

I set FilerName to testfiler1 (the name of the NetApp filer exporting the NFS share), the MountPoint attribute to the local mount point, MountOptions to the Oracle-specific mount options, FilerPathName to the NFS volume path, NodeNICs to lablincl1n1, lablincl1n2, lablincl1n3, lablincl1n4 (the names of all the cluster nodes), ClearNFSLocks to 2, and UseSSH to 1.

I left the rest of the options untouched as their default values were fine, e.g. FilerPingTimeout=240, RebootOption=empty, HostingFilerName=empty, RouteViaAddress=empty. I also skipped MultiNIC and the /etc/hosts file, because NIC teaming was done at the OS level and I didn't want to maintain lots of IP addresses in the hosts file; besides, I knew our BIND servers were robust enough.
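If you prefer the command line to the GUI, the same attributes can be set with hares. This is only a sketch: the service group name oracle_sg, the resource name netappnfs_res, the mount point and the mount options are all assumptions for illustration, and I am assuming the resource type installed by the agent is called NetAppNFS (check the real type name on your system with hatype -list).

    # open the cluster configuration for writing
    haconf -makerw

    # add the resource (type, group and resource names are assumptions)
    hares -add netappnfs_res NetAppNFS oracle_sg

    # attributes discussed above; MountPoint and MountOptions values are examples only
    hares -modify netappnfs_res FilerName testfiler1
    hares -modify netappnfs_res MountPoint /u01/oradata
    hares -modify netappnfs_res MountOptions "rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600"
    hares -modify netappnfs_res FilerPathName /vol/oravol
    hares -modify netappnfs_res NodeNICs lablincl1n1 lablincl1n2 lablincl1n3 lablincl1n4
    hares -modify netappnfs_res ClearNFSLocks 2
    hares -modify netappnfs_res UseSSH 1

    # save the configuration and make it read-only again
    haconf -dump -makero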

Note:
Please don't get confused by the HostingFilerName field; you need it only if you are using a vfiler. If you are exporting the NFS volume from a vfiler, put the vfiler name in the FilerName field and the physical filer name (the one the vfiler is created on) in HostingFilerName.
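In that vfiler case, the only difference from the sketch above would be these two attributes (vfiler1 is a made-up name):

    # NFS volume exported from a vfiler named vfiler1 hosted on testfiler1
    hares -modify netappnfs_res FilerName vfiler1
    hares -modify netappnfs_res HostingFilerName testfiler1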

The next step was configuring SSH, which was pretty easy: just use the "ssh-keygen -t dsa" command to generate root's public and private key pair on each node, then append each node's public key to the authorized_keys file under /etc/sshd/root/.ssh on your filer.
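A minimal sketch of those steps, run as root on each cluster node. It assumes the filer's root volume is already NFS-mounted on the node at /mnt/testfiler1_root (a made-up mount point); the filer's /etc directory lives on its root volume, which is why the authorized_keys path appears under it.

    # generate a DSA key pair for root on this node (no passphrase)
    ssh-keygen -t dsa -N "" -f /root/.ssh/id_dsa

    # append this node's public key to root's authorized_keys on the filer
    # (assumes the filer root volume is mounted at /mnt/testfiler1_root)
    cat /root/.ssh/id_dsa.pub >> /mnt/testfiler1_root/etc/sshd/root/.ssh/authorized_keys

    # quick test: should print the ONTAP version without prompting for a password
    ssh testfiler1 version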

With that, the configuration was complete and everything was working as expected, after only about 4 hours of effort.

At this point everything was done except one very important thing: security. Following the agent's admin guide, I had added the DSA keys to root's authorized_keys file, which meant anyone with root access on any of the 4 cluster nodes would also have root access to my filer, and I wasn't comfortable with that. So I started looking through the agent's attributes for a way to make it use a different account, but to my surprise there was nothing, and none of the documents covered it either. So I set out to solve it my own way, and after some extra effort it worked well.

As this post is already getting quite long, I will cover configuring a different user name in the VCS agent in the next post.
