Monday, December 7, 2009

Restoring data from snapshot through snaprestore in NetApp

OK, so now you have allocated the correct snap reserve space, configured snapshot schedules and snap autodelete, and users have access to their snapshots and can recover their data without any involvement from the backup team. Everyone is happy, so you are happy. Then, all of a sudden on a Friday evening, you get a call from the VP of marketing, crying on the phone: he has lost all the data on his network drive, Windows shows a recovery time of 2 hours, but he wants his 1 GB PST accessible right now because he is on a VPN call with a client and needs to pull some old mails from it. That's nothing abnormal; he had a lot of data, and to recover it Windows has to read everything from the snapshot and write it back to the network drive, which obviously takes time. So what do you say? Do you tell him to navigate to the PST and recover just that file (which shouldn't take much time on a fast connection) and then restore the rest, or do you say "OK, I have recovered all your data" while still on the phone, and become the hero?

Well, I must say I would like to use the opportunity to become the hero with a minute or less of work, but before we do, there are a few things to note.

For volume snaprestore:

  • The volume must be online and must not be a mirror.
  • When reverting the root volume, the filer will be rebooted.
  • Non-root volumes do not require a reboot; however, when reverting a non-root volume, all ongoing access to the volume must be terminated, just as when a volume is taken offline.

For single-file snaprestore:

  • The volume used for restoring the file must be online and must not be a mirror.
  • If restore_as_path is specified, the path must be a full path to a filename, and must be in the same volume as the volume used for the restore.
  • Files other than normal files and LUNs are not restored. This includes directories (and their contents), and files with NT streams.
  • If there is not enough space in the volume, the single file snap restore will not start.
  • If the file already exists (in the active file system), it will be overwritten with the version in the snapshot.

There are two ways to restore data: first, system administrators can use the “snap restore” command, invoked through SMO, SMVI, FilerView or the system console; second, end users can restore data themselves by copying a file from the .snapshot or ~snapshot directory, or by using the restore (Previous Versions) function in Windows XP or newer. Restoring data through the snap restore command is very quick (seconds), even for TBs of data. The syntax for snap restore is given below.

“snap restore [-f] [-t vol | file] [-s <snapshot_name>] [-r <restore_as_path>] <volume_name> | <restore_file_path>”

If you don’t want to restore the data to a different place, drop the “-r <restore_as_path>” argument and the filer will replace the current file with the version in the snapshot; and if you don’t provide a snapshot name, the system will show you all the available snapshots and prompt you to select the one from which you want to restore the data.

Here’s the simplest form of the command, as an example of recovering a file.

testfiler> snap restore -t file /vol/testvol/RootQtree/test.pst

WARNING! This will restore a file from a snapshot into the active filesystem. If the file already exists in the active filesystem, it will be overwritten with the contents from the snapshot.

Are you sure you want to do this? yes

The following snapshots are available for volume testvol:

date            name
------------    ---------
Nov 17 13:00    hourly.0
Nov 17 11:00    hourly.1
Nov 17 09:00    hourly.2
Nov 17 00:00    weekly.0
Nov 16 21:00    hourly.3
Nov 16 19:00    hourly.4
Nov 16 17:00    hourly.5
Nov 16 15:00    hourly.6
Nov 16 00:00    nightly.0
Nov 15 00:00    nightly.1
Nov 14 00:00    nightly.2
Nov 13 00:00    nightly.3
Nov 12 00:00    nightly.4
Nov 11 00:00    nightly.5
Nov 10 00:00    weekly.1
Nov 09 00:00    nightly.6
Nov 03 00:00    weekly.2
Oct 27 00:00    weekly.3

Which snapshot in volume testvol would you like to revert the file from? nightly.5

You have selected file /vol/testvol/RootQtree/test.pst, snapshot nightly.5

Proceed with restore? yes
testfiler>
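Since the -r option was described above but not demonstrated, here is a hedged sketch of restoring the same file to an alternate path in the same volume instead of overwriting it; the recovered filename is just a placeholder, and the command still prompts for confirmation:

testfiler> snap restore -t file -s nightly.5 -r /vol/testvol/RootQtree/test_recovered.pst /vol/testvol/RootQtree/test.pst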

Sunday, December 6, 2009

Snapshot configuration in NetApp

OK, first of all, let me admit that my last post sounded more like a sales pitch than something technical, though I am not a NetApp employee and nobody pays me to blog. I must also agree that whatever I tried to show there is pretty much available from other vendors as well, so it was more about general awareness of the technology than about a particular vendor. In this post I will talk about Snapshot configuration and other related functions in NetApp, so let's start.

What is snapshot copy?

A Snapshot copy is a frozen, read-only image of a volume or an aggregate that captures the state of the file system at a point in time. Each volume can hold a maximum of 255 Snapshot copies at one time.

Snapshots can be taken by the system on a pre-defined schedule, by Protection Manager policies, SMO, SMVI or FilerView, or manually by running a command at the system console or through custom scripts.

How to disable client access to snapshot copy?

To disable client access to the .snapshot directory, use the “vol options <volume_name> nosnapdir on” command.

Notes:

  • Please DO NOT use any of the snap family of commands without a volume name, as it may drive the CPU to its peak on systems with lots of volumes and large numbers of snapshots, and it can hang the system, which may result in a system panic.
  • Use “-A” if you want to run these commands against an aggregate, and replace the volume name with the aggregate name.

How to Configure Snapshots through system console?

It’s always recommended that when you provision a volume you look at the snapshot reserve and schedule, because by default 20% of a new volume's space is reserved for snapshots, and most of the time you need to change that for efficient use of space and snapshots. Always ask the requester what the rate of change is, how many snapshots he wants access to, and when he wants snapshots to be taken, because if you take a snapshot of Oracle data while the database is not in hot-backup mode it's just an utter waste, and the same goes for VMs.

So once you have those details, do a little calculation and then use these commands to configure it.

  1. ‘snap reserve <volume name> <snapshot reserve size in % volume size, gb, mb or kb>’
    Example:
    ‘snap reserve testvol 10’
    This command will allocate 10% of space for snapshots on volume “testvol”
  2. ‘snap sched <volume name> <weekly> <nightly> <hourly>[@<list of hours>]’
    Example:
    ‘snap sched testvol 4 7 7@9,11,13,15,17,19,21’

    This command defines the automatic snapshot schedule: here you specify how many weekly, nightly and hourly snapshots you want to retain, as well as at what times the hourly snapshots are taken. In the given example, volume testvol keeps 4 weekly, 7 nightly and 7 hourly snapshots, with the hourly snapshots taken at 9, 11, 13, 15, 17, 19 and 21 hours system local time. Please make sure that ‘nosnap’ is set to off in the volume options. A quick verification sketch follows below.
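Once both settings are in place, it's worth reading them back to confirm; a quick sketch using the example values above (the reserve size shown is illustrative):

testfiler> snap sched testvol
Volume testvol: 4 7 7@9,11,13,15,17,19,21
testfiler> snap reserve testvol
Volume testvol: current snapshot reserve is 10% or <reserve size> k-bytes.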

How to take snapshots manually?

To take a snapshot manually, you can run the command below.

“snap create <volume name> <snapshot name>”

Here the volume name is the name of the volume you want to take a snapshot of, and the snapshot name is the name you want to identify the snapshot with.
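For example, before a risky change you might take a one-off snapshot like this; the snapshot name is just a hypothetical label, and the command returns silently on success:

testfiler> snap create testvol before_patch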

How to list snapshots?

You can check the snapshots associated with any volume with the command

“snap list <volume name>”

After issuing the above command you will get output similar to the following.

testfiler> snap list testvol

Volume testvol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 36% (36%)    0% ( 0%)  Dec 02 16:00  hourly.0
 50% (30%)    0% ( 0%)  Dec 02 12:00  hourly.1
 61% (36%)    0% ( 0%)  Dec 02 08:00  hourly.2
 62% ( 5%)    0% ( 0%)  Dec 02 00:01  nightly.0
 69% (36%)    0% ( 0%)  Dec 01 20:00  hourly.3
 73% (36%)    0% ( 0%)  Dec 01 16:00  hourly.4
 77% (36%)    0% ( 0%)  Dec 01 00:01  nightly.1

What if you are running low on snap reserve?

Sometimes, due to an excessive rate of change in the data, the snapshot reserve fills up quickly and snapshots spill over into the data area of the volume. To remediate this you have to either extend the volume or delete old snapshots.

To resize the volume use the “vol size” command, and to delete old snapshots use the “snap delete” command, which I will cover in the next section. However, before deleting, if you want to check how much free space you can gain from a snapshot, use the command below.

“snap reclaimable <volume name> <snapshot name> | <snapshot name>…”

Running the above command gives output like the one below, and you can add multiple snapshot names one after another if deleting a single snapshot does not free enough space. Please note that you should select snapshots for deletion only in oldest-to-newest order, otherwise blocks freed by deleting a middle snapshot will still be referenced by the neighbouring snapshots.

testfiler> snap reclaimable testvol nightly.1 hourly.4
Processing (Press Ctrl-C to exit) ............
snap reclaimable: Approximately 9572 Kbytes would be freed.
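If you decide to grow the volume instead of (or as well as) deleting snapshots, the “vol size” command mentioned above can add space incrementally; a minimal sketch, with the amount purely illustrative:

testfiler> vol size testvol +20g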

How to delete snapshot?

To delete a snapshot, use the snap delete command with the volume name and snapshot name as below.

“snap delete <volume name> <snapshot name>”

Running this command will print information similar to the following on screen.

testfiler> snap delete testvol hourly.5

Wed Dec 2 16:58:29 GMT [testfiler: wafl.snap.delete:info]: Snapshot copy hourly.5 on volume testvol NetApp was deleted by the Data ONTAP function snapcmd_delete. The unique ID for this Snapshot copy is (67, 3876).

How to know what is the actual rate of change?

Sometimes, on a particular volume, you will keep running out of snap reserve space because snapshots fill it up long before the old ones expire and get removed by the autodelete function (if you have configured it), and you will want to resize the snap reserve accurately to avoid any issues. To check the actual rate of change, in KB per hour, calculated across all the snapshots or between two snapshots on a given volume, you can use the snap delta command.

“snap delta <volume name> [<1st snapshot name> <2nd snapshot name>]”

testfiler> snap delta testvol

Volume testvol
working...

From Snapshot   To                   KB changed  Time         Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
hourly.0        Active File System   30044          0d 00:28  63176.635
hourly.1        hourly.0             552            0d 02:00  276.000
hourly.2        hourly.1             552            0d 01:59  276.115
weekly.0        hourly.2             628            0d 09:00  69.680
hourly.3        weekly.0             468            0d 03:00  155.956
hourly.4        hourly.3             552            0d 01:59  276.115
hourly.5        hourly.4             500            0d 02:00  249.895
hourly.6        hourly.5             548            0d 01:59  274.038
nightly.0       hourly.6             560            0d 14:59  37.334
nightly.1       nightly.0            700            0d 23:59  29.171
nightly.2       nightly.1            5392           1d 00:00  224.666
nightly.3       nightly.2            820            0d 23:59  34.172
nightly.4       nightly.3            2920           0d 23:59  121.687
nightly.5       nightly.4            880            1d 00:00  36.666
weekly.1        nightly.5            1111956        1d 00:00  46307.381
nightly.6       weekly.1             632            1d 00:00  26.333
weekly.2        nightly.6            42420          6d 00:00  294.583
weekly.3        weekly.2             8892           7d 00:00  52.928

Summary...

From Snapshot   To                   KB changed  Time         Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
weekly.3        Active File System   1209016       21d 13:29  2336.320
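If you would rather have the filer expire old snapshots on its own when the reserve fills up, the snap autodelete function mentioned earlier can be enabled per volume. A hedged sketch (the available options and defaults shown by “show” vary by Data ONTAP release, so review them before enabling it):

testfiler> snap autodelete testvol show
testfiler> snap autodelete testvol on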

 

That was all about configuring, creating and deleting snapshots, but what good is it if you don't know how to restore the data from the snapshots you have put so much work into? So, in the next post I will cover how to restore data from a snapshot using the snap restore command.

Friday, December 4, 2009

Snapshots in NetApp

Volumes and data:

The volume used for the test was a flexible volume named ‘buffer_aggr12’, and the data was the “My Documents” folder from my laptop, kept in sync with a CIFS share created from volume buffer_aggr12 using Microsoft's sync tools.

Snapshot configuration:

Scheduled snapshots were configured at 9, 11, 13, 15, 17, 19 and 21 hours, with a retention of 4 weekly, 7 nightly and 7 hourly snapshots and a 20% snapshot reserve.

The coolest part of snapshots is the flexibility: as an administrator, once you have configured them you no longer have to look after them, as snapshots are taken on the defined schedule, and if you have configured ‘snap autodelete’ it will also purge expired snapshots according to your retention period. So effectively you never have to worry about managing hundreds of old snapshots lying in the volume and eating up space (except when the rate of change of the data overshoots and snapshots start spilling into the data area). As an end user, you have backups a click away, because snapshots integrate well with the shadow copy (Previous Versions) feature of Windows 2000, XP and Vista, and you can recover your files whenever you need to.

Here’s the configuration of snapshot for my test volume ‘buffer_aggr12’

AMSNAS02> snap sched buffer_aggr12
Volume buffer_aggr12: 4 7 7@9,11,13,15,17,19,21

AMSNAS02> snap reserve buffer_aggr12
Volume buffer_aggr12: current snapshot reserve is 20% or 157286400 k-bytes.

As I had been running this test for months, there were enough snapshots for me to play with, and as you can see below they go all the way back to 20th July, a 4-week-old snapshot that I can recover at any time with just a right click.

How to recover files or folders from snapshot:

There are two ways to recover the data from snapshots.

As an end user, you can recover your data from Windows Explorer by right-clicking in an empty space while you are in the share in which you lost your data. Here's an example.

a) This is a screenshot of my share folder; as you can see, my PST file is corrupted and shows 0 KB.
[Screenshot: share folder with the corrupted PST file showing 0 KB]

b) To recover it, right-click on any empty area and go to Properties > Previous Versions; it shows all the snapshots taken for this folder, as shown in the screenshots below.

[Screenshots: Previous Versions tab listing the snapshots available for the folder]

c) At this point I can either revert the whole folder to a previous state or copy it to another location to recover a deleted file, but here my goal is to revert a corrupted file rather than recover a deleted one. So I just right-click on that file and go to the Previous Versions tab of the Properties dialogue box. It shows the changes captured by snapshots at different times, so I select the date I want to revert to and click Restore.
[Screenshot: Previous Versions tab for the file, with the Restore button]

d) Now it starts replacing the corrupted file with the copy held in the snapshot. It takes a long time because the file in question is over 1 GB and I am on a WAN link, so it's slow; there is another way to do it, recovering directly from the filer console, which completes in seconds but unfortunately is not available to end users.
[Screenshot: restore progress dialog]

e) And here are the before and after screenshots.

[Screenshots: the folder before and after the restore]

As an administrator, you can recover a file, folder or whole volume within seconds, because when it is done from the filer console the system doesn't have to copy the old file from the snapshot to a temporary location, delete the current file and then fix up the recovered file's metadata; it just changes the block pointers internally, so it's blazing fast. Here's an example.

a) In this test I will again use the same corrupted PST file, but this time we will recover it from the console. First log in to the filer and do a snap list to see which snapshots are available.
AMSNAS02> snap list buffer_aggr12
Volume buffer_aggr12
working…
  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Aug 14 17:00  hourly.0
  0% ( 0%)    0% ( 0%)  Aug 14 15:00  hourly.1
 40% (40%)    0% ( 0%)  Aug 14 13:00  hourly.2
 40% ( 0%)    0% ( 0%)  Aug 14 11:00  hourly.3
 40% ( 0%)    0% ( 0%)  Aug 14 09:00  hourly.4
 40% ( 0%)    0% ( 0%)  Aug 14 00:00  nightly.0
 40% ( 0%)    0% ( 0%)  Aug 13 21:00  hourly.5
 40% ( 0%)    0% ( 0%)  Aug 13 19:00  hourly.6
 40% ( 0%)    0% ( 0%)  Aug 13 00:00  nightly.1
 41% ( 0%)    0% ( 0%)  Aug 12 00:00  nightly.2
 57% (39%)    0% ( 0%)  Aug 11 00:00  nightly.3
 57% ( 0%)    0% ( 0%)  Aug 10 00:00  weekly.0
 57% ( 0%)    0% ( 0%)  Aug 09 00:00  nightly.4
 57% ( 0%)    0% ( 0%)  Aug 08 00:00  nightly.5
 57% ( 0%)    0% ( 0%)  Aug 07 00:00  nightly.6
 57% ( 0%)    0% ( 0%)  Aug 03 00:00  weekly.1
 65% (35%)    0% ( 0%)  Jul 27 00:00  weekly.2
 65% ( 0%)    0% ( 0%)  Jul 20 00:00  weekly.3

b) Now, to recover the file, run the command below and it restores the file in just a second.
AMSNAS02> snap restore -t file -s nightly.5 /vol/buffer_aggr12/RootQtree/test.pst

WARNING! This will restore a file from a snapshot into the active filesystem.  If the file already exists in the active filesystem, it will be overwritten with the contents from the snapshot.

Are you sure you want to do this? yes

You have selected file /vol/buffer_aggr12/RootQtree/test.pst, snapshot nightly.5

Proceed with restore? yes

AMSNAS02>

c) Here's a screenshot of my folder confirming the file is back in its previous state.
[Screenshot: folder showing the restored PST file]

Now, as you can see, it is quite easy to use and very useful too, but to keep snapshots you need some extra space reserved in the volume, especially if your data changes very frequently, because more change means more space needed to store the changed blocks. Things get more complicated if you are trying to take a snapshot of a VM, Exchange or database volume, because before the snapshot is taken the application has to put itself into hot-backup mode so that a consistent copy can be made. Most applications have this functionality, but you have to use a script or a SnapManager product so that when the application is prepared it can tell the filer to take the snapshot, and once the snapshot is taken the filer can tell the application to resume normal activity.
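As a rough illustration of that coordination, here is a minimal shell sketch of the idea for an Oracle database. The filer name (testfiler), volume name (oravol) and the use of SSH from the database host are assumptions; in practice you would use SnapManager for Oracle or a hardened, vendor-supported script rather than this:

#!/bin/sh
# Hypothetical names: testfiler = filer, oravol = volume holding the datafiles.
SNAP="oracle_hot_$(date +%Y%m%d_%H%M)"

# 1. Put the database in hot-backup mode so the datafiles stay consistent.
sqlplus -s "/ as sysdba" <<'EOF'
alter database begin backup;
exit
EOF

# 2. Ask the filer to take the snapshot while the database is quiesced.
ssh testfiler snap create oravol "$SNAP"

# 3. Bring the database back to normal operation.
sqlplus -s "/ as sysdba" <<'EOF'
alter database end backup;
exit
EOF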

Saturday, November 7, 2009

Restrict snapmirror access by host and volume on NetApp

Recently one of my fellow NetApp admin friends asked me a fairly general question:

“How do you restrict your data to be copied through snapmirror?”

Like any other normal NetApp guy, my answer was the same old vanilla one:

“Go to the snapmirror.allow file and put the host name there if you have set snapmirror.access to legacy, or you can directly put host names in host=host1,host2 format in the snapmirror.access option.”

But he wanted a more granular level of permission, so my next answer was:

“You can also use snapmirror.checkip.enable, so a system merely reporting the same hostname will not be able to access the data.”

But even with that he wasn't happy, and he kept asking whether there is any way to restrict snapmirror access on a per-volume basis. At that point I said, “No, NetApp doesn't provide this level of granular access.”

So the topic stopped there, but the question stayed in my mind and kept haunting me: why isn't there any such way?

Fast forward to last week: when I had some extra time on my hands I started searching the net for this, and I was fortunate enough to find a way on the NOW site to get this working.

It is recorded in the Bugs section under Bug ID 80611, which reads:

“There is an unsupported undocumented feature of the /etc/snapmirror.allow file, such that if it is filled as follows:
    hostA:vol1
    hostA:vol29
    hostB:/vol/vol0/q42
    hostC
and "options snapmirror.access legacy" is issued, then the desired access policy will be implemented. Again note that this is unsupported and undocumented so use at your own risk.”

Yes, NetApp says there is a way to do it, but they also say it may break other functionality or may not work as expected.

Having found this, I sent the details to my friend, but unfortunately he doesn't want to try it on his production systems, and he has no test systems available.

So if any of you want to try it, or have tried it before, please share your experience in the comments.

Tuesday, September 22, 2009

NetApp NFS agent for VCS - Part 3

In the first post I wrote about why I needed this agent installed and what its features are, and in the last post I described how to configure it on the cluster nodes; that post was incomplete because it was getting very long and I had to stop, so here is the remaining, and very important, part of the configuration.

How to configure different account name in NetApp NFS agent for VCS?

Hunting around in the agent's configuration guides from Veritas and NetApp didn't reveal anything, and even their KB searches were not helpful. So I was left to find my own way and explore, which I started by creating a new, customised account on the filer just for this purpose.

Here are the actual commands I used to create it, starting with the customised role and ending with the account.

‘useradmin role add exportfs -c "To manage NFS exports from CLI" -a cli-exportfs*,cli-lock*,cli-priv*,cli-sm_mon*’
‘useradmin group add cli-exportfs-group -r exportfs -c "Group to manage NFS exportfs from CLI"’
‘useradmin user add vcsagent -g cli-exportfs-group -c "To manage NFS exports from NetApp VCS Agent"’

And here’s the account after creation

testfiler1> useradmin user list vcsagent
Name: vcsagent
Info: To manage NFS exports from NetApp VCS Agent
Rid: 131090
Groups: cli-exportfs-group
Full Name:
Allowed Capabilities: cli-exportfs*,cli-lock*,cli-priv*,cli-sm_mon*
Password min/max age in days: 0/4294967295
Status: enabled

The next thing was to give the cluster nodes limited access via the vcsagent user and revoke their root access, which was nothing more than removing the DSA keys from the /etc/sshd/root/.ssh/authorized_keys file and adding them to /etc/sshd/vcsagent/.ssh/authorized_keys.

After completing that, I headed back to the host and created a new file named config in root's .ssh directory with the content below.

Host testfiler1
    User vcsagent
    Port 22
    HostName testfiler1.lab.com

As a test I issued “ssh testfiler1 version” on a node terminal and got an access-denied error, which was perfectly fine, because now when I do ‘ssh testfiler1’ the system looks at the config file in the .ssh directory and uses the vcsagent user, which does not have permission to run the version command. Everything was looking good, so I started running tests by moving a resource from one node to another, but to my surprise they failed to make changes on the filer, and the filer audit logs showed that the agent was still using root for SSH to the filer.

Until I ran the test, I had assumed the agent simply relied on the OS for the SSH username, since NetApp has not provided any username attribute in the agent; and because I had not told the OS which account to use, when the agent executed ‘ssh testfiler1’ the OS made the SSH connection as root (the cluster node's local logged-in user).

But the failed test led me to believe the username was hardcoded in the agent script, so I started looking through the script and soon found the line below in the file NetApp_VCS.pm.

$cmd = "$main::ssh -n root\@$host '$remote_cmd'";

With that finding it was no great mystery what was going wrong and what I had to do. I just removed the word ‘root’ from the script and it started working, because now ssh uses the config file in the .ssh directory and connects as the vcsagent user. Alternatively, I could have replaced the word root with vcsagent directly in the script, to keep it simple and stay away from maintaining the config file, but I felt this approach was better.

Unfortunately, to this day there is no alternative to modifying the script, as neither NetApp nor Veritas were able to help us beyond the statement “we will raise a product enhancement request”.


Update: you also need to give the user the "security-priv-advanced" capability, so the role should look like the one below.

testfiler01> useradmin role list exportfs

Name: exportfs

Info: To manage NFS exports from CLI

Allowed Capabilities: cli-exportfs*,cli-lock*,cli-priv*,cli-sm_mon*,security-priv-advanced
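If you created the role with the commands shown earlier, a hedged way of adding the missing capability is to modify the role in place; note that “useradmin role modify” replaces the capability list, so the full list is repeated:

testfiler01> useradmin role modify exportfs -a cli-exportfs*,cli-lock*,cli-priv*,cli-sm_mon*,security-priv-advanced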


Monday, September 21, 2009

NetApp NFS agent for VCS - Part 2

In the last post I explained why I needed this agent installed and what its features are; in this post I will describe how I implemented the agent on a 4-node RHEL 5.2 VCS cluster in our test environment. As this post is centred on configuring the NetApp NFS agent for VCS, I will not talk about how to install and configure VCS on RHEL.

First I created an NFS volume on our filer testfiler1 and exported it with rw access to all 4 nodes of the cluster (lablincl1n1, lablincl1n2, lablincl1n3, lablincl1n4); to keep it simple I used sec=sys rather than Kerberos or anything else. The next step was to download the agent from the NOW site and install it on all the cluster nodes, which was pretty straightforward and well documented in the admin guide, so there were no hurdles and it went well.

Once the agent installation and volume creation were done, I started configuring the NFS share in the agent through the GUI.

I set FilerName to testfiler1 (the name of the NetApp filer exporting the NFS share), the MountPoint attribute to the local mount point, MountOptions to the Oracle-specific mount options, FilerPathName to the NFS volume path, NodeNICs to lablincl1n1, lablincl1n2, lablincl1n3, lablincl1n4 (the names of all the cluster nodes), and set ClearNFSLocks to 2 and UseSSH to 1.

I left the rest of the options untouched, as they were fine with their default values, such as FilerPingTimeout=240, RebootOption=empty, HostingFilerName=empty and RouteViaAddress=empty, along with MultiNIC and the /etc/hosts file, because NIC teaming was done at the OS level and I felt too lazy to maintain lots of IP addresses in hosts files; as a matter of fact, I knew our BIND servers are robust enough.

Note:
Please don't get confused by the HostingFilerName field, as you need it only if you are using a vfiler. If you are exporting the NFS volume from a vfiler, put the vfiler name in the FilerName field and the physical filer name (on which the vfiler is created) in HostingFilerName.

The next step was configuring SSH, which was pretty easy: just use the “ssh-keygen -t dsa” command to generate the public and private keys for root on all of your nodes, and copy their public keys into the “authorized_keys” file in the /etc/sshd/root/.ssh folder of your filer.
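For reference, a minimal sketch of that key setup from one node; the paths follow the steps just described, and appending the key on the filer with “wrfile -a” is only one assumed way of getting it into the file (editing the file over the admin CIFS/NFS share works just as well):

lablincl1n1# ssh-keygen -t dsa
lablincl1n1# cat /root/.ssh/id_dsa.pub
(copy the printed key, then on the filer append it as a single line)
testfiler1> wrfile -a /etc/sshd/root/.ssh/authorized_keys "ssh-dss <public key text> root@lablincl1n1"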

With that, the configuration was complete and everything was working as expected, within just 4 hours of effort.

At this point everything was done except one very important thing: security. Following the agent's admin guide, I had added the DSA keys to root's authorized_keys file, so anyone with root access on any of the 4 cluster nodes would also have root access on my filer, which I wasn't comfortable with. So I started looking through the agent's attributes for a way to configure a different account name for the agent to use, but to my surprise there was nothing, and none of the documents said anything about it either, so I went my own way to solve it, and it worked well after some extra effort.

As this post is getting quite long, I will cover configuring a different user name in the VCS agent in the next post.

Thursday, September 17, 2009

NetApp NFS agent for VCS

Recently we found ourselves going through a space crunch again, but this time it was on our DMX systems spinning 15k FC disks, so we started looking at the space allocation and soon found a lot of low-IOPS Oracle databases using that space; adding up their allocations came to 460 TB in total.

Wasn't that enough space to give us a few more months before placing new orders? Oh yes. So we decided to move them to NetApp boxes using 7.2k 1 TB SATA disks, not over FC or iSCSI but over NFS, as I knew NetApp provides a VCS agent that works with their NFS exports and offers some cool features. Though I had never used it, I was confident enough that it would work, so I started implementing it in our test environment.

Here are the details of its features.

The NetApp NFS client agent for VCS on Red Hat Linux/Solaris/SUSE Linux monitors mount points on NetApp storage systems. In this environment, the clustered nodes (or a single node) use the NFS protocol to access the shared volume on the NetApp storage system, and the agent carries out the commands from VCS to bring resources online, monitor their status, and take them offline as needed.

Key features of version 5.0 of the agent are given below.
  • Supports VCS 4.1 and 5.0*
  • Supports Exportfs persistency
  • Supports IPMultiNIC and MultiNICA
  • Supports Data ONTAP 7.1.x or later
  • Supports fine granularity NFS lock clearing (requires Data ONTAP 7.1.1 or later)
  • Supports communication with the storage system through SSH, in addition to RSH
  • Multithreading (NumThreads >1) is supported (requires IPMultiNIC with MultiNICA)
  • Supports automatic fencing of the export (ro access for the other nodes in the cluster) as a resource moves from one node to another
  • Supports failover of a single resource group when multiple resource groups of the same type are active on the same cluster node

Kernel Requirement
Linux Kernel 2.6.9-34.EL, 2.6.9-34.ELsmp for RHEL, 2.6.5-7.287.3-smp for SUSE

* VCS 4.1 is not supported for SUSE Linux
# With Solaris 10 local zones are also supported in addition to global zones.


In the next part I will post how to implement it, which also requires some modification to the agent script.



Saturday, September 12, 2009

SSH broken if you disable Telnet in ONTAP 7.3.1

And here's another bug, which we hit last month.

Last month, when I was setting up our new filers, I disabled telnet on the systems along with lots of other tweaking, but later, when I tried to connect to a system over SSH, it refused. Thinking I might have turned off some other deep registry feature, I went through the entire registry but couldn't find anything suspicious.

So I turned on SSH verbose logging, tried re-running the SSH setup with different key sizes, and what not, but no joy. Finally I tried enabling telnet and voila, it worked. By the time it worked it was around 7 pm, so I called it a day and left the office scratching my head.

Next morning I again started looking for something obvious I might be missing, but no, I couldn't find anything, even on the NOW site, so I opened a case with NetApp. Even the NetApp engineer could not understand why the system was behaving like this, but finally, late in the evening, he came back to me with BURT # 344484, which is fixed in 7.3.1.1P2.

Now there was a big problem, as I wasn't quite ready to upgrade my systems to a patched version, so I decided to leave telnet enabled and wait for 7.3.2 to arrive. But since then I have been getting bugged by the IT security team: I was trying to get these systems connected to the network so I could start allocating space and get rid of the space-low warnings, but they would not allow it while telnet was enabled. Finally, last week, when I noticed 7.3.2RC1 and 8.0RC1 on the NOW site, I breathed a sigh of relief, as I believe the 7.3.2 GA should now be available within a month; finally I can have my systems meet my organisation's security policy and, more importantly, get rid of the pending space allocation requests.
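For anyone hitting the same BURT, the commands involved are the standard ones below; this is only the workaround I used (leave telnet on and re-run the SSH setup) until the fixed release, not a recommendation:

testfiler> options telnet.enable on
testfiler> secureadmin setup ssh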

Friday, August 28, 2009

NetApp command line shortcuts

Just a few commands which I use frequently while on console.

CTRL+W = It deletes the word before cursor
CTRL+R = Rewrites the entire line you have entered
CTRL+U = Deletes the whole line
CTRL+A = Go to start of the line
CTRL+E = Go to end of the line
CTRL+K = Deletes everything from the cursor to the end of the line

A few more sequences exist, but I feel the arrow keys work better than pressing these:

CTRL+F = Right arrow
CTRL+B = Left arrow
CTRL+P = Up arrow
CTRL+N = Down arrow
CTRL+I = Tab key

Am I missing anything else?

Saturday, August 22, 2009

Failed disk replacement in NetApp

Disk failures are very common in a storage environment, and as storage administrators we come across this situation very often; how often depends on how many disks your storage systems have, and the more disks you manage, the more often you face it.

I have written this post with RAID-DP and FC-AL disks in mind, because RAID-DP is always better than RAID4 and we don't use SCSI loops. By design, RAID-DP protects against a double disk failure in a single RAID group, which means you will not lose data even if 2 disks in a single RG fail at the same time or one after another.

Like any other storage system, ONTAP uses a disk from the spare pool to rebuild the data from the surviving disks as soon as it encounters a failed disk, and it sends an AutoSupport message to NetApp for parts replacement. Once the AutoSupport is received by NetApp, they initiate the RMA process and the part gets delivered to the address listed for that system in NetApp's records. Once the disk arrives, you change it yourself or ask a NetApp engineer to come onsite and change it; either way, as soon as you replace the disk, the system finds the new working disk and adds it to the spare pool.

Now wasn't that pretty simple and straightforward? Oh yes, because we are using software-based disk ownership and disk auto-assignment is turned on. Much like your baby having a cold, calling up the GP himself and getting it cured rather than asking you to take care of him. But what if there are more complications?

Now I will cover what else can get in the way, and other complications.

Scenario 1:

I have replaced my drive and the LED shows green or amber, but ‘sysconfig -r’ still shows the drive as broken.

Sometimes we face this problem because the system was not able to label the disk properly, or the replacement disk itself is not good. The first thing to try is labelling the disk correctly; if that doesn't work, try replacing it with another, known-good disk, and if that too doesn't work, just contact NetApp and follow their guidelines.

To relabel the disk from "BROKEN" to "SPARE", first note down the broken disk ID, which you can get from “aggr status -r”. Now go to advanced mode with “priv set advanced” and run “disk unfail <disk_id>”. At this stage your filer will throw some 3-4 errors on the console, in syslog or as SNMP traps, depending on how you have configured it, but this was the final step and the disk should now be good, which you can confirm with “disk show <disk_id>” for detailed status or with the “sysconfig -r” command. Give it a few seconds to recognise the changed status of the disk if the change doesn't show up at first.
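Put together, the console steps look roughly like this; the disk ID 0a.23 is just a placeholder for the broken disk reported by “aggr status -r”:

testfiler> priv set advanced
testfiler*> disk unfail 0a.23
testfiler*> priv set
testfiler> sysconfig -r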

Scenario 2:

Two disks have failed in the same RAID group and I don't have any spare disk in my system.

In this case you are really in big trouble, because you should always have at least one spare disk available in your system; NetApp actually recommends a 1:28 ratio, i.e. one spare for every 28 disks. In a dual-disk-failure situation you have a very high chance of losing your data if another disk goes while you are rebuilding onto a spare or waiting for new disks to arrive.

So always keep a minimum of 2 spare disks in your system. One spare is also fine, and the system will not complain about spares, but if you leave the system with only one spare disk, the maintenance centre will not work and the system will not scan any disk for potential failure.

Coming back to the situation above, a dual disk failure with no spares available: the best bet is to ring NetApp and get the failed disks replaced ASAP, or, if you feel you are losing patience, select a disk of the same type from another healthy system, do a disk fail, remove it, and insert it in place of a failed disk on the other system.

After adding the disk to the other filer, if it shows a partial/failed volume, make sure the volume reported as partial/failed belongs to the newly inserted disk by using the “vol status -v” and “vol status -r” commands; if so, just destroy that volume with the “vol destroy” command and then zero out the disk with “disk zero spares”.
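A hedged sketch of that clean-up on the receiving filer, with “foreignvol” standing in for whatever partial/failed volume shows up on the transplanted disk (vol destroy asks for confirmation before doing anything):

testfiler> vol status -r
testfiler> vol destroy foreignvol
testfiler> disk zero spares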

This exercise will not take more than 15 minutes (except the disk zeroing, which depends on your disk type and capacity), and you will have a single disk failure in 2 systems, each of which can survive another disk failure. But what if you don't do that and keep running your system with a dual disk failure? Your system will shut itself down after 24 hours; yes, it will shut down by itself, without any failover, to get your attention. There is an option to control how long the system runs after a disk failure, but I think 24 hours is a good value, and you shouldn't increase or decrease it unless you really don't care about the data sitting there and anyone accessing it.

Scenario 3:

My drive failed, but there is no disk with an amber light.

A number of times this happens because the disk electricals have failed and the system can no longer recognise it as one of its own. In this situation you first have to find the disk name. There are a couple of ways to identify which disk has failed.

a) “sysconfig -r” – look in the broken disk list

b) From the AutoSupport message, check the failed disk ID

c) “fcadmin device_map” – look for a disk showing xxx or a “BYP” (bypassed) entry

d) In /etc/messages, look for a failed or bypassed disk warning, which gives the disk ID

Once you have identified the failed disk ID, run “disk fail <disk_name>” and check whether you now see an amber light. If not, use “blink_on <disk_name>” in advanced mode to turn on the disk's LED, or if that fails, turn on the adjacent disk's light so you can identify the disk correctly using the same blink_on command. Alternatively, you can use the led_on command instead of blink_on to turn on the LEDs of the disks adjacent to the defective disk rather than its red LED.

If you use the auto-assign function, the system will assign the disk to the spare pool automatically; otherwise use the “disk assign <disk_name>” command to assign the disk to the system.
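So a typical pass from the console looks something like this; the disk name 0a.17 is a placeholder, and blink_on lives in advanced mode:

testfiler> fcadmin device_map
testfiler> disk fail 0a.17
testfiler> priv set advanced
testfiler*> blink_on 0a.17
testfiler*> priv set
(swap the physical disk, then, if auto-assign is off)
testfiler> disk assign 0a.17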

Scenario 4:

Disk LED remains orange after replacing failed disk

This happens because you were in too much of a hurry and didn't give the system enough time to recognise the change. When the failed disk is removed from the slot, its LED remains lit until the enclosure services notice and correct it, which generally takes around 30 seconds after removing the failed disk.

As you have already done it, use the led_off command from advanced mode, or, if that doesn't work because the system believes the LED is off when it is actually on, simply turn the LED on and then back off again using the “led_on <disk_name>” and then “led_off <disk_name>” commands.

Scenario 5:

Disk reconstruction failed

There can be a number of reasons for RAID reconstruction to fail on the new disk, including enclosure access errors, a file system disk not responding or missing, a spare disk not responding or missing, or something else; however, the most common reason for this failure is outdated firmware on the newly inserted disk.

Check whether the newly inserted disk has the same firmware as the other disks; if not, update the firmware on the newly inserted disk first, and the reconstruction should then finish successfully.
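To check and update the firmware from the console, something along these lines should do; disk_fw_update with no arguments updates every disk running down-rev firmware, so check NetApp's firmware procedure for your release before running it on a busy system:

testfiler> sysconfig -a
testfiler> disk_fw_update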

Scenario 6:

Disk reconstruction stuck at 0% or failed to start

This might be an error, or due to a limitation in ONTAP, namely that no more than 2 reconstructions can run at the same time. An error you might see at times occurs when the RAID group was in a degraded state and the system went through an unclean shutdown: parity is then marked inconsistent and must be recomputed after boot. However, as parity recomputation requires all data disks to be present in the RAID group, and we already have a failed disk in the RG, the aggregate will be marked WAFL_inconsistent. You can confirm this condition with the “aggr status -r” command.

If this is the case, you have to run wafliron by issuing the command “aggr wafliron start <aggr_name>” while you are in advanced mode. Make sure you contact NetApp before starting wafliron, as it will unmount all the volumes hosted in the aggregate until the first phase of checks is complete. The time wafliron takes to complete the first phase depends on many variables, such as the size of the volumes, aggregate and RGs and the number of files, snapshots and LUNs, so you can't predict how long it will take; it might be 1 hour or it might be 4-5 hours. So if you are going to run wafliron, contact NetApp first.
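For reference, the sequence would look roughly like this, with aggr1 as a placeholder aggregate name, and only after NetApp has been engaged:

testfiler> aggr status -r
testfiler> priv set advanced
testfiler*> aggr wafliron start aggr1
testfiler*> priv set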

Thursday, August 20, 2009

NetApp NFS mount for Sun Solaris 10 (64 bit)

In this post I have tried to cover the mount options and other Solaris settings that help get higher throughput from NFS. It is aimed mainly at the 64-bit version, although these settings apply to 32-bit as well; a few extra settings come into play on 32-bit, such as super caching, as far as I can remember. I compiled this list long ago and it is still very handy whenever I get a complaint about low performance. For any further details you can look in the references section.

Mount options

rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp
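As a concrete (hypothetical) example, mounting an export with exactly those options from a Solaris 10 host would look like this; the filer, volume and mount-point names are placeholders:

# mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp testfiler:/vol/oravol /u02/oradata

or as a permanent /etc/vfstab entry:

testfiler:/vol/oravol  -  /u02/oradata  nfs  -  yes  rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp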

Kernel Tuning

On Solaris 10, the following kernel parameters should be set to at least the values shown; the middle column lists the Solaris 10 resource control that replaces the legacy /etc/system parameter, where one exists.

Parameter                 Replaced by (Resource Control)    Recommended Minimum Value
noexec_user_stack         NA                                1
semsys:seminfo_semmni     project.max-sem-ids               100
semsys:seminfo_semmns     NA                                1024
semsys:seminfo_semmsl     project.max-sem-nsems             256
semsys:seminfo_semvmx     NA                                32767
shmsys:shminfo_shmmax     project.max-shm-memory            4294967296
shmsys:shminfo_shmmni     project.max-shm-ids               100
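Where the legacy /etc/system route is still used, the entries would look along these lines; this is a sketch, and on Solaris 10 the project.* resource controls (set with projadd/projmod) are the preferred mechanism for the parameters that have one:

set noexec_user_stack=1
set semsys:seminfo_semmns=1024
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967296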

Solaris file descriptors

rlim_fd_cur – "Soft" limit on the number of file descriptors (and sockets) that a single process can have open

rlim_fd_max – "Hard" limit on the number of file descriptors (and sockets) that a single process can have open

Setting these values to 1024 is strongly recommended to avoid database crashes resulting from Solaris resource deprivation.
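Both of these are also /etc/system settings; a minimal sketch with the recommended values:

set rlim_fd_cur=1024
set rlim_fd_max=1024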

Network Settings

Parameter                  Value    Details
/dev/tcp tcp_recv_hiwat    65536    Increases the TCP receive buffer (high watermark)
/dev/tcp tcp_xmit_hiwat    65536    Increases the TCP transmit buffer (high watermark)
/dev/ge adv_pauseTX        1        Enables transmit flow control
/dev/ge adv_pauseRX        1        Enables receive flow control
/dev/ge adv_1000fdx_cap    1        Forces full duplex for GbE ports

sq_max_size – Sets the maximum number of messages allowed for each IP queue (STREAMS synchronized queue). Increasing this value improves network performance. A safe value for this parameter is 25 for each 64 MB of physical memory in the Solaris system, up to a maximum of 100. The parameter can be tuned by starting at 25 and incrementing by 10 until network performance reaches a peak.

nstrpush – Determines the maximum number of modules that can be pushed onto a stream; it should be set to 9.
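To apply these values: the /dev/tcp and /dev/ge parameters are set with ndd (not persistent across reboots, so they normally also go into an init script), while sq_max_size and nstrpush are set in /etc/system. A minimal sketch using the values above:

ndd -set /dev/tcp tcp_xmit_hiwat 65536
ndd -set /dev/tcp tcp_recv_hiwat 65536
ndd -set /dev/ge adv_pauseTX 1
ndd -set /dev/ge adv_pauseRX 1
ndd -set /dev/ge adv_1000fdx_cap 1

and in /etc/system:

set sq_max_size=25
set nstrpush=9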

References

NetApp Technical Reports TR-3633, TR-3496 and TR-3322

NetApp Knowledge Base article 7518