All of a sudden today I got the question "Is there any quick formula to calculate the usable capacity of a NetApp filer?" In all my career I had never come across such a formula, so out of curiosity I started hunting on the web and found a number of sites offering such a tool, but most of them were either dead or pointed to a tool developed by Nick Bernstein. This is a small tool available for free download: you select the disk, RAID type and some other options, and it gives you a usable-space number from your raw space. Although it gives a very close figure, and sometimes the exact number, I found it isn't the whole story, as there are some more constraints to keep in mind while quoting usable space.
So I hunted some more and came across a blog post from Jim of HP, where he criticises NetApp's less-than-clear approach of not publishing any such formula, followed by a number of comments from some NetApp big-shots.
After going through the post and the accusations each side made about the other's product, I found an easy-to-understand formula with a worked example posted by a NetApp guy. Below is the formula they gave, to which I have added the points I think are required while calculating usable size.
So first things first: as we all know, the space advertised by manufacturers is in base10, which is just a marketing trick, whereas the system works in base2, which is the actual space it sees once you connect the disk. So I will use GB in base2 throughout, which is the space we can actually use.
Let's take the example of 20 FC disks of 144GB each, which after converting to base2 comes to (136000MB / 1024) = 132.8GB x 20 = 2656GB.
(You can check the base2 size of each disk in the output of the sysconfig -r command)
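As a quick sketch of this base10-to-base2 conversion (the 136000MB right-sized figure is what sysconfig -r reports for a marketed 144GB FC disk; the function name is my own):

```python
# Convert the right-sized per-disk capacity (in MB, as shown by
# "sysconfig -r") to base2 GB, then total it across the shelf.
def base2_gb(right_sized_mb):
    return right_sized_mb / 1024.0

per_disk = base2_gb(136000)   # ~132.8 GB for a marketed 144GB FC disk
total = per_disk * 20         # 20 disks -> ~2656 GB raw
print(round(per_disk, 1), round(total))
```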
Now we will use the disks in RAID-DP, so each RAID group reserves 2 disks for parity, leaving us with (20 - 2) x 132.8GB = 2390GB.
(Please note that I have taken the example of FC disks, where the maximum number of disks per RAID group is 28; check the NetApp Storage Management Guide under the topic "Maximum and Default RAID Group Size", or see the online version on the NOW site here)
The NetApp system stores an additional checksum of 8 bytes for every 512 bytes of data, which is ~1.5% overhead, or 35GB in this case. So we are left with 2355GB.
Now reduce it by 10% for the WAFL overhead, which brings it to 2120GB.
Now change the default aggregate snapshot reserve from 2% to 0%. Why? Because aggregate-level snapshots are primarily used for MetroCluster.
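On a 7-Mode system this is a one-liner ("aggr0" below is a placeholder; substitute your own aggregate name from aggr status):

```shell
# Set the aggregate-level snapshot reserve to 0%
# ("aggr0" is a placeholder aggregate name)
snap reserve -A aggr0 0
```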
So to summarize, let's see an easy step-by-step calculation:
1. Check the available per-disk capacity in base2 (a)
2. Number of disks - hot spares - parity disks required by the RAID groups (depends on RAID type and disk type/count) = number of data disks used in the aggregate (b)
3. a x b = raw space available (c)
4. c - 1.5% = space after the additional checksum overhead (d)
5. d - 10% = space after the WAFL overhead (e)
So in a nutshell, usable space is 88.65% (0.985 x 0.90) of the raw data-disk space (c) in base2.
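The five steps above can be sketched as a small helper (the function name, parameters and defaults are my own, not from NetApp; it assumes one RAID group unless told otherwise):

```python
def usable_gb(disk_mb, total_disks, spares, parity_per_rg, raid_groups=1):
    """Rough usable-capacity estimate following the five steps above.
    disk_mb is the right-sized per-disk capacity in MB (from sysconfig -r)."""
    a = disk_mb / 1024.0                                     # step 1: base2 GB per disk
    b = total_disks - spares - parity_per_rg * raid_groups   # step 2: data disks
    c = a * b                                                # step 3: raw data space
    d = c * (1 - 0.015)                                      # step 4: ~1.5% checksum overhead
    e = d * (1 - 0.10)                                       # step 5: 10% WAFL overhead
    return e                                                 # overall: c * 0.8865

# Running example: 20 x 144GB FC disks, RAID-DP, no spares
print(round(usable_gb(136000, 20, 0, 2)))  # ~2119 GB (the post's rounded figures give 2120)
```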
NetApp's own raw-to-usable conversion link is a good read if you want to dig further.
On SATA disks the space used for checksums with the BCS (block checksum) type is more than 11%, but if you use ZCS (zoned checksum) the net loss from checksums is a little under 2%, which is consumed from the WAFL reserve.
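To put those SATA numbers side by side (my own arithmetic, assuming the commonly cited layouts of 1 checksum sector per 8 data sectors for BCS on 512-byte-sector disks, and 1 block given up per 64-block zone for ZCS):

```python
# Checksum overhead on 512-byte-sector SATA disks (assumed layouts):
# BCS spends 1 sector in every 9 on checksums; ZCS gives up 1 block
# per 64-block zone, which comes out of the WAFL reserve.
bcs_overhead = 1 / 9    # ~11.1% of raw capacity
zcs_overhead = 1 / 64   # ~1.6%
print(f"BCS: {bcs_overhead:.1%}, ZCS: {zcs_overhead:.1%}")
```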
Post updated following the changes in the ONTAP 8.1 release.