Yeah, it was foolish of me to trust Google and assume there couldn't be an error like that. Anyway, I did upgrade the dated look of my blog, and with the new look everything was fine except the comments section. I tried to restore the old configuration, but all the restore methods failed, even restoring from the template backup I had saved before making the change.
So guess what: as of this writing, comments only work from IE8. I will check other browsers as well and let you know if they work.
Thanks,
Monday, May 28, 2012
pNFS in NetApp
In my last post I discussed pNFS features and architecture, and promised that in the next post I would cover NetApp's implementation of pNFS. So here it is.
Since its founding in 1992, NetApp has been known for its solid NFS implementation and has been a frontrunner in NFS design and standardization. Today NetApp has gone unified and supports not only NFS but also SMB, FC, iSCSI, FCoE and a few other lesser-known protocols, yet it remains a major driving force in NFS design. The latest example is pNFS, where NetApp has proved it again by delivering pNFS in its ONTAP 8.1 Cluster-Mode offering.
In my opinion they have done a wonderful job with their implementation of pNFS; currently it is limited to the files layout only, but heck yeah, it's wonderful. In NetApp's implementation of pNFS, not only the data but also the metadata is distributed across all the nodes. So if you have a metadata-intensive workload you can tackle that as well by using round robin from your DNS infrastructure, and instantly you have a cluster that scales linearly not only across data nodes but across metadata nodes too.
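For the round-robin part, here is a minimal sketch of what the DNS side could look like, assuming a BIND-style zone file; the hostname and addresses are made up, and each A record points the same name at a network interface on a different cluster node, so successive lookups hand clients to different nodes:

; round-robin A records for the NFS mount name (illustrative values only)
nfs.example.com.    IN  A   192.168.10.11   ; node1
nfs.example.com.    IN  A   192.168.10.12   ; node2
nfs.example.com.    IN  A   192.168.10.13   ; node3
nfs.example.com.    IN  A   192.168.10.14   ; node4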
Another nice feature is live volume transition. It gives you the flexibility to move a live volume to any node of the cluster if ever required, and non-disruptively at that. Thanks to pNFS, the NFS handle stays the same while the data is served from the new node, so you don't need to unmount and remount the filesystem, and since we use a cluster-wide namespace no change is required to its directory location either. Isn't this sweet? No more struggling to get downtime to move a volume!
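To give an idea of what that looks like day to day, here is a rough sketch using the clustered ONTAP CLI; the cluster, Vserver, volume and aggregate names are made up, and the # lines are only annotations, not CLI input:

# start moving a live volume to an aggregate owned by another node
cluster1::> volume move start -vserver vs1 -volume projects -destination-aggregate aggr1_node3

# watch the progress; clients keep reading and writing throughout
cluster1::> volume move show -vserver vs1 -volume projects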
And the best part is that it's built around an industry standard set by the IETF, so there is no requirement for special client software, and it coexists happily with clients mounting over NFS version 3, 4 or 4.1 without pNFS support.
The only setback to using it across your environment is limited client support. As of today, only Fedora 16 with kernel 2.6.39 and RHEL 6.2 can be used; however, the support is already in the mainline kernel, so it will not be long before other distributions start shipping it.
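For reference, mounting from one of the supported clients is plain NFS; roughly something like the lines below, where 'filer' and the export path are made-up names. The minorversion=1 form is the RHEL 6.2 style; newer kernels and nfs-utils also accept vers=4.1 directly:

# RHEL 6.2 / Fedora 16 style: NFSv4, minor version 1 (pNFS is negotiated automatically)
mount -t nfs4 -o minorversion=1 filer:/pnfs_vol /mnt/pnfs

# newer distributions: the combined version form
mount -t nfs -o vers=4.1 filer:/pnfs_vol /mnt/pnfs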
References:
http://media.netapp.com/documents/wp-7153.pdf
http://media.netapp.com/documents/tr-4063.pdf
Sunday, May 27, 2012
pNFS
As is my nature, I was lazing around over the weekend when I noticed the newly published best-practice document on pNFS and ONTAP 8.1 Cluster-Mode. Soon I realized this is what c-mode is made for: parallelism. How? Let's see.
First, let's start with what Parallel NFS, or in short 'pNFS', actually is. pNFS is an extension in NFS version 4.1 which adds support for parallelism to the existing NFS version 4.
Well, that was the shortest answer I could write; however, here's a little more detail on it.
pNFS is part of NFS 4.1, the first minor-version update to NFS version 4, which adds support for sessions and directory delegation along with parallelism. The idea of using SAN filesystem architecture and parallelism in NFS originated with Gary Grider of Los Alamos National Lab and Lee Ward of Sandia National Lab; it was later presented in a problem statement to the Internet Engineering Task Force (IETF) in 2004 by Garth Gibson, a professor at Carnegie Mellon University and founder and CTO of Panasas, Brent Welch of Panasas, and Peter Corbett of NetApp. The IETF's NFSv4 working group took up drafts in 2005, and in 2006 the work was folded into the 4.1 minor-version draft. It is published as RFC 5661, describing NFS version 4.1 with parallel support, and RFC 5662, detailing the protocol definition codes. pNFS is not limited to files; support for block data (RFC 5663) and object-based data (RFC 5664) was also added, so it is now possible to access not only file but also object (OSD) and block (FC/iSCSI) based storage over NFS.
pNFS, being an open standard, does not require any additional or proprietary software or drivers on the client. Therefore, the different varieties of NFS can coexist at the same time, and supported NFS clients can mount the same file system over NFSv3, NFSv4, and NFSv4.1/pNFS.
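As an illustration of that coexistence, the same export could be mounted three different ways, as sketched below; the server name 'filer' and the paths are made up, and on a real server the NFSv3 export path and the NFSv4 pseudo-root path may differ:

# NFSv3, NFSv4.0 and NFSv4.1 mounts of the same data, side by side
mount -t nfs  -o vers=3          filer:/vol/shared /mnt/shared_v3
mount -t nfs  -o vers=4          filer:/shared     /mnt/shared_v4
mount -t nfs4 -o minorversion=1  filer:/shared     /mnt/shared_v41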
It is widely accepted by the industry and jointly developed by Panasas, NetApp, Sun, EMC, IBM, UMich/CITI and many more; however, at the time of writing only Fedora 16 with kernel 2.6.39 supports all three layout types (blocks, files and objects), whereas RHEL 6.2 supports only the files layout.
Now the question comes: what was the need for it?
We all love NFS for its simplicity, and from the time it was designed by Sun in the era of 10Mb Ethernet it scaled well to 100Mb and then gigabit Ethernet; however, with the advent of 10Gb and 40Gb links, a protocol designed around a single stream wasn't enough to scale further. The industry had already tried TOE Ethernet cards, link aggregation and bigger boxes, but that wasn't sufficient to utilize the bandwidth and CPU power we have available now. So what was left to deal with it? Parallel NFS.
So, what’s so different
from earlier version of NFS?
pNFS is not much different from its ancestors; it just separates metadata from data. Unlike traditional NFS versions 3, 4, and 4.1, where metadata and data share the same I/O path, with pNFS metadata and data travel on different I/O paths: the metadata server handles all the metadata activity from the client, while the data servers provide a direct path for data access.
+-----------+
|+-----------+                                +-----------+
||+-----------+                               |           |
|||           |        NFSv4.1 + pNFS         | Metadata  |
+||  Clients  |<--------Metadata------------->|  Server   |
 +|           |                               |           |
  +-----------+                               +-----------+
      |||                                           |
      |||                                           |
      ||| Storage        +-----------+              |
      ||| Protocol       |+-----------+             |
      ||+----------------||+-----------+   Control  |
      |+-----------------|||           |   Protocol |
      +------------------+||   Data    |<-----------+
                          +|  Server   |
                           +-----------+

                  Figure 1: pNFS Architecture
As a result, in a clustered storage system with multiple nodes you mount only a directory or root namespace, yet you get direct access to data from each node. The metadata server (MDS) handles all non-data traffic such as GETATTRs, SETATTRs, ACCESS, LOOKUPs, and so on. Data servers (DSs) store file data and respond directly to client read and write requests. A control protocol is used to provide synchronization between the metadata server and the data servers.
Ok, but where does pNFS add value?
Large files and high numbers of concurrent users. With the advent of parallel computing on multi-node clusters, a single job gets divided amongst n nodes, and when the job arrives at the computational nodes they all try to access the same data from one storage location, which soon becomes a bottleneck; with pNFS, multiple storage system nodes respond with parts of the file in parallel, increasing the aggregate bandwidth and lowering the latency. Likewise, when many small files are accessed by a large number of concurrent users, a single storage system can get choked, whereas with pNFS all the nodes of the storage system hosting the files share the user load.
Great, but how does all this work?
In principle, pNFS uses the same kind of parallelism as RAID-0, just at a different level. Just as RAID-0 spreads data across multiple disks for faster response, pNFS spreads a file or filesystem across multiple nodes of a clustered storage array for faster response.
For example, when a client sends a read request for a file to the storage system, the storage system replies with the file metadata along with layout details describing node addresses, data locations, and striping information. After getting the layout details the client knows the list of cluster nodes holding parts of the file. The client then contacts all the data nodes directly and simultaneously for the file, and the nodes reply with the parts of the file they hold, which the client then assembles.
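The layout negotiation itself happens inside the client's NFS stack, but there are a couple of rough sanity checks you can run on a Linux client to confirm pNFS is actually in play; this assumes a files-layout mount like the earlier example, and the # lines are just annotations:

# the files-layout driver module should be loaded once a pNFS mount is active
lsmod | grep nfs_layout_nfsv41_files

# nfsstat -m lists each NFS mount with the version and options that were negotiated
nfsstat -m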
I think that's enough for now; the next post will detail NetApp's implementation of pNFS.
Read for scholars: RFC 5661, RFC 5662, RFC 5663 and RFC 5664, mentioned above.
Wednesday, May 23, 2012
How to quit SSH session by typing 'exit' in ONTAP
Just a little trick to remove an annoyance for new NetApp admins.
Usually, to terminate an SSH session on ONTAP you press Ctrl+D, because typing 'exit' only returns a message telling you to press Ctrl+D to exit. That's perfectly fine and does no harm, since it gracefully closes the session, but most of us are more used to typing 'exit' to end a session and find it odd.
Those who prefer typing 'exit' to close the session can do so by enabling the telnet.distinct.enable option.
The default setting for this option is off, which in effect mirrors the telnet and console sessions; but if you don't use that feature, which I believe most of us don't, you can enable the option and feel more at home.
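In practice that is a single command at the ONTAP prompt; the 'filer>' prompt below is just a placeholder, and running the option name without a value simply prints the current setting:

filer> options telnet.distinct.enable on
filer> options telnet.distinct.enable

After that, typing 'exit' should close the SSH session the way you'd expect.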
Saturday, May 19, 2012
I am back
Yeah, I was absent from the blogosphere for some time, but guess what, it was worth it, because over the last year and a half I have learned so much about NetApp and its related technologies that I really felt what a n00b I was.
Anyway I must say thankyou to all of you who encouraged me and patiently waited for my return.
Thanks.