Typically, Fibre Channel SANs are most suitable for large data centers running business-critical data, as well as applications that require high-bandwidth performance such as medical imaging, streaming media, and large databases. Fibre Channel SAN solutions can easily scale to meet the most demanding performance and availability requirements.
The increased performance of Fibre Channel enables a highly effective backup and recovery approach, including LAN-free and server-free backup models. The result is a faster, more scalable, and more reliable backup and recovery solution. By providing flexible connectivity options and resource sharing, Fibre Channel SANs also greatly reduce the number of physical devices and disparate systems that must be purchased and managed, which can dramatically lower capital expenditures. Heterogeneous SAN management provides a single point of control for all devices on the SAN, lowering costs and freeing personnel to do other tasks.
Development started in 1988, ANSI standard approval occurred in 1994, and large deployments began in 1998. Fibre Channel is a mature, safe, and widely deployed solution for high-speed (1, 2, and 4 Gbit/s) communications and is the foundation for the majority of SAN installations throughout the world.
Fibre Channel is a well-established, widely deployed technology with a proven track record and a very large installed base, particularly in high-performance, business-critical data center environments. Fibre Channel SANs continue to grow and will be enhanced for a long time to come. The reduced costs of Fibre Channel components, the availability of SAN kits, and the next generation of Fibre Channel (4 Gbit/s) are helping to fuel that growth. In addition, the Fibre Channel roadmap includes plans to double performance every three years.
Benefits include twice the performance with little or no price increase, investment protection through backward compatibility with 2 Gbit/s, higher reliability because fewer SAN components (switch and HBA ports) are required, and the ability to replicate, back up, and restore data more quickly. 4 Gbit/s Fibre Channel systems are ideally suited for applications that need to transfer large amounts of data quickly, such as remote replication across a SAN, streaming video on demand, modeling and rendering, and large databases. 4 Gbit/s technology is shipping today.
Fibre Channel and iSCSI each have a distinct place in the IT infrastructure as SAN alternatives to DAS. Fibre Channel generally provides high performance and high availability for business-critical applications, usually in the corporate data center. In contrast, iSCSI is generally used to provide SANs for business applications in smaller regional or departmental data centers.
For environments consisting of high-end servers that require high bandwidth or data center environments with business-critical data, Fibre Channel is a better fit than iSCSI. For environments consisting of many midrange or low-end servers, an IP SAN solution often delivers the most appropriate price/performance.
With a SAN, the storage units can be secured separately from the servers, entirely apart from the user network. Storage is accessed in data blocks (bulk data transfers), which is advantageous for server-less backups.
Depending on how we configure the array, we can have the
- data mirrored [RAID 1] (duplicate copies on separate drives),
- striped [RAID 0] (interleaved across several drives), or
- parity protected [RAID 5] (extra data written so that lost data can be detected and rebuilt).
These can be used in combination to deliver the balance of performance and reliability that the user requires.
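The parity option above can be sketched in a few lines of Python. This is an illustrative model of RAID 5-style parity only, not any vendor's implementation: parity is the XOR of the data blocks, so any single lost block can be rebuilt from the parity plus the surviving blocks.

```python
# Illustrative RAID 5-style parity: XOR the data blocks to get the parity
# block; XOR the parity with the surviving blocks to rebuild a lost one.

def xor_blocks(blocks):
    """XOR together equal-length byte blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # stored on the parity drive

# Simulate losing the second drive and rebuilding its block:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```

The same XOR trick underlies why RAID 5 survives exactly one drive failure: with two blocks missing, the equation has two unknowns and cannot be solved.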
RAID (Redundant Array of Independent Disks) is a technology for achieving redundancy along with faster I/O. There are many RAID levels to meet different customer needs, among them R0, R1, R3, R4, R5, R6, and R10.
Customers generally choose R5 because it offers a cost-effective balance of redundancy and speed.
R0 – Striped set without parity/[Non-Redundant Array].
Provides improved performance and additional storage but no fault tolerance: any single disk failure destroys the entire array, and that failure becomes more likely as more disks are added. When data is written to a RAID 0 array, it is broken into fragments, with the number of fragments dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector, allowing smaller sections of the data to be read off the drives in parallel, which gives this arrangement very high bandwidth. RAID 0 implements no error checking, so any error is unrecoverable. More disks in the array means higher bandwidth but greater risk of data loss.
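The fragmenting described above can be sketched as follows. This is a toy model; the chunk size and round-robin layout are illustrative, not how any real controller lays out sectors:

```python
# Sketch of RAID 0 striping: data is broken into fixed-size fragments and
# written round-robin across the member disks. Losing any one disk loses
# part of every large file, which is why RAID 0 has no fault tolerance.

def stripe(data, num_disks, chunk=4):
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % num_disks] += data[i:i + chunk]
    return [bytes(d) for d in disks]

disks = stripe(b"0123456789ABCDEF", num_disks=2)
# disk 0 holds chunks 0 and 2; disk 1 holds chunks 1 and 3
assert disks == [b"012389AB", b"4567CDEF"]
```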
RAID 0+1 (Mirrored Stripes)
RAID 1+0 (Striped Mirrors)
It is a collection of disks that share a common connection to the server but do not include the mirroring, striping, or parity facilities that RAID systems do; these capabilities are, however, available with host-based software.
RAID: “Redundant Array of Inexpensive Disks”
- Fault-tolerant grouping of disks that the server sees as a single disk volume
- Combination of parity-checking, mirroring, and striping
- Self-contained, manageable unit of storage

JBOD, by contrast:
- Drives independently attached to the I/O channel
- Scalable, but requires the server to manage multiple volumes
- Does not provide protection in case of drive failure
- Massively extended scalability
- Greatly enhanced device connectivity
- Server-less (active-fabric) backup
- Heterogeneous data sharing
- Disaster recovery (remote mirroring)
When answering, people often do NOT clearly convey what each term means, what advantages each offers, which options are cost-effective, and which should be used for the client's requirements.
The basic difference between SAN and NAS is that a SAN is fabric-based while NAS is Ethernet-based.
SAN - Storage Area Network
- Fabric Switch
- FC Controllers
JBOD (Just a Bunch Of Disks) is a storage box: an enclosure hosting a set of hard drives in many combinations, such as SCSI, SAS, FC, or SATA drives.
There are many management software packages used for managing SANs; to name a few:
- IBM Tivoli Storage Manager
- CA Unicenter
- Veritas Volume Manager
Generally, the default ID for a SCSI HBA is 7.
SCSI- Small Computer System Interface
HBA - Host Bus Adaptor
In some scenarios you need to install the operating system on drives connected through a SCSI HBA or SCSI RAID controller, but most operating systems do not ship with drivers for those controllers, so you must supply the drivers externally. If you are installing Windows, press F6 during OS installation and provide the driver disk or CD that came with the HBA.
If you are installing Linux, type "linux dd" at the boot prompt to supply any driver.
An array is a group of independent physical disks used to configure volumes or RAID volumes.
SCENARIO 1: How do you find and debug errors while working with SCSI devices?
In daily SAN troubleshooting there are many management and configuration tools we use to see when there is a failure with a target or initiator device.
Sometimes it is hard to troubleshoot issues such as media errors in the drives, or drives taking a long time to spin up; in such cases these utilities will not help. To debug this kind of problem, most controllers are implemented with a 3-pin serial debug port. With a serial-port debug connector cable you can collect the debug information using terminal software such as HyperTerminal.
SCENARIO 2: I am having an issue with a controller; it is taking a long time to boot and detect all the connected drives. How can I solve this?
There are many possibilities that might cause this problem. One reason might be bad drives that cannot be repaired; in that case, replace the disks with working ones.
Another reason might be the slot: you may have connected your controller to a slot that is not supported.
Try connecting it to other types of slots.
One more probable reason is that firmware for different OEMs has been flashed onto the same hardware.
To get rid of this, the flash utilities have an option to erase all previous EEPROM and boot-block entries. Use that option to rectify the problem.
SCENARIO 3: I am using a tape drive of series 700X, and the vendor information on the tape drive says 700X, but the POST information while booting the server shows 500X. What could be the problem?
First, make sure which series your hardware actually is; you can find this on the product website.
You generally see this because many testing companies use the same hardware to test different series of the same hardware type: they flash firmware for a different series onto it. You can always flash back to the firmware matching the exact hardware type.
A SAN can be connected using the 3 topologies mentioned below:
Point-to-Point topology
FC Arbitrated Loop (FC: Fibre Channel)
Switched Fabric topology
RAID arrays have states that represent their status, which are given below:
CRC: Cyclic redundancy check
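For illustration, a CRC lets the receiver detect corrupted data by recomputing the checksum and comparing. Python's `zlib.crc32` implements the widely used CRC-32 polynomial (the same family used for Fibre Channel frame CRCs), so it can demonstrate the idea:

```python
# CRC demonstration: the sender computes a 32-bit checksum over the data;
# the receiver recomputes it, and a mismatch means the data was corrupted.
import zlib

payload = b"example frame contents"
crc = zlib.crc32(payload)

assert zlib.crc32(payload) == crc          # intact data: CRC matches
assert zlib.crc32(payload + b"!") != crc   # corrupted data: CRC mismatch
```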
There are many types of tape media available to back up the data some of them are
DLT: digital linear tape - technology for tape backup/archive of networks and servers; DLT technology addresses midrange to high-end tape backup requirements.
LTO: linear tape open; a new standard tape format developed by HP, IBM, and Seagate.
AIT: advanced intelligent tape; a helical scan technology developed by Sony for tape backup/archive of networks and servers, specifically addressing midrange to high-end backup requirements.
HA (High Availability) is a technology to achieve failover with minimal latency. It is a practical requirement of data centers these days, as customers expect servers to be running 24 hours a day, 7 days a week, 365 days a year, usually referred to as 24x7x365. To achieve this, a redundant infrastructure is created to ensure that if one database server or app server fails, a replica database or app server is ready to take over operations. The end customer never experiences an outage when there is an HA network infrastructure.
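The failover behaviour described above can be sketched as a simple active/passive pair. This is a hedged toy model; the server names are invented and real HA stacks add health checks, heartbeats, and state replication:

```python
# Active/passive failover sketch: requests go to the primary server; if it
# is down they are transparently redirected to the standby replica, so the
# client never sees an outage.

class FailoverPair:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby
        self.primary_up = True

    def handle(self, request):
        server = self.primary if self.primary_up else self.standby
        return f"{server} handled {request}"

pair = FailoverPair("db-primary", "db-standby")
assert pair.handle("query") == "db-primary handled query"
pair.primary_up = False          # simulate primary failure
assert pair.handle("query") == "db-standby handled query"
```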
Virtualization is logical representation of physical devices. It is the technique of managing and presenting storage devices and resources functionally, regardless of their physical layout or location. Virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in a storage area network (SAN). The management of storage devices can be tedious and time-consuming. Storage virtualization helps the storage administrator perform the tasks of backup, archiving, and recovery more easily, and in less time, by disguising the actual complexity of the SAN.
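As a sketch of the pooling idea, the following maps a flat virtual block space onto two physical arrays. The device names and sizes are invented; real virtualization layers use far more sophisticated mapping tables:

```python
# Block-level storage virtualization sketch: several physical devices are
# pooled into one virtual address space, and the virtualization layer maps
# a virtual block number to a (device, physical block) pair.

def build_pool(devices):
    """devices: list of (name, size_in_blocks). Returns a flat mapping
    from virtual block number to (device name, physical block)."""
    mapping = []
    for name, size in devices:
        mapping.extend((name, block) for block in range(size))
    return mapping

pool = build_pool([("array-A", 100), ("array-B", 50)])
assert len(pool) == 150               # appears as one 150-block device
assert pool[0] == ("array-A", 0)      # virtual block 0 lives on array-A
assert pool[120] == ("array-B", 20)   # virtual block 120 lives on array-B
```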
Frame header (includes destination ID and source ID; 24 bytes/6 words)
Data payload (encapsulated SCSI instruction; 0–2112 bytes in length)
CRC (error checking, 4 bytes)
End of Frame delimiter (4 bytes)
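As a rough illustration of this layout, the sketch below packs a simplified frame with a 24-byte header carrying the destination and source IDs, a payload, and a CRC-32 computed over header plus payload. The field packing is simplified (the IDs are stored as 32-bit fields, delimiters are omitted) and does not follow the exact FC-FS bit layout:

```python
# Simplified Fibre Channel-style frame builder: 24-byte header, payload,
# then a CRC-32 over header + payload so the receiver can detect damage.
import struct
import zlib

def build_frame(dest_id, src_id, payload):
    # 24-byte header: two IDs in the first 8 bytes, rest zero-filled.
    header = struct.pack(">II16s", dest_id, src_id, b"\x00" * 16)
    crc = struct.pack(">I", zlib.crc32(header + payload))
    return header + payload + crc

frame = build_frame(0x010200, 0x010300, b"SCSI command payload")
assert len(frame) == 24 + 20 + 4   # header + payload + CRC
```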
a) Fibre Channel
b) Internet SCSI (iSCSI)
c) Fibre Channel over IP (FCIP)
In class 1 service, a dedicated connection between source and destination is established through the fabric for the duration of the transmission. It provides acknowledged service. This class of service ensures that the frames are received by the destination device in the same order in which they are sent, and reserves full bandwidth for the connection between the two devices. It does not provide good utilization of the available bandwidth, since it blocks other potential contenders for the same device. Because of this blocking and the necessary dedicated connection, class 1 is rarely used.
Class 2 is a connectionless, acknowledged service. Class 2 makes better use of available bandwidth since it allows the fabric to multiplex several messages on a frame-by-frame basis. As frames travel through the fabric they can take different routes, so class 2 service does not guarantee in-order delivery. Class 2 relies on upper layer protocols to take care of frame sequence. The use of acknowledgments reduces available bandwidth, which needs to be considered in large-scale busy networks.
There is no dedicated connection in class 3, and received frames are not acknowledged. Class 3 is also called datagram connectionless service. It optimizes the use of fabric resources, but it is up to the upper layer protocol to ensure that all frames are received in the proper order, and to request retransmission of missing frames from the source device. Class 3 is a commonly used class of service in Fibre Channel networks.
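A minimal sketch of that upper-layer responsibility: frames carry a sequence count, and the receiver reorders by it before delivering the data. This is illustrative only and ignores retransmission of missing frames:

```python
# Class 2/3 frames may arrive out of order; the upper layer protocol
# reassembles the data using each frame's sequence count.

def reassemble(frames):
    """frames: list of (seq_count, payload), possibly out of order."""
    return b"".join(payload for _, payload in sorted(frames))

received = [(2, b"C"), (0, b"A"), (1, b"B")]   # arrived out of order
assert reassemble(received) == b"ABC"
```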
Class 4 is a connection-oriented service like class 1, but the main difference is that it allocates only a fraction of the available bandwidth of a path through the fabric that connects two N_Ports. Virtual Circuits (VCs) are established between two N_Ports with guaranteed Quality of Service (QoS), including bandwidth and latency. Like class 1, class 4 guarantees in-order frame delivery and provides acknowledgment of delivered frames, but here the fabric is responsible for multiplexing frames of different VCs. Class 4 service is mainly intended for multimedia applications such as video and for applications that allocate an established bandwidth by department within the enterprise. Class 4 was added in the FC-PH-2 standard.
Class 6 is a variant of class 1, known as multicast class of service. It provides dedicated connections for a reliable multicast. An N_Port may request a class 6 connection for one or more destinations. A multicast server in the fabric will establish the connections and get acknowledgment from the destination ports, and send it back to the originator. Once a connection is established, it should be retained and guaranteed by the fabric until the initiator ends the connection. Class 6 was designed for applications like audio and video requiring multicast functionality. It appears in the FC-PH-3 standard.
Class F service is defined in the FC-SW and FC-SW-2 standards for use by switches communicating through ISLs. It is a connectionless service with notification of non-delivery between E_Ports, used for control, coordination, and configuration of the fabric. Class F is similar to class 2; the main difference is that class 2 deals with N_Ports sending data frames, while class F is used by E_Ports for control and management of the fabric.
b) Number of devices that can be interconnected (16)
c) Fabric Address Notification
d) Registered state change notification
e) Broadcast Servers
WWN: a 64-bit address that is hard-coded into a Fibre Channel HBA and is used to identify an individual port (N_Port or F_Port) in the fabric.
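For illustration, a WWN is conventionally written as eight colon-separated hex bytes (e.g. 21:00:00:24:ff:3a:1b:2c). A small helper (the function names are ours) converts between the 64-bit integer and that text form:

```python
# Convert a 64-bit WWN between integer and colon-separated hex notation.

def wwn_to_str(wwn):
    """Format a 64-bit WWN as eight colon-separated hex bytes."""
    return ":".join(f"{(wwn >> shift) & 0xFF:02x}" for shift in range(56, -8, -8))

def str_to_wwn(text):
    """Parse colon-separated hex notation back into a 64-bit integer."""
    return int(text.replace(":", ""), 16)

s = wwn_to_str(0x21000024FF3A1B2C)
assert s == "21:00:00:24:ff:3a:1b:2c"
assert str_to_wwn(s) == 0x21000024FF3A1B2C
```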
b) Arbitrated Loop
c) Switched Fabric
b) FC Encoder and Decoder
c) FC Framing and Flow control
d) FC Common Services
e) FC Upper Level Protocol Mapping
a) Software Zoning
b) Hardware Zoning
b) WWN Level zoning
c) Device Level zoning
d) Protocol Level zoning
e) LUN Level zoning
b) Port Multiplier
c) Port Selector
b) Loop Monitoring
c) Loop arbitration
d) Open Loop
e) Close Loop
b) Helical Scan Recording.
c) Near Line
Redundancy Function | Relationship | Role
Mirroring | Generates 2 I/Os to 2 storage targets | Creates 2 copies of the data
Routing | Determined by switches, independent of SCSI | Recreates the network route after a failure
Multipathing | Two initiators to one target | Selects the initiator/LUN pair to use
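The multipathing row can be sketched as a simple path selector: I/O goes down the first live path and fails over when that path dies. The path names here are made up, and real multipath drivers also do load balancing and path restoration:

```python
# Multipath failover sketch: two initiator paths to one target LUN; I/O
# uses the first live path and fails over when that path goes down.

class MultipathLUN:
    def __init__(self, paths):
        self.paths = {p: True for p in paths}   # path -> is_alive

    def send_io(self, io):
        for path, alive in self.paths.items():
            if alive:
                return f"{io} via {path}"
        raise RuntimeError("all paths down")

lun = MultipathLUN(["hba0->ctrlA", "hba1->ctrlB"])
assert lun.send_io("read") == "read via hba0->ctrlA"
lun.paths["hba0->ctrlA"] = False                # simulate path failure
assert lun.send_io("read") == "read via hba1->ctrlB"
```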
b) Advanced Intelligent Tape
c) Linear Tape Open
i) One sequence to transfer the command
ii) One or more sequences to transfer the data
iii) One sequence to transfer the status.
Example: an exchange exists to transfer the command, data, and status of one SCSI task.
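A toy model of that exchange structure, purely illustrative rather than a protocol implementation: one SCSI task maps to an exchange made up of a command sequence, one or more data sequences, and a status sequence.

```python
# Model of a SCSI exchange: command sequence, data sequence(s), status
# sequence. The command string and chunk contents are made-up examples.

def run_exchange(command, data_chunks):
    sequences = [("CMD", command)]
    sequences += [("DATA", chunk) for chunk in data_chunks]
    sequences.append(("STATUS", "GOOD"))
    return sequences

seqs = run_exchange("READ(10)", ["block0", "block1"])
assert [kind for kind, _ in seqs] == ["CMD", "DATA", "DATA", "STATUS"]
```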
Process Login: establishes the SCSI operating environment between two N_Ports.
Fabric Login: similar to port login, FLOGI is an extended link service command that sets up a session between two participants. With FLOGI, a session is created between an N_Port or NL_Port and the switch.
b) High Performance Clusters
c) Load Balancing Clusters.
b) Network Level Management
c) Enterprise Level Management
b) Capacity, Content and quota management
c) Quality of Service
Cables used in the network
Network protocols (TCP/IP, IPX) and file-sharing protocols (CIFS and NFS)
Supports heterogeneous clients
High-speed connectivity such as FC
Does not use network protocols, because data requests are not made over the LAN
Requires special software to provide access to heterogeneous clients
access services to computer systems. A NAS Storage Element consists of an interface or engine, which implements the file services, and one or more devices, on which data is stored. NAS elements may be attached to any type of network. When attached to SANs, NAS elements may be considered to be members of the SAS (SAN Attached Storage) class of storage elements.
A class of systems that provide file services to host computers. A host system that uses network attached storage uses a file system device driver to access data using file access protocols such as NFS or CIFS. NAS systems interpret these commands and perform the internal file and device I/O operations necessary to execute them.
Though NAS does speed up bulk transfers, it does not offload the LAN the way a SAN does. Most storage devices cannot simply plug into Gigabit Ethernet and be shared: this requires a specialized file server, and the variety of supported devices is more limited. NAS has various protocols established for needed features such as discovery, access control, and name services.
SANs expand easily to keep pace with fast growing storage needs
SANs allow any server to access any data
SANs help centralize management of storage resources
SANs reduce total cost of ownership (TCO).
- Traditionally expensive SCSI controllers and SCSI disks no longer need to be used in each server, reducing overall cost.
- Many iSCSI arrays enable the use of cheaper SATA disks without losing hardware RAID functionality.
- The iSCSI storage protocol is endorsed by Microsoft, IBM, and Cisco, and is therefore an industry standard.
- Administrative/maintenance costs are reduced.
- Increased utilisation of storage resources.
- Expansion of storage space without downtime.
- Easy server upgrades without the need for data migration.
- Improved data backup/redundancy.