Monday 24 October 2011

Difference between UDP and TCP

There are two common types of Internet Protocol (IP) traffic, and they have very different uses.

TCP (Transmission Control Protocol)

TCP is a connection-oriented protocol: a connection is made from client to server, and from then on any data can be sent along that connection.
  1. Reliable - when you send a message along a TCP socket, you know it will get there unless the connection fails completely. If part of it gets lost along the way, the receiving end will request retransmission of the lost part. This means complete integrity; data does not get corrupted.
  2. Ordered - if you send two messages along a connection, one after the other, the first message will get there first. You don't have to worry about data arriving in the wrong order.
  3. Heavyweight - when the low-level parts of the TCP stream arrive out of order, resend requests have to be sent and the out-of-sequence parts reassembled, so it takes a bit of work to piece the stream back together.
UDP (User Datagram Protocol)

A simpler, message-based, connectionless protocol. With UDP you send messages (packets) across the network in individual chunks.

  1. Unreliable - When you send a message, you don't know if it'll get there, it could get lost on the way.
  2. Not ordered - If you send two messages out, you don't know what order they'll arrive in.
  3. Lightweight - No ordering of messages, no tracking connections, etc. It's just fire and forget! This means it's a lot quicker, and the network card / OS have to do very little work to translate the data back from the packets.


Thursday 13 October 2011

RAID-DP

RAID-DP (Double Parity)

RAID-DP is a double-parity RAID implementation that prevents data loss when two disks in a RAID group fail.

In RAID-DP a minimum of 3 disks is required to create one aggregate.

It supports double-disk failure within a RAID group.

In NetApp, RAID-DP is the default RAID group type.

Example:

filer1> aggr create aggr1 -r 5 10

In this command, aggr1 will automatically take 10 disks from the spare pool, and -r 5 means each RAID group should have 5 disks.

So two RAID groups will be created, and each RAID group will have 5 disks.

d1  d2  d3  P(d4)  DP(d5)  => rg0
d6  d7  d8  P(d9)  DP(d10) => rg1



Note:
d1 - d10 are disk names
P is the parity disk
DP is the second (double) parity disk
rg0 and rg1 are RAID groups 0 and 1
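
To verify the layout that was actually created, you can list the RAID configuration from the console. A minimal sketch (output not shown, as it varies by system):

filer1> aggr status -r aggr1   # show each RAID group with its data, parity and dParity disks
filer1> sysconfig -r           # show the RAID layout of all aggregates, plus spare and failed disks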


Wednesday 12 October 2011

Raid 4



  • RAID 4 protects against the loss of just one disk per RAID group.
  • RAID 4 will not protect against multiple disk failures within the same RAID group.
  • If single disks fail in different RAID groups, there won't be any data loss.

In RAID 4 a minimum of 2 disks is required to create one aggregate:

One disk for data
One disk for parity


Example:

filer> aggr create aggr1 -t raid4 -r 5 10

aggr1 will automatically take 10 disks from the spare pool, and -r 5 means each RAID group should have 5 disks.
It will create two RAID groups, and each RAID group will have 5 disks, as sketched below.
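
Following the same notation as the RAID-DP example, the resulting layout would look like this (disk names are illustrative):

d1  d2  d3  d4  P(d5)  => rg0
d6  d7  d8  d9  P(d10) => rg1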



http://netapplines.blogspot.in/2013/07/when-any-disk-failed-in-netapp-how-its.html

Tuesday 11 October 2011

NetApp Raid

RAID - Redundant Array of Independent Disks

Every RAID group in NetApp has at least one data disk and at least one parity disk.
Disk Types

Data     : Holds data stored within the RAID group

Spare    : Does not hold usable data but is available to be added to a RAID group in an aggregate; also known as a hot spare

Parity   : Stores data reconstruction information within the RAID group

dParity  : Stores double-parity information within the RAID group, if RAID-DP is enabled

These are the types of disks in a filer.

RAID group size limits (disks per RAID group):

Disk Type        Limit      RAID-DP   RAID 4
ATA/BSAS/SATA    Maximum    16        7
FC/SAS           Maximum    28        14
ATA/BSAS/SATA    Default    14        4
FC/SAS           Default    16        7
ATA/BSAS/SATA    Minimum    3         2
FC/SAS           Minimum    3         2


NetApp supports two RAID types:

  1. RAID 4
  2. RAID-DP

In NetApp the default RAID type is RAID-DP.
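
You can check an aggregate's RAID type and convert between the two from the console. A minimal sketch (aggr1 is an example name):

> aggr status aggr1                    # shows the current raidtype, size and state
> aggr options aggr1 raidtype raid_dp  # convert a RAID4 aggregate to RAID-DP
> aggr options aggr1 raidtype raid4    # convert a RAID-DP aggregate back to RAID4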





Monday 10 October 2011

Basic Filer Management

FilerView (HTTP/HTTPS)
Console cable
Telnet
SSH (secure shell)
RSH (remote shell)
Windows MMC (Computer Management snap-in)


Most day-to-day activities can be performed via the web interface.

Command-line interface: for the less commonly used commands, e.g. "snap restore" + many more commands.

The 2 most commonly used commands: "sysconfig" & "options".
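
A minimal sketch of both in use (output omitted; "autosupport" is just an example prefix):

> sysconfig             # show hardware configuration and Data ONTAP version
> sysconfig -r          # show RAID layout, spares and failed disks
> options               # list all configurable options
> options autosupport   # list only the options starting with autosupport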

Sunday 9 October 2011

Filer Disks

Data disks
Spare disks
Parity disks
Double parity disks
Broken and failed disks
Maximum number of RAID groups in an aggregate: 150

Maximum number of RAID groups on a storage system: 400

These are the types of disks in a filer.

Data ONTAP 8.0.1 Default and Maximum RAID Group Size by Drive Type

Drive Type   RAID Type           Default RAID Group Size   Maximum RAID Group Size
SSD          RAID-DP (default)   23 (21+2)                 28 (26+2)
SSD          RAID 4              8 (7+1)                   14 (13+1)
SAS/FC       RAID-DP (default)   16 (14+2)                 28 (26+2)
SAS/FC       RAID 4              8 (7+1)                   14 (13+1)
SATA         RAID-DP (default)   14 (12+2)                 20 (18+2)
SATA         RAID 4              7 (6+1)                   7 (6+1)

Disks

Disk name

Normally a disk resides in a disk enclosure (shelf); a disk name looks like 2a.17, depending on the type of disk enclosure.
  • 2a = SCSI adapter
  • 17 = disk number 17 on that adapter


Disk Commands

Display

> disk show
> disk show <disk_name>
> disk_list
> sysconfig -r
> sysconfig -d

// list all unassigned/assigned disks
> disk show -n
> disk show -a

Adding (assigning)

// Add a specific disk to pool 1 (the mirror pool)
> disk assign <disk_name> -p 1

// Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p"
// option is not specified
> disk assign all -p 0

Remove (spin down disk)
> disk remove <disk_name> 

Reassign

> disk reassign -d <new_sysid>

Replace

> disk replace start <disk_name> <spare_disk_name>
> disk replace stop <disk_name>

Note: disk replace copies data from one disk to another (for example, from a failing disk to a spare); you can stop the copy in progress using the stop command.

Zero spare disks

> disk zero spares

Fail a disk

> disk fail <disk_name>

Scrub a disk

> disk scrub start
> disk scrub stop

Sanitize

Note: the release command changes the state of a disk from sanitize back to spare.
Sanitize requires a license.


> disk sanitize start <disk list>
> disk sanitize abort <disk_list>
> disk sanitize status
> disk sanitize release <disk_list>

Maintenance

> disk maint start -d <disk_list>
> disk maint abort <disk_list>
> disk maint list
> disk maint status

Note: you can test disks using maintenance mode.

Swap a disk

> disk swap
> disk unswap

Note: disk swap stalls all SCSI I/O until you physically replace or add a disk;
it can be used on SCSI disks only.

Statistics

> disk_stat <disk_name>

Simulate a pulled disk

> disk simpull <disk_name>

Simulate a pushed disk

> disk simpush -l
> disk simpush <complete path of disk obtained from above command>

Example
> disk simpush -l
The following pulled disks are available for pushing:
v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448

> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448
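
Putting several of these commands together, a sketch of a typical workflow for a suspect disk (disk names are illustrative):

// Proactively copy data from a suspect disk to a spare (Rapid RAID Recovery)
> disk replace start 2a.17 2a.25

// ...or force-fail the disk and let RAID reconstruct onto a spare
> disk fail 2a.17

// Pre-zero the spares so they can later be added to a RAID group without delay
> disk zero spares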



Saturday 8 October 2011

RLM - Remote LAN Module

The RLM is a console port used to connect to the filer. The RLM is a separate device in the FAS chassis: it has its own IP address and is independent of the FAS, so it keeps running at all times. The RLM connects to the motherboard and is installed separately. When a FAS fails over, its RLM keeps working locally.

Commands for RLM

Filer> rlm

Filer> rlm help

Display a list of Remote LAN Module (RLM) commands.

Filer> rlm reboot

Causes the RLM to reboot. If your console connection is through the RLM it will be dropped. The reboot command forces a Remote LAN Module (RLM) to reset itself and perform a self-test.

Filer> rlm setup

Interactively configure a Remote LAN Module (RLM).

Filer> rlm status

Remote LAN Module           Status: Online
            Part Number:        101-000457
            Revision:           F0
            Serial Number:      591541
            Firmware Version:   4.0
            Mgmt MAC Address:   00:B0:98:11:99:D6
            Ethernet Link:      up
            Using DHCP:         no
            IP Address:         10.26.38.154
            Netmask:            255.255.255.0
            Gateway:            10.26.38.254

Filer> rlm test autosupport

Performs autosupport test on the Remote LAN Module (RLM). The autosupport test forces a Remote LAN Module (RLM) to send a test autosupport to all email addresses in the option list autosupport.to.

Filer> rlm update

Updates the RLM firmware. This may be a time-consuming operation. Before issuing this command, you need to execute the `software install` command to get the new firmware image. The RLM will be rebooted at the end of this operation.

filer> software update http://Web_server/RLM_FW.zip -f   # download and install the firmware package
filer> rlm update                                        # update the RLM firmware
filer> priv set advanced                                 # enter advanced privilege mode
filer> rlm update -f                                     # force the update if required
filer> priv set                                          # return to normal privilege mode



Friday 7 October 2011

Netapp Hardware Connection

This diagram shows a NetApp high-availability setup:


Controller 1 : Filer 1
Controller 2 : Filer 2

Controller 1 Active Shelves : Disk shelf
Controller 2 Active Shelves : Disk shelf

Switch/Fabric : Fibre channel switch (Brocade or Cisco)

Host : may be a Windows or Unix server

Advantages of NetApp High Availability:


  • If any hardware in this structure fails, the partner hardware will take over for it.
  • For example: if NetApp controller 1 fails, within seconds controller 2 will take over controller 1's resources, such as LUNs and CIFS and NFS shares, as sketched below.
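
Takeover can also be checked and driven manually from either controller's console. A minimal sketch (requires the cluster failover (cf) license; output omitted):

filer1> cf status     # show whether the partner is up and takeover is possible
filer1> cf takeover   # filer1 takes over filer2's resources
filer1> cf giveback   # hand the resources back once filer2 is healthy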



Filer back side view

Netapp FAS Hardware Views




Thursday 6 October 2011

NetApp architecture


The NetApp architecture consists of hardware, the Data ONTAP operating system and the network. I have already shown you a diagram of a common NetApp setup, but now I will go into more detail.

Hardware
NetApp has a number of filers that would fit into any company and cost; the filer itself may have the following:
  • can be an Intel or AMD server (up to 8 dual core processors)
  • can have dual power supplies 
  • can handle up to 64GB RAM and 4GB NVRAM (non-volatile RAM) 
  • can manage up to 1176TB storage
  • has a maximum limit of 1176 disk drives 
  • can connect the disk shelves via a FC loop for redundancy 
  • can support FCP, SATA and SAS disk drives 
  • has a maximum 5 PCI and 3 PCI-express slots 
  • has 4/8/10GbE support 
  • 64bit support
The filer can be attached to a number of disk enclosures (shelves), which expand the storage allocation. These disk enclosures are attached via FC; as mentioned above, the disk enclosures can support the following disks:

FCP     These are fibre channel disks; they are very fast but expensive
SAS     Serial Attached SCSI disks are again very fast but expensive, and are due to replace the FC disks
SATA    Serial ATA disks are slow but cheaper, ideal for QA and DEV environments


One note to remember is that the filer that connects to the top module of a shelf controls (owns) the disks in that shelf under normal circumstances (i.e. non-failover).

The filers can make use of VIFs (Virtual Interfaces); they come in two flavors:

Single-mode VIF

  • 1 active link, others are passive, standby links 
  • Failover when link is down
  • No configuration on switches


Multi-mode VIF

  • Multiple links are active at the same time 
  • Load balancing and failover 
  • Load balancing based on IP address, MAC address or round robin
  • Requires support & configuration on switches
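
A minimal sketch of creating each kind of VIF from the console (interface and VIF names are illustrative):

filer> vif create single vif0 e0a e0b        # one active link, the other on standby
filer> vif create multi vif1 -b ip e0c e0d   # all links active, load balanced by IP address
filer> vif status vif1                       # show the state of the VIF and its links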

Software

I have already touched on the operating system Data ONTAP; the latest version is currently version 8, which fully supports grid technology (GX in version 7). It is fully compatible with Intel and AMD architectures and supports 64-bit; it borrows ideas from FreeBSD.

All additional NetApp products are activated via licenses, some require the filer to be rebooted so check the documentation.

Management of the filer can be accessed via any of the following
  • Telnet or SSH
  • Filerview (HTTP GUI)
  • System Manager (client software GUI)
  • Console cable
  • SNMP and NDMP
Storage Terminology

When talking about storage you will probably come across two solutions:

NAS (Network Attached Storage)

NAS storage speaks to a file, so the protocol is a file-based one. Data is made to be shared; examples are:
  • NFS (Unix)
  • CIFS or SMB (Windows)
  • FTP, HTTP, WebDAV, DAFS
SAN (Storage Area Network)

SAN storage speaks to a LUN (Logical Unit Number) and accesses it via data blocks; sharing is difficult. Examples are:
  • SCSI
  • iSCSI
  • FCAL/FCP


There are a number of terminologies associated with the above solutions; I have already discussed some of them in my EMC section.

Terminology              Solution   Description
share/export             NAS        A CIFS server makes data available via shares; a Unix server makes data available via exports
drive mapping/mounting   NAS        CIFS clients typically map a network drive to access data stored on a storage server; Unix clients typically mount the remote resource
LUN                      SAN        Logical Unit Number, basically a disk presented by a SAN to a host; when attached it looks like a locally attached disk
Target                   SAN        The machine that offers a disk (LUN) to another machine, in other words the SAN
Initiator                SAN        The machine that expects to see the disk (LUN), i.e. the host OS; appropriate initiator software will be required
Fabric                   SAN        One or more fibre switches with targets and initiators connected to them; Cisco, McData and Brocade are well-known fabric switch makers (see my EMC architecture section for more details)
HBA                      SAN        Host Bus Adapter, the hardware that connects the server or SAN to the fabric switches; there are also iSCSI HBAs
Multipathing (MPIO)      SAN        The use of redundant storage network components responsible for transfer of data between the server and the storage (cabling, adapters, switches and software)
Zoning                   SAN        The partitioning of a fabric into smaller subsets to restrict interference, add security and simplify management; it's like VLANs in networking (see my EMC zoning section for more details)

NetApp Terminology

Now that we know how a NetApp is configured from a hardware point of view, we need to know how to present the storage to the outside world; first, some NetApp terminology explained.


Aggregate:
A collection of disks that can have either of the RAID levels below; an aggregate can contain up to 1176 disks, and you can have many aggregates with different RAID levels. An aggregate can contain many volumes (see volumes below).
  • RAID-4
  • RAID-DP (RAID-6) better fault tolerance
One point to remember is that an aggregate can grow but cannot shrink (see the sketch below). The disadvantage of RAID 4 is that a bottleneck can occur on the dedicated parity disk, which is normally the first disk to fail due to it being used the most; however, the NVRAM helps out by only writing to disks every 10 seconds or when the NVRAM is 50% full.
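
A minimal sketch of growing an aggregate (names and disk counts are illustrative; remember there is no way to remove the disks again):

filer> aggr status aggr1   # check the current size and RAID groups
filer> aggr add aggr1 4    # add 4 spare disks to aggr1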
Raid Group (Pool)

Normally there are three pools: 0, 1 and spare
  • 0 = normal pool
  • 1 = mirror pool (if SyncMirror is enabled)
  • spare = spare disks that can be used for growth and to replace failed disks

Plex

When an aggregate is mirrored it will have two plexes; when thinking of plexes, think of mirroring. A mirrored aggregate can be split into two plexes.

Volume (Flexible)

This is more or less like a traditional volume in other LVMs; it is a logical space within an aggregate that will contain the actual data, and it can be grown or shrunk as needed.
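
A minimal sketch of creating and resizing a flexible volume (names and sizes are illustrative):

filer> vol create vol1 aggr1 100g   # create a 100GB flexible volume in aggr1
filer> vol size vol1 +50g           # grow it by 50GB
filer> vol size vol1 -25g           # shrink it by 25GB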

WAFL

Write Anywhere File Layout is the filesystem used; it uses inodes just like Unix. Disks are not formatted, they are zeroed.
By default WAFL reserves 10% of disk space (unreclaimable).

LUN

The Logical Unit Number is what is presented to the host to allow access to the volume.

SNAPSHOT

A frozen, read-only image of a volume or aggregate that reflects the state of the file system at the time the snapshot was created. Snapshot features are:
  • Up to 255 snapshots per volume
  • Can be scheduled
  • Maximum space occupied can be specified (default 20%)
  • File permissions are handled
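
A minimal sketch of working with snapshots from the console (volume and snapshot names are illustrative):

filer> snap create vol1 mysnap           # take a manual snapshot of vol1
filer> snap list vol1                    # list the snapshots of vol1
filer> snap sched vol1 0 2 6@8,12,16,20  # keep 0 weekly, 2 nightly and 6 hourly copies
filer> snap reserve vol1 20              # reserve 20% of vol1 for snapshot data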





Wednesday 5 October 2011

NetApp Terminology - Disks


NetApp currently uses 3 types of disks:

FCP (Fiber) – fast, expensive, on all models, originally in filers
SATA (Serial ATA) – slower, cheaper, on all models, originally on nearstores
SAS (Serial Attached SCSI) – fast, expensive, currently only on FAS20x0 series, poised to replace FCP in the long run

Now:

Recent models can combine FC, SATA, & SAS disks
SATA is slower than FCP & SAS
Note: “FCAL = Fiber Channel – Arbitrated Loop”
A fast, serial-based standard meant to replace the parallel SCSI standard
Primarily used to connect storage devices to servers




NetApp


Network Appliance (NetApp)

This section is short introduction into Network Appliance (NetApp).

History

NetApp was created in 1992 by David Hitz, James Lau and Michael Malcolm; the company went public in 1995 and grew rapidly in the dot-com boom. The company's headquarters are in Sunnyvale, California, US. NetApp has acquired a number of companies that helped in the development of various products. The first NetApp network appliance shipped in 1993, known as a filer; this product was a new beginning in data storage architecture. The device did one task and it did it extremely well. NetApp made sure that the device used fully compatible industry-standard hardware rather than specialized hardware. Today's NetApp products cater for small, medium and large corporations and can be found in many blue-chip companies.


NetApp Filers can offer the following

  • Supports SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet all on the same platform
  • Supports either SATA, FC or SAS disk drives
  • Supports block protocols such as iSCSI, Fibre Channel and AoE
  • Supports file protocols such as NFS, CIFS, FTP, TFTP and HTTP
  • High availability
  • Easy management
  • Scalable

NetApp Filer

The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a data storage device; it can act as a SAN or as a NAS and serves storage over a network using either file-based or block-based protocols.

File-Based Protocol     :  NFS, CIFS, FTP, TFTP, HTTP

Block-Based Protocol :  Fibre Channel (FC), Fibre channel over Ethernet (FCoE), Internet SCSI (iSCSI)

The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are connected by FC or parallel/serial ATA, and the filer is then accessed by other Linux, Unix or Windows servers via a network (Ethernet or FC). An example setup would be like the one in the diagram below.

Filer Back view

The filers run NetApp's own adapted operating system (based on FreeBSD) called Data ONTAP; it is highly tuned for storage-serving purposes.

All filers have a battery-backed NVRAM, which allows them to commit writes to stable storage quickly, without waiting on the disks.

It is also possible to cluster filers to create a highly available cluster with a private high-speed link using either FC or InfiniBand; clusters can then be grouped together under a single namespace when running in the cluster mode of the Data ONTAP 8 operating system.

The filer will be either an Intel or AMD processor based computer using PCI; each filer will have a battery-backed NVRAM adapter to log all writes for performance and to replay them in the event of a server crash. The Data ONTAP operating system implements a single proprietary file system called WAFL (Write Anywhere File Layout).

WAFL is not a filesystem in the traditional sense, but a file layout that supports very large high-performance RAID arrays (up to 100TB); it provides mechanisms that enable a variety of filesystems and technologies that want to access disk blocks. WAFL also offers:

  • snapshots (up to 255 per volume can be made)
  • snapmirror (disk replication)
  • syncmirror (mirror RAID arrays for extra resilience, can be mirrored up to 100km away)
  • snaplock (Write once read many, data cannot be deleted until its retention period has been reached)
  • read-only copies of the file system
  • read-write snapshots called FlexClone
  • ACLs
NetApp Backups

The last point to touch on is backups; NetApp offers two types:

Dump
  • backs up files and directories 
  • supports level-0, incremental and differential backups 
  • supports single file restore 
  • capable of backing up only the base snapshot copy

SMTape 

  • Backs up blocks of data to tape
  • Supports only level-0 backup
  • Does not support single file restore
  • Capable of backing up multiple snapshot copies in a volume
  • Does not support remote tape backups and restores

The filer will support either SCSI or Fibre channel (FC) tape drives and can have a maximum of 64 mixed tape devices attached to a single storage system.
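
A minimal sketch of a dump backup to a locally attached tape drive (device and volume names are illustrative):

filer> sysconfig -t              # list the tape drives the filer can see
filer> dump 0f rst0a /vol/vol1   # level-0 dump of vol1 to tape device rst0a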

Network Data Management Protocol (NDMP) is a standardized protocol for controlling backup, recovery and other transfers of data between primary and secondary storage devices, such as storage systems and tape libraries. This removes the need for transporting the data through the backup server itself, thus enhancing speed and removing load from the backup server. Enabling NDMP support allows the storage system to communicate with NDMP-enabled commercial network-attached backup applications; it also provides low-level control of tape devices and medium changers. The advantages of NDMP are:
  • provides sophisticated scheduling of data protection across multiple storage systems
  • provides media management and tape inventory management services to eliminate tape handling during data protection operations
  • supports data cataloging services that simplify the process of locating specific recovery data
  • supports multiple topology configurations, allowing sharing of secondary storage (tape library) resources through the use of three-way network data connections
  • supports security features to prevent or monitor unauthorized use of NDMP connections
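
A minimal sketch of enabling NDMP and checking it (output omitted):

filer> options ndmpd.enable on   # enable the NDMP daemon
filer> ndmpd status              # show daemon state and active sessions
filer> ndmpd version             # show the NDMP protocol version in use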

Tuesday 4 October 2011

WAFL - Ontap file system

Write Anywhere File Layout

  • WAFL : Write Anywhere File Layout.
  • This is the file system of the Data ONTAP operating system.
  • WAFL writes data to the next available block.
  • WAFL frequently takes a snapshot of the root volume, because after an improper shutdown WAFL restores the last snapshot when the filer reboots.
  • Internally this is called a Consistency Point.