Thursday, 4 July 2013

What is a NetApp RAID group and how does it work?

These are the RAID group size limits for each disk type in a filer:


DISK TYPE          LIMIT      RAID-DP    RAID4
ATA/BSAS/SATA      Maximum    16         7
FC/SAS             Maximum    28         14
ATA/BSAS/SATA      Default    14         4
FC/SAS             Default    16         7
ATA/BSAS/SATA      Minimum    3          2
FC/SAS             Minimum    3          2


Example for RAID-DP:

> aggr create aggr1 -t raid_dp -r 7 17

-t : selects the RAID type (raid4 or raid_dp)
-r : sets the RAID group size (how many disks you want in one raid group)
17 : the total number of disks for aggr1

Consider that you are using SATA disks with RAID-DP.
For SATA disks with RAID-DP:
16 disks is the maximum in one raid group,
14 disks is the default in one raid group, and
3 disks is the minimum needed to create a NetApp raid group.
These are the limits recommended by NetApp (see the table above).

After this command executes, 3 raid groups are created. Because we specified -r 7, each raid group (rg) takes 7 disks:

17 disks - 7 disks = 10 disks remaining  -----> rg0 (7 disks)
10 - 7 = 3 remaining                     -----> rg1 (7 disks)
3 - 3 = 0 remaining                      -----> rg2 (3 disks)

The last 3 disks are fewer than the -r 7 group size, but because the RAID-DP minimum is 3 disks, they still form another raid group, rg2.

Another note for RAID-DP: if exactly 3 disks remain, another raid group is created as explained above. If fewer than 3 disks remain (one or two), those leftover disks are moved to the spare pool instead.
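
To make the arithmetic above concrete, here is a minimal Python sketch of the same disk-splitting logic. The function layout_raid_groups is only my own illustration, not a NetApp command; the minimum of 3 disks per RAID-DP group comes from the table above.

  def layout_raid_groups(total_disks, group_size, min_group_size):
      # Illustrative sketch: split total_disks into raid groups of group_size;
      # a remainder smaller than min_group_size goes to the spare pool instead.
      full_groups, leftover = divmod(total_disks, group_size)
      groups = [group_size] * full_groups
      spares = 0
      if leftover >= min_group_size:
          groups.append(leftover)   # enough disks left to form one more raid group
      else:
          spares = leftover         # fewer than the minimum -> disks become spares
      return groups, spares

  # RAID-DP example from above: 17 disks, -r 7, minimum 3 disks per raid group
  print(layout_raid_groups(17, 7, 3))   # -> ([7, 7, 3], 0), i.e. rg0, rg1 and rg2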


Example for RAID4:

> aggr create aggr1 -t raid4 -r 5 16

-t : selects the RAID type (raid4 or raid_dp)
-r : sets the RAID group size (how many disks you want in one raid group)
16 : the total number of disks for aggr1

Consider that you are using SATA disks with RAID4.
For SATA disks with RAID4:
7 disks is the maximum in one raid group,
4 disks is the default in one raid group, and
2 disks is the minimum needed to create a NetApp raid group.
These are the limits recommended by NetApp (see the table above).


After this command executes, 3 raid groups are created. Because we specified -r 5, each raid group (rg) takes 5 disks:

16 disks - 5 disks = 11 disks remaining  -----> rg0 (5 disks)
11 - 5 = 6 remaining                     -----> rg1 (5 disks)
6 - 5 = 1 remaining                      -----> rg2 (5 disks)

The 1 leftover disk is below the RAID4 minimum of 2 disks, so it is moved to the spare pool.

So in this example aggr1 ends up with 15 disks.
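
As a quick standalone check of the RAID4 arithmetic, here is a similar illustrative Python snippet (again not a NetApp tool; the minimum of 2 disks per RAID4 group is taken from the table above):

  # RAID4 example from above: 16 disks, -r 5, minimum 2 disks per raid group
  total, group_size, minimum = 16, 5, 2
  full_groups, leftover = divmod(total, group_size)       # 3 groups of 5, 1 disk left over
  groups = [group_size] * full_groups
  if leftover >= minimum:
      groups.append(leftover)                             # would form one more raid group
  else:
      print(leftover, "disk(s) moved to the spare pool")  # below the minimum
  print("raid groups:", groups, "-> aggr1 uses", sum(groups), "disks")
  # prints: raid groups: [5, 5, 5] -> aggr1 uses 15 disks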



If you have any questions, please feel free to ask...


32-bit aggregate to 64-bit aggregate Migration




Data ONTAP 7-Mode 8.1 and later

Identify the 32-bit aggregates and check their status (aggr status in advanced privilege mode shows each aggregate's block format):

  filerA> priv set advanced
  filerA*> aggr status


Perform the upgrade by adding disks:

  filerA> priv set advanced
  filerA*> aggr status
  filerA*> aggr add aggr1 -64bit-upgrade normal 12   (12 is the number of disks to add)

Check the 64-bit upgrade status (no additional disks are added by this command):

  filerA> priv set advanced
  filerA*> aggr status
  filerA*> aggr 64bit-upgrade status aggr1 -all



Another way to migrate from 32-bit to 64-bit


If you have an existing 32-bit aggregate:
create a new 64-bit aggregate,
create a volume in that 64-bit aggregate,
and then migrate the old volumes to the new volume using the NDMP protocol.

Fas > aggr create aggr2 -B 64 15     (15 is the number of disks)

Fas > aggr status aggr2

Fas > vol create vol2 aggr2 100g

Fas > ndmpd on

Fas > ndmpcopy -l 0 /vol/vol1 /vol/vol2



Please write your valuable comments about my blog.

Tuesday, 18 June 2013

BMC - RLM - SP


  1. BMC - RLM - SP is the console port used to connect to the filer.
  2. BMC - RLM - SP is a separate device in the FAS chassis.
  3. It has its own IP address and is independent of the FAS, so it keeps running at all times.
  4. BMC - RLM - SP connects to the motherboard and is installed separately. When a FAS fails over, BMC - RLM - SP keeps working locally.
  5. These provide remote support features such as remote access, monitoring, and troubleshooting.




Before the FAS2xxx series it was called BMC.

From the FAS2xxx to the FAS30xx / 31xx series it was called RLM.

From the FAS3240 and 3270 onwards it is named SP (Service Processor).




BMC : Baseboard Management Controller

RLM : Remote LAN Module

SP : Service Processor









Please write your valuable comments about this blog.

Tuesday, 4 June 2013

CIFS Oplocks - Opportunistic Locks


CIFS Oplocks

  • CIFS oplocks reduce network traffic and improve storage performance.
  • They work by allowing clients to cache read-ahead, write-behind, and lock information.
  • You can enable or disable CIFS oplocks for an individual volume or qtree.
  • For database applications, turn off CIFS oplocks when you are handling critical data and cannot afford data loss.
  • Otherwise, you can leave the CIFS oplocks option on.

Enabling/Disabling for the entire storage system:

> options cifs.oplocks.enable on
> options cifs.oplocks.enable off


Enabling/Disabling for qtrees:

> qtree oplocks /vol/vol1/qtree enable
> qtree oplocks /vol/vol1/qtree disable









Please write your valuable comments about this blog.