Netting Out NetBackup

NetBackup 5200/5220 Appliances 2.5.2 is now available!

Created: 01 Mar 2013 by CRZ • 9 comments

I'm very pleased to announce that the next Release Update for NetBackup 52x0 Appliances is now available!

NetBackup 5200/5220 Appliances 2.5.2 Release Update is the equivalent Appliance patch release to NetBackup 7.5.0.5.

It is a cumulative release containing fixes and content from 7.5.0.1 through 7.5.0.4, plus roughly 400 additional fixes (bringing the total in 7.5.0.5 to over 1,100!), including the most commonly downloaded EEBs, several customer escalations, and internal engineering defects.

In addition to including all of the fixes in NetBackup 7.5.0.5, Appliance 2.5.2 contains:  

  • Improvements in Hardware Supportability
    • More resource status & usage monitoring
    • A way to acknowledge alerts
    • Monitor behavioral patterns in MSDP
    • Bug fixes/EEBs specific to SORT/Call home

Information about 2.5.2 and download links are available here:

NetBackup 5200/5220 2.5.2 Update
 http://symantec.com/docs/TECH202301

The 2.5.2 Update can only be applied to an Appliance already running version 2.5 or 2.5.1 (including 2.5B and 2.5.1B).

To check whether your particular Etrack is resolved in 2.5.2 (NetBackup 7.5.0.5), please refer to both sets of Release Notes:

NetBackup 7.5.0.5 Release Notes
 http://symantec.com/docs/DOC6038

NetBackup Appliance 2.5.2 Release Notes for NetBackup 52xx
 http://symantec.com/docs/DOC6161

Comments

Ksmith169:

Hi CRZ,

Installing this upgrade now on a 5220 running 2.5.1b. I will let everyone know how it goes.

K.

Andrew Madsen:

I installed it (with a little help from Symantec) on 2.5.1 GA. Because of a couple of failed EEB installs, they had to do some script editing to get the install to complete.

Since then I have been getting High Disk IO error alerts even when nothing is going on. I have contacted our RPS about this.

The above comments are not to be construed as an official stance of the company I work for; hell half the time they are not even an official stance for me.

Ksmith169:

Hi Wolfsbane,

Ouch on the high disk I/O error alerts. It installed with no problems for me on 2.5.1b (appliance release) and solved my Linux client BMR issue: BMR backups would not run for Linux clients, and this upgrade fixed that.

K.

MarcoV@NL:

We're getting these alerts as well. They are caused by the RAID controller cache battery tests, which seem to have been kicked off by the v2.5.2 software.

If you check with

/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

you'll see that the battery status is either discharging or charging due to a requested learn cycle.
Because the battery status is not 'healthy', the RAID controller disables the write cache, which causes the heavy disk I/O (every write is performed physically to the disks, causing some write performance decrease and fairly heavily utilized disks).

Just sit tight for 24 hours. After that, everything should be fine.
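[Editor's note] The diagnosis above can be scripted. The sketch below classifies a BBU status dump; the `bbu_state` helper name is made up, and the function is fed canned text in the same format as the MegaCli output shown later in this thread, so it runs without an appliance:

```shell
#!/bin/sh
# Hypothetical helper: classify a MegaCli BBU status dump.
# On an appliance the input would come from:
#   /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
bbu_state() {
  if printf '%s\n' "$1" | grep -q 'Learn Cycle Active *: Yes'; then
    echo "learn-cycle"      # battery test in progress; write cache likely off
  elif printf '%s\n' "$1" | grep -Eq 'Charging Status *: (Charging|Discharging)'; then
    echo "charging"         # a learn cycle was requested; battery recovering
  else
    echo "healthy"
  fi
}

# Canned sample mirroring the status fields posted in this thread:
sample='Charging Status              : None
Learn Cycle Active           : No'
bbu_state "$sample"          # prints: healthy
```

During the 24-hour settling window, this sketch would report "charging" or "learn-cycle" instead.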

Marco V.

Andrew Madsen:

I have been getting them for five days.

BBU status for Adapter: 0

BatteryType: iBBU
Voltage: 4029 mV
Current: 0 mA
Temperature: 29 C

BBU Firmware Status:

  Charging Status              : None
  Voltage                      : OK
  Temperature                  : OK
  Learn Cycle Requested        : No
  Learn Cycle Active           : No
  Learn Cycle Status           : OK
  Learn Cycle Timeout          : No
  I2c Errors Detected          : No
  Battery Pack Missing         : No
  Battery Replacement required : No
  Remaining Capacity Low       : No
  Periodic Learn Required      : No
  Transparent Learn            : No

Battery state:

GasGuageStatus:
  Fully Discharged        : No
  Fully Charged           : Yes
  Discharging             : Yes
  Initialized             : Yes
  Remaining Time Alarm    : No
  Remaining Capacity Alarm: No
  Discharge Terminated    : No
  Over Temperature        : No
  Charging Terminated     : No
  Over Charged            : No

Relative State of Charge: 99 %
Charger System State: 49168
Charger System Ctrl: 0
Charging current: 0 mA
Absolute state of charge: 82 %
Max Error: 2 %
Adapter 1: Get BBU Status Failed.

Exit Code: 0x01

Everything looks OK.


MarcoV@NL:

Well,

We have 6 appliances running, and since this morning they have stopped screaming.
I also double-checked the current controller cache settings, and everything went back to normal.
And no, we have not overridden the cache settings manually!

To conclude: for our 6 appliances it took approximately 41 hours to settle back to normal behavior.
Below is the current status of the RAID controllers' cache settings:

APP-01:/home/maintenance # /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -a0
                                     
 
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 35.469 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 15
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None
 
 
 
Exit Code: 0x00
APP-01:/home/maintenance # /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -a1
                                     
 
Adapter 1 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 4.541 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 7
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None
 
 
Best regards,
        Marco
MarcoV@NL:

All,

Sorry for my 'not-so-complete' analysis.
I've now tackled the problem completely. After the update to v2.5.2, the controller cache
is left disabled for Adapter 2, the internal disks' RAID controller.

# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -a2
                                     
 
Adapter 2 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :MegaSR   R1 #0
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 930.390 GB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None
 
I think that during the upgrade, as a safety measure, they disabled the write cache to make sure that all changes are committed to disk instantly, but after the upgrade they forgot to enable it again.
To fix it, enable it by hand:
 
# /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -LALL -aALL
                                     
Set Write Policy to WriteBack on Adapter 0, VD 0 (target id: 0) success
Set Write Policy to WriteBack on Adapter 1, VD 0 (target id: 0) success
Adapter 2: Get BBU Status Failed.
 
So policy Change to WB will not come into effect immediately
 
Set Write Policy to WriteBack on Adapter 2, VD 0 (target id: 0) success
 
Exit Code: 0x00
# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -a2
                                     
 
Adapter 2 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :MegaSR   R1 #0
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 930.390 GB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None
 
 
 
Exit Code: 0x00
#
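[Editor's note] After the manual fix, it is easy to confirm that no logical drive is still running write-through. A minimal sketch (the `needs_writeback` helper is hypothetical; it runs here on canned text in the same `-LDInfo` output format as above):

```shell
#!/bin/sh
# Hypothetical check: count virtual drives whose current cache policy is
# WriteThrough. On an appliance the input would come from:
#   /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aALL
needs_writeback() {
  printf '%s\n' "$1" | grep -c 'Current Cache Policy: WriteThrough'
}

# Canned sample: one drive fixed, one still in write-through.
sample='Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU'
needs_writeback "$sample"        # prints: 1
```

A count of 0 means the WriteBack fix took effect on every virtual drive.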

This definitely solved it. Because the frequency of those mails had already dropped since this morning, from once every 15 minutes to once or twice every two hours, I reacted a bit prematurely earlier.

One might expect Symantec to release a TechNote for this... ;-)

Also, do not forget to acknowledge the existing alert (Settings -> Alert AcknowledgeErrors)... ;-)

Best regards,

        Marco

Andrew Madsen:

Our alerts have stopped as well. You appear to have a 24 TB 5220 and therefore have different controllers. Mine is a 4 TB unit, and it looks like this:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -aALL
                                    

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 4.541 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 7
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None

Adapter 1 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 930.390 GB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None

Exit Code: 0x00

This is from the only machine I upgraded. The others show this:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -L0 -aALL
                                    

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 4.541 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 7
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None

Adapter 1 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 930.390 GB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None

Exit Code: 0x00

Which is essentially the same. I do not believe the data drive array is supposed to have the write cache enabled; I have put that to Symantec to validate.
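[Editor's note] The comparison Andrew is doing by eye, spotting drives whose Current Cache Policy has drifted from the Default, can be sketched as a small script. The `policy_drift` name is made up, and the input is canned text in the `-LDInfo` format quoted above:

```shell
#!/bin/sh
# Hypothetical comparison: count virtual drives whose Current Cache Policy
# differs from the Default Cache Policy in a MegaCli -LDInfo dump.
policy_drift() {
  printf '%s\n' "$1" | awk -F': ' '
    /Default Cache Policy/ { def = $2 }        # remember the default
    /Current Cache Policy/ { if ($2 != def) n++ }  # count mismatches
    END { print n + 0 }'
}

# Canned sample mirroring the non-upgraded unit above (current differs):
sample='Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU'
policy_drift "$sample"           # prints: 1
```

Zero drift would match the "essentially the same" reading; a non-zero count flags the drives worth raising with support.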


Umair Hussain:

An EEB for the high disk I/O alerts on 2.5.2 has been released under ET3116615...
