How to calculate pinned memory usage?

Article:TECH140095  |  Created: 2010-09-16  |  Updated: 2014-03-30  |  Article URL http://www.symantec.com/docs/TECH140095
Article Type
Technical Solution



Issue



AIX systems rebooted because the system ran out of memory resources.


Error



The system reboots.


Environment



Applicable to all AIX systems

 


Cause



The backup needs to access the mtime (modification time) of each file to select files for the backup operation. Because the mtime is stored in each file's inode, every inode must be read into memory, which fills up the inode cache (whose maximum size is defined by vxfs_ninode).

During the backup, NBU also requires the on-disk geometry of each file, and internally VxFS must read the files' bitmaps, which increases buffer cache usage.

VxFS uses pinned memory for the inode cache, the buffer cache, and other kernel data structures such as locks.

The maximum amount of memory that can be used for the inode cache and the buffer cache is based on the system configuration (size of physical memory and number of CPUs).
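
Because the default cache limits are derived from physical memory and CPU count, it helps to record both before tuning. As a quick sketch, the standard AIX commands below report them (output format can vary by AIX level):

# lsattr -El sys0 -a realmem        <- usable physical memory, in KB
# lsdev -Cc processor | wc -l       <- number of processors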

Solution



 

[ How to tune the vxfs_ninode and vx_bc_bufhwm parameters ]

 

By default, AIX allows up to 80% of the physical memory to be pinned.
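
The current setting can be confirmed with vmo, for example:

# vmo -o maxpin%
maxpin% = 80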

 

A configuration that normally works with a much smaller set of files will see a significant increase in cache sizes during a backup, especially if the backup touches many thousands of files that are not normally accessed.

The default maximum limits of the VxFS inode and buffer caches may be too high for such a configuration/workload; they should be monitored and tuned downward so that the VxFS caches do not consume unnecessarily large amounts of pinned heap during a backup.

 

1. Monitor the VxFS inode cache and the buffer cache during peak normal workload (not during a backup) using vxfsstat:

# vxfsstat <mount point>

 

2. Then reduce both cache sizes using the tunables below.

vxfs_ninode and vx_bc_bufhwm can be tuned down to lower the maximum cache sizes so that a backup does not consume too large an amount of pinned heap; a hedged example of setting them follows.
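
On AIX, these VxFS tunables are commonly exposed as attributes of the vxfs0 pseudo-device; the exact mechanism depends on the VxFS release, so treat the following as a sketch and verify it against the VxFS Administrator's Guide for your version (the values shown are the examples computed in step 6 below):

# lsattr -El vxfs0                              <- list current VxFS tunable values
# chdev -l vxfs0 -a vxfs_ninode=125166 -P       <- -P records the change; it takes effect at the next reboot
# chdev -l vxfs0 -a vx_bc_bufhwm=1391948 -P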

 

3. Collect buffer cache and inode cache usage using vxfsstat, and collect vmstat output at the same time.

 

1) Initially, collect the following:

# vmo -L > /tmp/vmo_L.output

# echo "xm -u" | kdb > /tmp/xm_u.output

 

2) Monitor inode and buffer cache usage with vxfsstat -bi:

- vxfsstat -bi outfile -t 30 <mount point>

- vxfsstat -w outfile -t 30 <mount point>, then read the file back with vxfsstat -r <path> and redirect the result to a file.

- vxfsstat -ap -t 30 <mount point> > vxfsstat_ap.out &

- vmstat -v together with the output of date, at the same sampling interval (see the wrapper sketch below).
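
A minimal ksh wrapper (the output file name is an example only) can stamp each vmstat -v sample with the date at the same 30-second interval while vxfsstat runs:

#!/bin/ksh
# Sample vmstat -v every 30 seconds, matching the vxfsstat -t 30 interval.
# /tmp/vmstat_v.out is an example path.
while true
do
    date >> /tmp/vmstat_v.out
    vmstat -v >> /tmp/vmstat_v.out
    sleep 30
done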

 

 

4. To display the current values of all VMM tunable parameters:

# vmo -L

maxpin                    773915        773915                                 S

--------------------------------------------------------------------------------

maxpin%                   80     80     80     1      100    % memory          D

     pinnable_frames

Check that maxpin% is set to 80%. If it is set lower, ask why it was lowered and advise that it be reset to its default of 80%.

 

 

# vmo -L | egrep 'lru_file_repage|maxclient|maxperm|maxpin|minperm|page_steal_method'

maxperm                   818943        818943                                 S

maxpin                    773915        773915                                 S

maxpin%                   80     80     80     1      100    % memory          D

minperm                   27298         27298                                  S

minperm%                  3      3      3      1      100    % memory          D

 

 

5. How to calculate how much kernel heap memory VxFS is using, with kdb.

The values of vxfs_ninode and vx_bc_bufhwm can be verified from the dump.

Collect the crash dump (with mods enabled) along with the output of 'echo "xm -u" | kdb'.
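
For reference, kdb can also be run against a saved system dump and the matching kernel file; the paths below are the common AIX defaults and are examples only:

# kdb /var/adm/ras/vmcore.0 /unix
(0)> xm -lu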

(0)> xm -lu
Kernel heap/pinned heap usage:
Storage area................... 30043000..40000000
............(268161024 bytes, 65469 pages)
Primary heap allocated size.... 78969872 (77119 Kbytes)
Alternate heap allocated size.. 177140096 (172988 Kbytes)
Pinned heap per-cpu free lists
Kernel heap per-cpu free lists
CPU 0 list 03 3E502F00 3E070D00 3E070A80 3CCF7680
CPU 4 list 03 3E502500 3E070900 3E070580 3E070280
CPU 6 list 03 3E502480 3E070500 3E070400 3E070680
............. 3E070700

Overflow heap usage:
Storage area................... 02501000..10000000
............(229634048 bytes, 56063 pages)
Primary heap allocated size.... 18176896 (17750 Kbytes)
Alternate heap allocated size.. 149211136 (145714 Kbytes)
Pinned heap per-cpu free lists
Kernel heap per-cpu free lists


Segment size      - In use (primary) - In use (alternate) = Available
268161024 bytes   - 78969872         - 177140096          = 12051056 (~11 MB)
229634048 bytes   - 18176896         - 149211136          = 62246016 (~59 MB)

total pinned heap (both segments)  = 268161024 + 229634048 = 497795072 bytes (~475 MB)
pinned heap in use (both segments) = 256109968 (= 78969872 + 177140096) + 167388032 (= 18176896 + 149211136) = 423498000 bytes (~404 MB)
remaining heap (both segments)     = 12051056 + 62246016 = 74297072 bytes (~70 MB)
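
These subtractions can be double-checked with shell arithmetic, using the values from the xm -lu output above:

# echo $(( 268161024 - 78969872 - 177140096 ))
12051056
# echo $(( (12051056 + 62246016) / 1048576 ))
70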


(0)> dw vx_pinned_bytes 1
vx_pinned_bytes+000000: 0472D9A8
(0)> dw vx_cur_inodes
vx_cur_inodes+000000: 00000836 (2102 ==> ~ 4 MB)
(0)> dw vxfs_ninode
vxfs_ninode+000000: 00005D62
(0)> dw vx_bc_bufhwm
vx_bc_bufhwm+000000: 00040000
(0)> dw vx_vmm_buf_count
vx_vmm_buf_count+000000: 00001000
(0)> dw vx_num_pdts
vx_num_pdts+000000: 00000008
(0)> vfs | grep VXFS | wc -l
6

(0)> dw vx_bc_altseg_enabled 1
vx_bc_altseg_enabled+000000: 00000001
(0)> dw vx_bc_altseg_inited 1
vx_bc_altseg_inited+000000: 00000001

(0)> dw vx_pinned_bytes 1
vx_pinned_bytes+000000: 0472D9A8

(0)> hcal 472D9A8

Value hexa: 0472D9A8          Value decimal: 74635688

 

(0)> dcal 74635688 / 1024 /1024

Value decimal: 71          Value hexa: 00000047


vx_pinned_bytes = ~71 MB
So the VxFS driver is using a total of ~71 MB of pinned memory.
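
Outside of kdb, the same hex-to-MB conversion can be done from the shell (16#... is the ksh hexadecimal notation):

# printf "%d\n" 0x472D9A8
74635688
# echo $(( 16#472D9A8 / 1024 / 1024 ))
71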

 

 

6. How to calculate the recommended parameter values.

 

1) grep Kbyte vxfsstatb_app.out | sort +0 -0 | pg

    53760 Kbyte current   13919488 maximum

    53760 Kbyte current   13919488 maximum

    ...

    86144 Kbyte current   13919488 maximum

    86272 Kbyte current   13919488 maximum

 

2) grep 'inodes current' vxfsstatb_app.out | sort +0 -0 | pg

     1368 inodes current     25875 peak              1251660 maximum

     1368 inodes current     25875 peak              1251660 maximum

     ...

    19332 inodes current     25875 peak              1251660 maximum

    19368 inodes current     25875 peak              1251660 maximum

 

So, to be safe, it is recommended to reduce them both by the same percentage.

If there is no intention of performing a backup on this system, they can be reduced further.

 

No backup:

-      Reduce both by 90% (i.e., keep 10% of the reported maximums)

-      So vxfs_ninode will be        125166 (10% of the 1251660 maximum above)

-      And vx_bc_bufhwm will be      1391948 (10% of the 13919488 Kbyte maximum above)
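
The same 90% reduction can be computed directly from the maximums reported by vxfsstat:

# echo $(( 1251660 / 10 ))          <- new vxfs_ninode
125166
# echo $(( 13919488 / 10 ))         <- new vx_bc_bufhwm
1391948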

 



