On AIX, if the size of the heap (data) segment is limited, a large process may dump core because of the way memory management is implemented.
The following quick solutions are worth trying:
1. Run ulimit -d unlimited to remove the per-process data segment limit; this should allow larger processes.
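As a quick check, the current soft limit can be inspected and raised from the shell. This is only a sketch: raising the limit above the current hard limit requires root, hence the fallback message.

```shell
# Show the current per-process data segment soft limit (KB, or "unlimited")
ulimit -d

# Remove the cap for this shell and anything it starts, e.g. NetBackup
# daemons launched from it (needs root if the hard limit is lower)
ulimit -d unlimited 2>/dev/null || echo "need root to raise the hard limit"

# Verify the new limit
ulimit -d
```

The change applies only to this shell and its children; it does not persist across logins.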
2. Increase the LDR_CNTRL=MAXDATA setting at the AIX level prior to executing the command:
export LDR_CNTRL=MAXDATA=0x20000000    (the default on AIX is 0x10000000)
To make the setting permanent, add LDR_CNTRL=MAXDATA=0x20000000 to the end of the /etc/environment file.
Either way, the variable must be set in the environment of the NetBackup processes at startup.
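A sketch of making the setting permanent. ENVFILE here points at a scratch copy so the example is safe to run as-is; on a real system it would be /etc/environment, edited as root, with the change taking effect for processes started after the next login or reboot.

```shell
# ENVFILE is a stand-in for /etc/environment in this sketch
ENVFILE=./environment.example
touch "$ENVFILE"

# Append the setting only if no LDR_CNTRL line is present yet,
# so repeated runs do not create duplicates
grep -q '^LDR_CNTRL=' "$ENVFILE" || \
    echo 'LDR_CNTRL=MAXDATA=0x20000000' >> "$ENVFILE"

# Confirm the entry
grep '^LDR_CNTRL=' "$ENVFILE"
```

Guarding the append with grep keeps the edit idempotent, which matters for a file every login shell reads.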
3. NetBackup 6.0 MP7 and later handles such issues, because the binaries were subsequently built to take these scenarios into account. Note that a change in /etc/environment affects the entire system, not just NetBackup.
On AIX, the 32-bit address space (2^32 bytes = 4 GB) is divided into 16 segments of 256 MB each (16 x 256 MB = 4 GB). When a 32-bit process is loaded into memory, its address space looks like this:
Segment 1      Exec program (text)
Segment 2      Private read-write (data, stacks)
Segments 3-C   Currently addressable files and other segments
Segment D      Shared libraries (text)
Segment F      Shared library data
In the default address-space model, segment 2 contains the user stack, kernel stack, and u-block, as well as the data area used by malloc(). This imposes a 256 MB limit on data, stacks, and u-block for the whole process. Segments 3 through C are free to be used as shared memory via shmat() or mmap(). If a process needs to allocate more data, it can switch to the large address-space model, in which the native heap starts at segment 3 and grows up to the offset given by a non-zero MAXDATA value in the XCOFF header. As you increase the MAXDATA value, the size of the data segment grows while the space available for shared memory shrinks, and vice versa.
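The trade-off can be seen with some quick arithmetic. This is a sketch: 0x20000000 is the example value from above, and the ten segments 3 through C form the pool that the heap and shmat()/mmap() regions share.

```shell
SEG=$((0x10000000))            # one AIX segment = 256 MB
MAXDATA=$((0x20000000))        # example LDR_CNTRL / XCOFF value
HEAP_SEGS=$((MAXDATA / SEG))   # segments 3, 4, ... claimed by the heap
SHM_SEGS=$((10 - HEAP_SEGS))   # what remains of segments 3-C for shmat/mmap

echo "heap: ${HEAP_SEGS} segments = $((HEAP_SEGS * 256)) MB"
echo "shared memory: ${SHM_SEGS} segments = $((SHM_SEGS * 256)) MB"
```

So MAXDATA=0x20000000 buys a 512 MB heap at the cost of two of the ten segments otherwise available for shared memory.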