
SQL Server Memory usage too low

Created: 27 Mar 2013 | 3 comments
andykn101:

We're running CMS 7.1 SP2 with a separate SQL Server 2008 R2 server on Windows Server 2008 R2, with only one other tiny DB on it. The SQL Server has 16 GB RAM, with SQL's min and max server memory both set to 14000 MB.

Somehow the Symantec_CMDB data file had been set to restrict growth to 47 GB, and when it hit that limit last weekend it didn't seem to like it much. Since then we have removed the limit, but the sqlservr.exe process in Task Manager won't go above about 100 KB, even after a reboot. On every other system I've seen, memory usage slowly grows until the limit is reached.

So my console is now slow, random deadlocks happen all the time, clients keep getting "RequestPolicies failed" and Quick Apply drop-downs often time out with "the data could not be loaded".

Is there some other obscure SQL setting that might restrict memory usage? AWE is not set, but I understand that it isn't required on a 64-bit Windows OS; apparently the setting doesn't exist in SQL Server 2012 at all.
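One thing worth checking: Task Manager's figure for sqlservr.exe can be misleading, particularly if the service account has the "Lock pages in memory" privilege, since locked pages don't appear in the working set. A DMV query along these lines (a sketch, assuming sysadmin rights on the instance) reports what SQL Server itself thinks it is using:

```sql
-- Actual memory use as reported by SQL Server (2008+), independent
-- of how Task Manager accounts for locked pages
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_pages_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
```

If locked_pages_mb is large while Task Manager shows almost nothing, the memory is being used but hidden from the working-set column.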

Comments (3)

luke.s:

Could you please execute the following query and post the results here?

 

use master
EXEC sp_configure 'show advanced options', 1
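Presumably the point of enabling advanced options is to inspect the memory limits afterwards; a follow-up along these lines (a sketch) would show the configured values:

```sql
-- Make the change above take effect, then read the memory limits.
-- config_value is the configured setting; run_value is what is in force.
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';
```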
 
Regards,
 
Fábio Sanches

If the suggestion has helped to solve your problem, please mark the post as a solution.

IT Technical Manager | WTR Services | www.wtrservices.com.br

andykn101:

"Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install."

I've got an open call with Symantec Support; they want me to apply SP2 to SQL Server 2008 R2. They've seen rapidly growing database problems with SP1, but I don't understand why it would start suddenly, as we hadn't just applied SP1 when the DB started to grow.

Authorised Symantec Consultant (ASC) with Endpoint Management Limited, an Authorised Symantec Delivery Provider based in the UK.

Connect Etiquette: Please "Mark as Solution" posts that fix your problem.

ShadowsPapa:

I'm no expert, I'm just getting my feet wet with SQL myself, but one of the first things I decided was not to set the min and max the same!

That setting is purely for the buffer work area. You need to allow RAM for the server OS and other processes, and SQL will take RAM beyond the max you set for other events and processes. I wouldn't raise the minimum, because SQL won't cooperate as nicely with the server OS if memory is needed elsewhere.

I would set the max for the buffer, which is all that setting controls, but leave the minimum at 0. SQL will take RAM as it needs it, up to the buffer limit you set, and it won't readily release it. But SQL and the OS need RAM outside of that as well.

We have 12 gig on our server - two SQL instances, instance 1 and instance 2 I'll call them.
Instance 1 is for our in-house application. Instance 2 is for SEPM as well as some other apps - Solar Winds network utilities, our logging server, etc.
I have instance 1 set to just under 4 gig, instance 2 set to 4 gig, leaving 4 gig for the OS and other SQL processes and services, the complex things it needs to do. Things have settled down a lot.
I have a buffer cache hit ratio of 100% or just under 100% on both instances - this means requests for data by our users (and me!) are coming from the buffer and not the disks (a SAN in our case).
SQL does cooperate with the OS: if the OS is choking, it will release memory, but then SQL processes can slow down too. The buffer area, though, is owned by SQL.

What SQL does with that buffer - the memory space you have set - is cache database requests. Ideally, if you check the buffer cache hit ratio, you want over 90%: of all the requests for information from SQL, the user gets over 90% from the buffers and SQL doesn't have to read the disks to find it. If you have a 4 GB database and set the memory at 4 GB, SQL will ultimately cache the entire database; everything requested comes off the disk once, but any later request for that SAME information is pulled from the buffer area for other people. But SQL needs other RAM as well for other operations: if there's a complex chore to run, it builds the procedure or steps to handle that task and will keep that in memory if there's enough memory.
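The buffer cache hit ratio mentioned above can be read from the performance-counter DMV; something like this (a sketch - the raw counter has to be divided by its "base" counter to get a percentage):

```sql
-- Buffer cache hit ratio: raw counter divided by its base counter
SELECT CAST(a.cntr_value AS FLOAT) / b.cntr_value * 100.0
         AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.object_name LIKE '%Buffer Manager%';
```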
Assume a server with 16 GB and SQL min and max both set to 14 GB: SQL has got that 14 and it won't let go. So what is the OS using, what are the other processes using, and what is the console using if you run the console directly on the server?
I'd set the buffer minimum back to 0 and set the max down to the point where your buffer cache hit ratio starts to drop into the low 90s, or low enough to leave memory for the OS and other SQL operations, and then not worry about that memory setting again unless you add more memory to the server.
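Changing those limits can be scripted rather than done through Management Studio; for example (a sketch, with 12288 MB as an illustrative max, not a recommendation for this server):

```sql
-- Set min server memory to 0 and cap the buffer pool at 12 GB.
-- 'show advanced options' must be enabled for these settings to be visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 0;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```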

At least that's how I understand it...........