I'm no expert, I'm just getting my feet wet with SQL myself, but one of the first things I decided was not to set the min and max the same!
Those settings are purely for the buffer work area. You need to allow RAM for the server OS and other processes, and SQL will take RAM beyond the max you set for its other events and processes. I wouldn't raise the minimum, because SQL won't cooperate with the server OS as nicely if memory is needed elsewhere.
I would set the max for the buffer (which is all you are setting), but I'd leave the minimum at 0. SQL will take RAM as it needs it, up to the limit you set for the buffers, and it will never release it. But SQL and the OS need RAM outside of that as well.
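If it helps, this is the standard way to set it in SQL Server with `sp_configure` (the 4096 MB figure is just an example value, adjust it for your box):

```sql
-- Make the memory settings visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer area at 4 GB (value is in MB);
-- leave 'min server memory (MB)' at its default of 0
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

Run it per instance if you have more than one, like we do.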
We have 12 gig on our server - two SQL instances, instance 1 and instance 2 I'll call them.
Instance 1 is for our in-house application. Instance 2 is for SEPM as well as some other apps - Solar Winds network utilities, our logging server, etc.
I have instance 1 set to just under 4 gig and instance 2 set to 4 gig, leaving 4 gig for the OS and for the other SQL processes and services - the complex things it needs to do. Things have settled down a lot.
I have a buffer cache hit ratio of 100% or just under on both instances - this means requests for data by our users (and me!) are coming from the buffer and not the disks (a SAN in our case).
SQL does cooperate with the OS and if the OS is choking, will release memory, but then SQL processes can slow down, too - but the buffer area is owned by SQL.
What SQL does with that buffer - the memory space you have set - is cache the database requests. Ideally, if you check the buffer cache hit ratio, you want over 90%. That means that of all the requests for information and stuff from SQL, the user gets over 90% of it from the buffers, and SQL doesn't have to read the disks and look for it.

If you have a 4 gig database and set the memory at 4 gig, SQL will ultimately cache the entire database: everything requested would first come off the disk, but any request for that SAME information would be pulled from the buffer area for other people. But it needs other RAM as well for other operations. If there's a complex chore it needs to run, it builds the procedure or steps to handle that task, and will keep that in memory if there's enough memory.
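You can check the ratio yourself straight from SQL instead of perfmon. A sketch (assumes SQL 2005 or later; the raw counter has to be divided by its "base" counter to get a meaningful percentage):

```sql
-- Buffer cache hit ratio as a percentage: the raw counter value
-- only means something when divided by its 'base' counter
SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```

If that number is sitting in the high 90s or at 100, your buffer setting is doing its job.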
Assume a server with 16 gig, with SQL's min and max both set to 14. It's got that RAM and it won't let go. So what is the OS using? What are the other processes using? What is the console using if you run the console directly on the server?
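One way to see how much the OS actually has left is this DMV query (needs SQL 2008 or later, as far as I know):

```sql
-- Rough check of physical memory from the OS's point of view
SELECT total_physical_memory_kb / 1024      AS total_mb,
       available_physical_memory_kb / 1024  AS available_mb,
       system_memory_state_desc             -- e.g. 'Available physical memory is high'
FROM sys.dm_os_sys_memory;
```

If `available_mb` is tiny and the state description says memory is low, the OS is getting squeezed.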
I'd set the buffer min back to 0. Then I'd set the max down to the point where your buffer cache hit ratio starts to drop into the lower 90s, or low enough to leave memory for the OS and other SQL operations, and leave it there. After that, don't worry about the memory setting again unless you add more memory to the server.
At least that's how I understand it...........