It was during this year's VISION conference in Las Vegas that I got a very interesting question from Mike. He was running my Flexible Storage Sharing (FSS) lab and asked me: “So Carlos, with FSS I can use internal HDDs and provide a highly available service, commoditizing the hardware and avoiding any need for a SAN?” Then I started talking about the work I have been doing in the lab over the last few months, and that was when I realized I had to start writing and publishing about it, so here we go.
During the FSS deployment we were all very excited about the ability to bring any application's data closer to the CPU, especially when using internal SSDs. We worked very closely with Intel and we published the white paper Remove the Rust: Unlock DAS and go SAN-Free. That white paper described how I could increase the performance of a database by four times. But what happens if I do not want to increase performance, but simply want to commoditize by using internal storage and reduce my Total Cost of Ownership (TCO)?
A few months back we got some new servers in the lab, each with 25 internal HDDs and one flash card inside the server. With FSS I could build an environment providing high availability for any service running on top. Therefore my first step was to create a basic configuration similar to what we did in the white paper mentioned above.
The difference here is that I am going to be using internal HDDs for both data and redo logs. To accelerate performance, the internal flash card will be used with SmartIO. I want to be very clear here: the next comparison may not be entirely fair, because in the SAN environment I had different servers, almost six times more SGA, but also a bigger database (1.4TB versus 700GB).
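For readers who want to picture what that basic configuration looks like, here is a rough sketch of the kind of commands involved. Disk names (disk_1, disk_2, ssd_1) and the disk group name fssdg are placeholders I made up for illustration, and the exact syntax may differ by InfoScale version, so treat this as an outline rather than a recipe:

```shell
# Export the local DAS disks so other cluster nodes can see them over
# the network (this is what FSS does with internal storage).
vxdisk export disk_1 disk_2

# Create a shared disk group from those exported internal disks;
# volumes built on fssdg stay available if a node fails.
vxdg -s init fssdg disk_1 disk_2

# Use SmartIO to turn the internal flash card into a read cache
# that accelerates the database I/O on the HDD-backed volumes.
sfcache create ssd_1
```

With something like this in place, the database volumes live on the internal HDDs, the redo logs sit on the same commodity storage, and SmartIO keeps the hot blocks on flash.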
But my goal was to see what performance I could get from this new environment and compare it with something very similar we had done previously. The results are very interesting. In the SAN environment I was able to get around 81K transactions per minute. And just to be clear again, that was the best performance I got across many runs; it was difficult to get the same number twice. Our IT guys told me many times: “Carlos, you are not the only one using the SAN!” With my new environment, on the other hand, I have been able to get 77K transactions per minute run after run.
The interesting thing is that this new architecture is 60% cheaper (including hosts, storage, interconnects, and software licenses) than the SAN one. My next step is to grow it into a three-node cluster, where I can use my spare disks to run one database instance on each of the servers. I will describe that in the next article.