As hyped as the cloud is today, it may become even more so with the introduction of two (relatively) new technologies: 10Gb network connectivity and Solid State Drives (SSDs).
Hardware has improved immensely in the last few years. A key development in the field is more CPUs/cores per server, giving today’s standard servers vast processing power. This is amplified by Intel’s latest Nehalem architecture, which enables a much higher level of parallelism thanks to its redesigned cache hierarchy and higher core count per CPU. Standard I/O capabilities have also greatly improved, particularly with high-bandwidth, fairly low-cost solutions like SATA. In addition, hard drives have grown significantly in capacity and can now provide terabytes of storage on a single drive. Lastly, energy efficiency has improved considerably, with lower power consumption across the board.
These key improvements in hardware, which enable users to get even more from their infrastructure, have also found their way into the cloud – where they are used effectively to power this environment.
In its current state, cloud infrastructure has two main bottlenecks that significantly impair cloud services, especially distributed services:
1. Network Bandwidth:
A 1Gb network is commonly used in most interfaces today. Even if multiple network interface cards are used on a single server, their combined bandwidth is still far lower than desired. The relatively low bandwidth in and out of the server prevents optimal utilization of the server’s capacity. After all, most applications are I/O intensive, not computation intensive. For example, a four-core Intel CPU can easily sustain a processing throughput of 40Gb/sec, yet this power cannot be demonstrated on top of a 1Gb network. (I highly recommend reading this paper from Intel, on a software-switch prototype demonstrating how fast CPUs are today.)
To unleash the power of the server, clouds must provide multiple 10Gb connections as standard. Cloud applications by nature require extensive network use. Once the STANDARD shifts from a 1Gb to a 10Gb network interface, the cloud as a whole will enjoy better connectivity and will be better equipped to deliver on its major promises.
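A quick back-of-envelope calculation makes the gap concrete. Using the ~40Gb/sec CPU figure from above (the function name here is just for illustration), we can count how many interfaces it takes to keep the CPU fed:

```python
# Back-of-envelope: how network bandwidth caps an I/O-bound server.
# Figures from the text: a modern four-core CPU can process ~40Gb/sec,
# while a standard network interface moves 1Gb/sec.

CPU_GBPS = 40   # approximate processing throughput of a four-core CPU
NIC_1G = 1      # standard 1Gb interface
NIC_10G = 10    # 10Gb interface

def nics_needed(cpu_gbps, nic_gbps):
    """Interfaces required to saturate the CPU (ceiling division)."""
    return -(-cpu_gbps // nic_gbps)

print(nics_needed(CPU_GBPS, NIC_1G))   # 40 x 1Gb NICs - impractical
print(nics_needed(CPU_GBPS, NIC_10G))  # 4 x 10Gb NICs - feasible
```

Forty bonded 1Gb links per server is not realistic; a handful of 10Gb links is, which is why the standard has to shift.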
The following illustration shows how CPU and memory are utilized depending on network connectivity. To clarify, this assumes I/O is done through the network, such as when using Amazon’s EBS for example.
The illustration shows 1Gb network connectivity fully utilized for networking and I/O, while CPU and memory bandwidth remain under-utilized. The result is an unbalanced server: networking and I/O are maxed out while the CPU and memory sit partly idle.
With 10Gb connectivity fully utilized for networking and I/O, CPU and memory bandwidth reach 100% utilization – a well-balanced server that can easily exploit the full potential of all of its components.
Another major pain in the cloud is large-scale persistent storage and the way it deals with random access patterns, or in short – hard drives…
Those old beasts are not suitable for one of the most common data use cases – a database.
Despite the improvements in hardware, hard drives for the most part still exhibit their inherent limitations of seek time and read/write serialization, resulting in poor support for random access. Hard drives offer no internal parallelism, and they are severely limited by their mechanical structure, which imposes a delay of ~10ms on every move of the heads across the magnetic platters. This matters less for use cases like video streaming (which make good use of a hard drive’s high capacity and high throughput in sequential read patterns), but it cripples the heavy random read/write operations that databases routinely require.
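The seek-time arithmetic above can be sketched in a few lines, assuming (as the text does) ~10ms per head move and small 1KB database records:

```python
# Sketch of why a mechanical drive is seek-bound on random access.
# Figures from the text: ~10ms per head move, 1KB records.

SEEK_MS = 10          # average mechanical delay per random access
RECORD_BYTES = 1024   # a small database record

seeks_per_sec = 1000 / SEEK_MS             # ~100 random operations/sec
throughput = seeks_per_sec * RECORD_BYTES  # effective bytes/sec

print(f"{seeks_per_sec:.0f} random reads/sec")
print(f"{throughput / 1024:.0f} KB/sec effective throughput")
```

At roughly 100 random operations per second, the drive's multi-MB/sec sequential bandwidth is irrelevant – the heads, not the platters, set the ceiling.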
SSD is an evolving technology that resolves the random access problem while still providing large-scale storage capacity and long-term data persistence. SSDs are purely electronic, with no mechanical parts to cause delays. This makes them a perfect fit for databases, even though they are still much more expensive than standard hard drives per GB (the price ratio today is about 1 to 5). However, when considering the overall cost and value of the technology, and its effect on server utilization and performance, this price may not be as high as it seems. Another significant advantage is that no software changes are needed to benefit from SSDs, as they use the same interface as standard hard drives, namely SATA.
The effective throughput of SSD for random data access can reach 30-50MB/sec. Contrast that with a standard hard drive’s effective throughput of about 100KB/sec when reading 1K records from random locations on the drive. And while SSDs keep improving, hard drives’ random-access performance has improved very little in the last 20 years.
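Putting the price and throughput figures from the text together (SSDs at roughly five times the per-GB price, but 30-50MB/sec random throughput versus ~100KB/sec for a hard drive) gives a rough value comparison:

```python
# Rough value comparison using the figures from the text.
# SSDs cost ~5x more per GB, but deliver ~300-500x the random throughput.

SSD_PRICE_FACTOR = 5         # ~5x the per-GB price of a hard drive
HDD_RANDOM_MBPS = 0.1        # ~100KB/sec on random 1K reads
SSD_RANDOM_MBPS = (30, 50)   # 30-50MB/sec random access

for mbps in SSD_RANDOM_MBPS:
    speedup = mbps / HDD_RANDOM_MBPS
    per_dollar = speedup / SSD_PRICE_FACTOR
    print(f"{speedup:.0f}x faster, "
          f"{per_dollar:.0f}x random throughput per dollar of capacity")
```

Even at five times the price, the SSD comes out 60-100x ahead on random throughput per dollar – which is exactly the metric that matters for a database.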
The following illustration shows how CPU and memory are utilized when standard hard drives are used, compared to SSD.
Ten hard drives at full throttle (random access) barely scratch the surface of a server’s capacity.
Ten SSDs bring server utilization to a very reasonable level, leaving room for additional tasks, such as web apps, to run effectively on the same server.
With these two improvements in place, the cloud can truly be exploited in an effective and efficient manner. Their impact on critical components of the software stack – such as distributed cloud databases – is immense.
Bottom line – 10Gb network and SSDs are an absolute necessity in an advanced cloud environment, leading to improved resource utilization and further adoption of the cloud.
10Gb networks and SSDs are hopefully coming soon to a cloud service provider near you!