The cloud has many virtues, but perhaps its biggest is scale: the ability to dial resources up or down to meet inbound demand on web applications or other infrastructure. It’s a problem that most developers, whether at startups or Fortune 500 behemoths, can only dream of having. But for the ill-prepared, scaling an application can be a nightmare.
Which is why Amazon Web Services has become such essential infrastructure for startups and enterprises alike. As Ryan Park, operations and infrastructure leader at Pinterest, declared earlier this month at AWS Summit:
Imagine if we were running our own data center, and we had to go through a process of capacity planning, and ordering hardware, and racking that hardware, and so on. It just would not have been possible to scale fast enough – especially with such a small team. Until about a month ago, I was the only operations engineer at the whole company.
Think about that for a minute. Here’s a web service that drew nearly 18 million visitors in February alone – a traffic level it took the company just nine months to reach. In a pre-AWS world, Pinterest would have employed scores of operations engineers to buy and manage hardware, plus the software to stitch it all together. No more.
Of course, managing the infrastructure is only part of the equation. Perhaps even more important is managing all the data that today’s businesses increasingly collect. The lingua franca of this Big Data movement is Hadoop, which lets companies crunch through massive piles of data to find actionable insights into how to run their businesses better. Hadoop’s importance in our data-hungry world is perhaps best articulated by Cloudera CEO (and Nodeable board member) Mike Olson:
In the old days, if you had a data problem, you would write a big check for a massive piece of hardware, and with any money left over you would buy some very expensive but powerful software. That box with software and data became your data temple, and your analysis and conclusions were done there.
There are problems with that approach. Data are growing so fast that it is now impossible for one box to hold all your data. You must spread your data across multiple servers and use software that can coordinate and operate across all those servers. Hadoop is the platform designed to do this. It is designed to solve the problems of today, not the problems of yesterday.
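Olson’s point – coordinating computation across many servers – is exactly what Hadoop’s MapReduce model encodes. As a minimal illustration (in Python, using Hadoop’s standard Streaming interface; the file names are just conventions), here is the canonical word count, split into a mapper and a reducer that each read from standard input:

# mapper.py – emit "word<TAB>1" for every word on standard input
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t1" % word)

# reducer.py – Hadoop sorts the mappers' output by key, so identical
# words arrive together; sum each run of counts and emit the total
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
        continue
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))
    current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))

Hadoop runs copies of these two scripts on every node in the cluster, shuffling and sorting the mapper output so each reducer sees all the counts for a given word together – the coordination Olson describes, handled by the platform rather than by you.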
Critically, Hadoop itself is increasingly moving to the cloud – and particularly to AWS. By some estimates, Hadoop jobs account for the majority of all AWS processing. With petabyte-scale data clusters increasingly common, shifting that burden of storage and processing to the cloud becomes essential.
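To make “Hadoop in the cloud” concrete, here is a hedged sketch of submitting the word-count job above to Amazon Elastic MapReduce with boto3, the AWS SDK for Python. The bucket names, region, instance types, and release label are illustrative assumptions, not details from anyone quoted here:

# submit_wordcount.py – launch an EMR cluster, run one Hadoop Streaming
# step, and shut the cluster down when the step finishes
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

response = emr.run_job_flow(
    Name="streaming-wordcount",
    ReleaseLabel="emr-6.15.0",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the step
    },
    Steps=[{
        "Name": "word count",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "hadoop-streaming",
                "-files", "s3://example-bucket/mapper.py,s3://example-bucket/reducer.py",
                "-mapper", "mapper.py",
                "-reducer", "reducer.py",
                "-input", "s3://example-bucket/input/",
                "-output", "s3://example-bucket/output/",
            ],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started cluster %s" % response["JobFlowId"])

The appeal is the same one Park describes: the hardware exists only for the minutes the job needs it, and nobody racked a single server.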
As more infrastructure and data processing moves to AWS, it becomes more and more important to analyze your AWS instances to track real-time trends (“Is my CPU running hot?”), make comparisons (“We’re running memory 25 percent lower than most companies – should we look to optimize?”), discover anomalies, and so on. That’s where Nodeable comes in.
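To make the “Is my CPU running hot?” question concrete: AWS already exposes per-instance metrics through CloudWatch, and a minimal sketch with boto3 might look like the following. The instance ID and the 80 percent threshold are hypothetical placeholders:

# cpu_check.py – pull the last hour of CPU utilization for one EC2
# instance and flag five-minute windows that averaged above 80 percent
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,              # one datapoint per five minutes
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "  <- running hot" if point["Average"] > 80.0 else ""
    print("%s  %5.1f%%%s" % (point["Timestamp"], point["Average"], flag))

A script like this answers one question about one instance; Nodeable’s pitch is that the trends, comparisons, and anomalies across your whole fleet should surface themselves.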
It used to be that the cloud was where enterprises dumped non-critical applications. Now the inverse is true. The cloud is the hub for mission-critical data processing. It’s where enterprises are running applications that need serious scale. And it just so happens to be where Nodeable shines. Nodeable surfaces actionable insights in an easy-to-grok, Twitter-like activity stream. Search tools like Splunk are nice, but Nodeable prefers to reveal those insights while you sip your tea or watch your daughter’s soccer game.
Why not sign up for our beta and give it a try?