One of the casualties of the Big Data explosion is that we still have little idea what to do with all these data. In a world that allows us to collect data on click-throughs, purchases, weather patterns, traffic, etc., we can amass so much data that we’re drowning in it.
My friend and venture capitalist Bryce Roberts made this point recently, arguing that “Data, big, medium or small, has no value in and of itself. The value of data is unlocked through context and presentation.” He’s right, and the solution isn’t to graduate and hire a nation of data scientists. (Apparently we have a shortage.)
No, the solution is to make the data speak to average people, using familiar media like Twitter or Facebook.
I view this as similar to what Microsoft did for system administration and development. Microsoft may not have always played fair on its road to the top of the enterprise software heap, but much of its success stems from enabling average IT people and developers to be productive through things like Visual Studio.
Sure, we’re all above average. Or so we think.
But for those few, unlucky souls doomed to be average, well, the Big Data movement needs to lower the bar to entry and productivity.
This is one of the guiding themes at Nodeable. We believe strongly that the DevOps trend is real, and it means that a great many exceptional programmers are suddenly…below-average operations people. It’s not that they have low IQs. It’s just that doing operations well requires years of experience, like doing most anything well does. Developers, however, want to focus on building applications. They want 90 percent of their job to be “Dev” and 10 percent to be “Ops.”
So that’s what we do. We tame the “Big Data” in cloud systems to enable developers to visualize and control the complete software development lifecycle.
What to call this? We’re still not sure, but Dan Woods comes close in a recent blog post for Forbes:
Operational Big Data is about automating and streamlining the process of distilling huge amounts of data into a form that can support decision making and process execution….I call these vendors [EMC, IBM, etc.] operational because historically their main value claim is to increase the speed of processing, not the speed at which analysts work. All of these vendors know that they will succeed more if they can make their products less complex and more agile, and some, like EMC Greenplum through its Chorus product, are attacking the issue head on. All of these operational vendors support a process of discovery, but usually it is one that is far more intermediated than the hands-on experience of using something like Splunk or 1010data. The data visualization and exploration vendors Qlikview, Tableau, and TIBCO Spotfire have all been helping bring agility to the operational data vendors.
I’m not sure I’d agree that these particular vendors are doing enough to make Big Data easy to understand and actionable, but I like the term “Operational Big Data.” And I agree with the thrust of his argument that the vendors who succeed at making complex data easy to visualize and take action against will be big winners.
That’s what we do at Nodeable. We make systems data as easy to read and work with as a Twitter stream, which is considerably easier to grok than IBM’s Netezza or other products from the traditional enterprise software crowd. They’re still so concerned with how “big” they can make the data that they’ve overlooked where the real value lies: making it “small,” that is, easily understood by someone who didn’t get a PhD in data science.