We live in a world of Big Data, with ever more computerized systems spitting out massive quantities of it. While it’s theoretically nice to have all that data, the real trick is putting it to work. Which means, as I’ve written before, we need to figure out how to make Big Data “small.”
It turns out that this isn’t very easy.
Which is probably why über-investor Vinod Khosla points to the concept of “Data Reduction” as one of the huge, still largely untapped opportunities in technology. By Data Reduction he means “Reducing, filtering and processing data streams to deliver the information or action that is relevant to you.” It is increasingly clear that our problem isn’t creating abundance (of data, open-source code, or many other things), but rather parsing this abundance so that it’s relevant and useful. This is as true for Facebook as it is for Red Hat.
Or for your average enterprise IT professional. Even the act of managing an enterprise’s IT is increasingly a Big Data concern. As The 451 Group analyst Rachel Chalmers notes:
The way we build and manage infrastructure is about to change…. Machines can scale, but owing to the time-consuming and difficult-to-automate chores of raising and educating human children, the pool of people talented enough to manage them scales only by a ten- or twenty-year lag. Hence, the rise of next-generation, cloud-scale performance management, capacity planning, and intelligent workload-placement systems.
This becomes easier as systems gather a broad body of data, compare it against a particular user’s performance, highlight the deltas, and suggest ways to improve. This is what Nodeable and other companies are starting to do, and it will make developers and operations personnel much more effective in their respective roles.
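To make that concrete, here’s a minimal sketch of what such delta-highlighting might look like, assuming a simple z-score comparison of one user’s metrics against fleet-wide baselines. The metric names, sample values, and threshold are hypothetical illustrations, not Nodeable’s actual implementation:

```python
from statistics import mean, stdev

# Hypothetical fleet-wide samples for each metric, gathered across many deployments.
baseline = {
    "api_latency_ms": [110, 95, 130, 102, 98, 115, 120, 105],
    "error_rate_pct": [0.4, 0.6, 0.3, 0.5, 0.4, 0.7, 0.5, 0.4],
}

# One user's current readings for the same metrics.
user_metrics = {"api_latency_ms": 240, "error_rate_pct": 0.5}

def highlight_deltas(baseline, user_metrics, threshold=2.0):
    """Flag metrics where the user deviates from the fleet baseline
    by more than `threshold` standard deviations (a z-score test)."""
    flagged = []
    for name, value in user_metrics.items():
        samples = baseline[name]
        mu, sigma = mean(samples), stdev(samples)
        z = (value - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append((name, value, round(mu, 1), round(z, 1)))
    return flagged

for name, value, avg, z in highlight_deltas(baseline, user_metrics):
    print(f"{name}: {value} vs. fleet average {avg} ({z} sigma) -- investigate")
```

Running this flags the latency reading (roughly 11 sigma above the fleet average) while ignoring the in-range error rate. Real systems layer on time windows, seasonality, and ranked recommendations, but the core move is the same: filter a flood of measurements down to the handful of deltas worth acting on.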
Ultimately, the company that delivers more signal and less noise will win.