Collectively, we humans create about 2.5 quintillion (US)¹ bytes of data per day; much of this data is created "passively," through the myriad data collection and data retrieval sources that we encounter daily. It's not surprising when you consider the growing number of data touch points we have today: sensors, retail checkout registers, social media, RFID tags ... you name it; it's collecting data. And these devices and sensors continue to proliferate daily.
Just how large is 2.5 quintillion bytes of data?
Take a minute to digest that: 2.5 quintillion bytes of data. Per DAY. (1 quintillion = 1 followed by 18 zeros, or 1 x 10^18.) That's equal to 2.5 trillion million, or 2.5 billion billion. Wow!
Even to the casual observer, it's probably glaringly obvious that there's an awful lot of data housed in the cloud and on countless servers around the globe. How do we harness this tsunami of data, make sense of it? How can it be "tapped" - sliced, diced, pureed - to yield a strategic advantage for those who have taken the time to explore its vast potential?
But "Big" Means More Than Byte Count
In our previous blog post, we introduced our take on the "hot topic" of Big Data. It's a term so ubiquitous - in the trade press and in consumer media - that sometimes there's a sense that its users don't truly understand that "Big" in this context goes beyond mere file size or the number of collected records. It also takes into account the analytic potential of the data: How can the data be leveraged to help the end user most effectively drive strategic business decisions or increase revenue?
As we explained in the previous post, Big Data may consist of high-volume, high-velocity and high-variety sets of information that can be used for Business Intelligence (BI), especially when traditional data analytic techniques are lacking or insufficient. To maximize its strategic value, Big Data requires expert handling via cost-effective, innovative forms of information processing; more astute processing and analytics will - unsurprisingly - typically yield enhanced insight and decision-making.
"Uniformity" of Data
In order to take full advantage of our abundance of data sets, the first thing we need to do (once we've convinced our clients to buy in to the practice of true "data aggregation") is to stop thinking of them as separate data sets. Now, to our minds, we have one - albeit huge - mass of data. A prime challenge of that construct is, of course, ensuring that the format of the data is compatible across the board, so that if I want to sort my data on, say, the age of the individual from whom the data was collected, I can quickly ascertain where in each data set the "age" information is housed. And if I want to determine collective satisfaction with customer service, then I'd better be sure that the customer satisfaction questions are asked the same way across all questionnaires, and that the same ratings scales are used (e.g., 1 to 7; 0 to 10) across the board, as well.
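The harmonization steps described above - aligning column names across data sets and mapping different ratings scales onto one common range - can be sketched in a few lines of pandas. This is a minimal illustration, not our production pipeline; the column names, scale ranges, and sample values are invented for the example.

```python
import pandas as pd

# Hypothetical survey extracts: column names and ratings scales differ per source.
survey_a = pd.DataFrame({"age": [34, 51], "csat_1to7": [6, 3]})
survey_b = pd.DataFrame({"respondent_age": [28, 45], "csat_0to10": [9, 4]})

def rescale(series, old_min, old_max, new_min=0, new_max=10):
    """Linearly map a ratings scale onto a common 0-to-10 range."""
    return (series - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

# Harmonize column names and scales, then aggregate into one data set.
a = survey_a.rename(columns={"csat_1to7": "csat"})
a["csat"] = rescale(a["csat"], 1, 7)
b = survey_b.rename(columns={"respondent_age": "age", "csat_0to10": "csat"})
# (survey_b already uses the target 0-to-10 scale, so its values are kept as-is.)
combined = pd.concat([a, b], ignore_index=True)

# With one uniform data set, sorting on "age" - or any shared field - is trivial.
print(combined.sort_values("age"))
```

Once the columns share names and scales, questions like "collective satisfaction with customer service" become a single `combined["csat"].mean()` rather than a per-questionnaire reconciliation exercise.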
But just how can you begin to leverage the "Big Data" that is available today, using the tools that are available to you today? Beginning in this post, and continuing in the two posts that follow, with a little help from Gartner analyst Doug Laney we will provide examples of how various client companies have had the vision to look beyond Big Data as a buzzword, and delve into actually using it to gain a strategic advantage over others within their market space.
Case Summary: American Express
Company: American Express
Market Sector: Financial Services
American Express Company (AmEx) had been relying on traditional "hindsight reporting", trailing indicators, and business intelligence initiatives to continue to grow its business. The company had, however, become increasingly frustrated with what it perceived as the inherent limitations of these research methods. For all of the human and financial resources being directed at revenue growth, there was insufficient return-on-investment (ROI) and a perceived dearth of accuracy while using these methods.
So AmEx started looking for indicators that could more accurately predict loyalty amongst its customer base, and turned its focus to Big Data and predictive forecasting. To this end, AmEx and its research partners developed sophisticated predictive models that analyzed historical transactions and used 115 variables to forecast potential churn.
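AmEx's actual models are not public, so as a purely illustrative sketch, here is what churn scoring of this kind can look like with a simple logistic regression. The synthetic data, the five stand-in features (versus AmEx's 115 variables), and the "top 100 accounts" cutoff are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for historical transaction features. A real model
# would use many behavioral variables; we fabricate 5 for illustration.
n_accounts, n_features = 1000, 5
X = rng.normal(size=(n_accounts, n_features))

# Hypothetical ground truth: accounts with low values on feature 0
# (say, declining spend) are more likely to churn.
churn_prob = 1 / (1 + np.exp(2 * X[:, 0]))
y = rng.random(n_accounts) < churn_prob

# Fit the model on historical outcomes, then score every account.
model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # estimated probability of churn

# Flag the accounts most likely to close next quarter for retention outreach.
at_risk = np.argsort(risk)[::-1][:100]
print(f"Flagged {len(at_risk)} accounts for retention outreach")
```

The strategic value comes from acting on the scores: the flagged list feeds the kind of proactive retention measures described below, before the accounts actually close.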
As a result of this research, the company identified 24% of its Australian accounts that it believed would - based on the predictive models - close within the next quarter, and it was able to take measures to try to retain those accounts. It was also able to take steps to more proactively attract new accounts in that market to balance out any attrition.
For more on this topic, check out Part 2 of this blog series on turning "Big Data" from the buzzword du jour into a strategic tool!
For more information pertaining to Big Data - or any other market research-related issue or technique - please contact us or visit nebu.com/nebu-data-hub.
¹NOTE: Due to divergent naming conventions, in US, Canadian, and modern British usage, a quintillion = 1 x 10^18; in traditional British usage and in Continental Europe, however, a quintillion = 1 x 10^30. Either way, it's a BIG number!
Photo: Data Slide by user bionicteaching on Flickr.