Intel is a company driven by Moore’s Law, with a culture that eats, drinks, and breathes innovation; the company is always looking ahead, continuously moving forward, and constantly pushing itself to the next level. This doesn’t happen by accident – it is driven by the mission of a company organized and managed by leaders who embrace that mission and strive to uphold it.
HP Discover is a conference that attracts some of these bright and shining stars from across the industry, and the 2012 conference in Las Vegas was no different. During this year’s conference, theCube, hosted by SiliconANGLE founder John Furrier and Wikibon analyst Dave Vellante, spent some time with Kim Stevenson, Vice President, Information Technology Group, and CIO at Intel. The main topics of discussion: Big Data and the Cloud. During the session, Stevenson and Furrier exchanged a number of powerful real-life use cases.
Big Data was a key element of Stevenson’s keynote earlier in the day, and she provided Furrier with her own definition of the term: “Big Data is all information created (machine-generated and human-generated) – all of this information fits into the Big Data envelope.” Stevenson added that “the important parts of Big Data are the pieces we have failed to contextualize in a systematic way up until now.” Essentially, we can think of Big Data as what we previously referred to as business intelligence, taken to the next level and incorporating more types of data from more sources.
Of course, deciding what to collect, when, and from which devices and locations can be a daunting, even overwhelming process. The goal should be to define the collection criteria based on the business decisions the organization is trying to drive forward. Stevenson suggests that organizations take the time to “look for things that create inflections in the business.” In other words, organizations should focus on the areas where the business will benefit from decisions made with precision, accuracy, and real-time intelligence.
Most organizations have successfully tackled structured data collection, and some have even overcome some of the challenges associated with structured data analysis. Unfortunately, the methods used rely on data stores that were built “the old way.” Big Data requires a new look at how we collect and analyze information, and “the true power and incremental value comes from layering unstructured, contextual information on top of this structured data,” claimed Stevenson. “Bring in the data, assess what’s important, and get out.”
Intel did just that for both its supply allocation and its revenue forecasting processes. “We took it even further as we evolved to fully automated factories. As a wafer moves through the factory, all process data from one manufacturing step is transferred to the tool at the next step. That process data becomes the baseline for the processes moving forward, as opposed to assumptions being made or decisions based on what was supposed to happen. At the end of production, nearly 750TB of data can be associated with each wafer,” added Stevenson.
Big Data analytics directly and significantly improves the yields in Intel’s manufacturing process by allowing Intel to move to a predictive model: when the company starts to see any degradation in the process, it has the opportunity to repair the problem before it becomes a real, business-impacting incident. This same proactive, preemptive model of incident aversion can also be seen in the security space, with the likes of EMC/NetWitness/Greenplum, HP/ArcSight, and Splunk combining Big Data with security management and security incident response.
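The article doesn’t describe how Intel’s predictive model actually works, but the core idea – flag a drifting process metric before it crosses a hard failure limit – can be sketched with a simple rolling-baseline test. Everything below (function name, window sizes, threshold) is a hypothetical illustration, not Intel’s method:

```python
# Hypothetical sketch of predictive degradation detection: compare the most
# recent process readings against a rolling baseline and flag drift early,
# before it becomes a yield-impacting failure.
from statistics import mean, stdev

def is_degrading(readings, window=20, recent=5, sigma=3.0):
    """Flag degradation when the mean of the last `recent` readings drifts
    more than `sigma` standard deviations from the baseline formed by the
    `window` readings immediately before them."""
    if len(readings) < window + recent:
        return False  # not enough history to judge
    baseline = readings[-(window + recent):-recent]
    mu, sd = mean(baseline), stdev(baseline)
    if sd == 0:
        sd = 1e-9  # avoid divide-by-zero on a perfectly flat baseline
    drift = abs(mean(readings[-recent:]) - mu) / sd
    return drift > sigma

# A stable metric followed by a sudden upward drift:
stable = [100.0 + (i % 3) * 0.1 for i in range(20)]
drifting = stable + [103.0, 103.5, 104.0, 104.2, 104.5]
```

On the `drifting` series the recent mean sits well outside three standard deviations of the baseline, so the check fires while the drift is still small enough to act on – the same early-warning pattern the security tools mentioned above apply to event streams.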
Switching gears in the discussion, the hosts and guest began to exchange real-life examples of social data and crowdsourcing as part of Big Data analytics. The key takeaway from this part of the discussion: organizations have a choice to make – they can either take the lead in Big Data or be disrupted. Clearly, it is best to take the lead.
Prompted by a question from Stevenson, Furrier shared that SiliconANGLE is a business built on Big Data, using the information and supporting technologies to reduce business risk: the team improves the accuracy of its stories based on information gleaned from the data generated by its 2.5 million users. That information drives predictive analytics, which in turn identify leading trends, the best topics to cover, and the most appropriate articles to write. The team has a dashboard that provides a detailed, real-time view into what’s being consumed and discussed by the target audience at any given moment. “This real-time social crowdsourcing model allows us to include the crowd in the production process,” said Furrier. “Where other publications simply deliver content to be consumed, SiliconANGLE can create, deliver, follow up, and reiterate, all based on what is happening in the community.”
Stevenson responded in kind by sharing two recent scenarios in which Intel’s bleeding-edge innovation is evident, both of them rooted in the Cloud.
The first use case was an in-house virtualized office/enterprise application store for Intel employees: Intel moved its office and enterprise application-provisioning services to an environment that is 75 percent virtualized and in the Cloud. “This allows us to provision our services in under an hour for all of our employees,” said Stevenson.
The second use case covered Intel’s product development and design engineering operations. Intel deployed a massive cloud-based compute infrastructure composed of 50,000 servers hung together in a grid (aka a “clustered cloud”). “This implementation dramatically improves the throughput time for every engineering job that happens at Intel,” said Stevenson.