Tresata Goes Deep on Big Data for Banking


It’s a question that all new software vendors face: Should we deliver a horizontal, customizable platform that appeals to multiple industries, or focus on a single vertical with a highly specialized software suite and do it better than anybody else?

Tresata has decided to go with the latter option. As I described in detail in a previous post, the North Carolina-based start-up has developed a Big Data-as-a-Service offering to deliver consumable insights to its clients (namely banks, financial data companies and other financial services companies) via a cloud-based platform.

But in addition to its cloud-based delivery, Tresata is also differentiating itself with its singular focus on Big Data Analytics for financial firms. It is the first company to do so by building its entire analytics platform on Hadoop, one of the hottest Big Data technologies today.

Tresata founder Abhishek Mehta, widely recognized as a Big Data pioneer in banking and financial services, believes taking a vertical approach to Big Data Analytics is the surest and quickest way to deliver value to customers.

Each vertical is different – hell, each company is different – and developing the algorithms and other analytic processes necessary to answer vertical-specific queries is not a trivial task. In fact, you need deep domain knowledge to do it well as both the key performance metrics and semantics of the data vary widely from industry to industry, Mehta told me in a recent interview. Propensity modeling for car loan applicants, for example, requires a different approach from propensity modeling for home mortgage applicants.
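To make the distinction concrete, here is a minimal, purely illustrative sketch of vertical-specific propensity scoring. The feature names, weights and logistic form are my own assumptions for illustration, not Tresata's actual models; the point is simply that auto-loan and mortgage models draw on different features.

```python
import math

# Hypothetical feature weights: each loan product weighs different
# applicant attributes (all values here are made up for illustration).
AUTO_LOAN_WEIGHTS = {"credit_score": 0.004, "debt_to_income": -2.0, "vehicle_age": -0.1}
MORTGAGE_WEIGHTS = {"credit_score": 0.005, "debt_to_income": -3.0, "loan_to_value": -1.5}

def propensity(applicant: dict, weights: dict, bias: float = -2.0) -> float:
    """Logistic propensity score; applicant features with no weight are ignored."""
    z = bias + sum(w * applicant.get(f, 0.0) for f, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"credit_score": 700, "debt_to_income": 0.3, "vehicle_age": 4}
auto_score = propensity(applicant, AUTO_LOAN_WEIGHTS)
mortgage_score = propensity(applicant, MORTGAGE_WEIGHTS)
```

The same applicant scores differently under each model because the feature sets and weights diverge, which is Mehta's point about why domain knowledge is baked into the model itself.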

To that end, Tresata has developed what it calls ‘Analytics Containers’: pre-built data standards, algorithms and queries targeted at highly specific financial services business problems. The containers are built on top of Tresata’s massively parallel Data Assembly Line, which can ingest, process, merge and score data at the individual unit level and produce a single view of the customer at scale.
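The ingest-merge-score flow described above can be sketched in miniature. In Tresata's case these stages run as distributed Hadoop jobs over massive datasets; the sequential version below, with made-up record shapes and an illustrative scoring rule, only shows the shape of the pipeline.

```python
# Hypothetical records from two upstream sources (shapes are assumptions).
records_source_a = [{"cust_id": 1, "balance": 5000}, {"cust_id": 2, "balance": 120}]
records_source_b = [{"cust_id": 1, "txn_count": 42}]

def merge_by_customer(*sources):
    """Merge records from multiple sources into one view per customer id."""
    view = {}
    for source in sources:
        for rec in source:
            view.setdefault(rec["cust_id"], {}).update(rec)
    return view

def score(customer: dict) -> float:
    # Illustrative scoring rule, not Tresata's actual algorithm.
    return customer.get("balance", 0) / 1000 + customer.get("txn_count", 0) / 10

# "Single view of the customer": one merged record per customer, then scored.
single_view = merge_by_customer(records_source_a, records_source_b)
scores = {cid: score(cust) for cid, cust in single_view.items()}
```

Each stage here (merge, then score) would be a parallel job in the real system; the single-view dictionary stands in for the consolidated customer record the Data Assembly Line produces.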

Another factor is the pain threshold for false positives. The acceptable margin for error in Big Data Analytics is significantly smaller when you’re talking about non-Web 2.0 use cases. Bill Schmarzo, EMC’s head of information management consulting, made a similar point when we spoke at Strata recently. If Google serves up the wrong ad to a user, for example, the marginal cost is zero. If a financial firm stakes a position in a volatile market due to bad analysis, the cost could be millions of dollars. So you need someone building your analytic models who understands the nuances and priorities of your particular business.

That’s where Tresata has an edge on competing analytics platforms from the likes of Karmasphere. Other Hadoop-based analytic offerings on the market take a horizontal approach. Karmasphere’s analytics development platform, for example, isn’t tailored to any particular vertical market but is meant to appeal to Data Scientists in any market looking to build applications on top of Hadoop. That approach requires Karmasphere customers to either hire or outsource teams of Data Scientists and developers with experience in their particular industry – no small task considering the thin ranks of Data Scientists these days.

Karmasphere just secured another $6 million in funding, so VCs obviously see potential in its platform and approach to Big Data analytics, as do I. What I like about Karmasphere’s platform is that it is more scalable than a specialized, vertical tool. I can see Karmasphere selling to VARs that have vertical industry expertise and need a Hadoop platform to build analytic applications they will then sell to the enterprise. So a retail-focused VAR could just as easily use Karmasphere’s platform as a healthcare-focused VAR to build Big Data applications – assuming each has the internal expertise and talent to work with it. But developing new applications via Karmasphere will take time.

ServicesAngle

Tresata’s combination of Big Data-as-a-Service delivery and vertical expertise puts it in a better position to deliver value to its customers today rather than next month or next year. And in a market developing as quickly as the Big Data Analytics space, that is definitely a substantial go-to-market advantage. 

About Jeffrey Kelly

Jeffrey F. Kelly is a Principal Research Contributor at The Wikibon Project, an open source research and advisory firm based in Boston. His research focus is the business impact of Big Data and the emerging Data Economy. Mr. Kelly's research has been quoted and referenced by the Wall Street Journal, the Financial Times, Forbes, CIO.com, IDG News, TechTarget and more. Reach him by email at jeff.kelly@wikibon.org or Twitter at @jeffreyfkelly.