There are file-based and database-centric applications of MapReduce in existence. Of course, the presumption is that your “problem space” can be split into distinct pieces and recombined without information loss. And not every problem is large enough, or mathematically suited, to this approach. But luckily data management is, by definition, perfectly well suited because, as a mathematician once told me, “every data management task can be broken down into two and only two activities: partitioning and equivalence”.
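That partition-and-equivalence duality is essentially the map/reduce split itself. Here is a minimal, illustrative sketch in Python (a toy word count, nothing vendor-specific and entirely my own assumption of how to frame it): the map phase partitions the input into keyed pieces, and the reduce phase recombines everything that is "equivalent", i.e. shares a key.

```python
from collections import defaultdict

def map_phase(documents):
    # Partitioning: emit one (key, value) pair per word in each document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Equivalence: recombine all values that share the same key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

counts = reduce_phase(map_phase(["a b a", "b c"]))
# counts == {"a": 2, "b": 2, "c": 1}
```

Real frameworks add the hard parts this toy omits: distributing the map calls across machines, shuffling pairs to the right reducer, and surviving node failures.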
Some folks think MapReduce is a modern breakthrough concept, but they’re wrong. The application of this algorithm to the management of large data is nothing new, as pointed out by Dr. Stonebraker in a 2008 posting. What’s “new” about MapReduce is that Google has popularized it. And the thought is, if Google can process and analyze the entire world via MapReduce, then clearly MapReduce must be the Holy Grail of monster data management. But Google has unique challenges (gargantuan data volumes), and some very impressive (and plentiful) gray matter at its disposal.
The interesting thing about this “divide and conquer” approach is that, although fairly easy to conceptualize, it’s incredibly hard to implement properly. The human brain is really not “wired” to think in parallel. Research has shown that even the best minds can juggle at most about seven items simultaneously (and only for short periods). To understand the intellectual challenges at play here, I strongly recommend watching this Google video series.
As I understand it, implementing MapReduce correctly and efficiently is probably as hard as mastering multi-threaded programming. And in twenty years, I have met three people who truly understood multi-threading, and two of them were Russian PhDs. I've had battle-tested architects tell me they would rather shave with broken glass than tackle the risk and difficulty of multi-threading (luckily, they weren’t designing operating or flight-control systems!). My point is, it takes some pretty special skill and talent to do it right. There's nothing inherently wrong with that, but it’s neither quick nor cheap.
So why then would database vendors race to support MapReduce? After all, dealing with and managing relational systems is complicated enough as it is. But at least people have been trained in the art for decades, and SQL is the lingua franca, so the pitfalls and solutions are well established. Additionally, Codd’s model guaranteed abstraction by separating the logical layer (SQL and a normalized schema) from the physical one (hardware and storage). MapReduce, by contrast, is necessarily heavy with non-standard, cross-layer implementation details. Clearly a step backward from the KISS principle (even a “major step backwards” if you buy into Dr. Stonebraker’s argument that MapReduce is offensive).
To learn more, I went to YouTube and checked out Aster’s video “In Database MapReduce Applications”. In it, I learned that graph theory problems (think: travelling salesman) are well suited to MapReduce but not to SQL. Examples included social networking (LinkedIn), government (intelligence), telecom (routing statistics), retail (CRM, affinities), and finance (risk, fraud). Pretty much anything that can be modeled using interconnected nodes. But a connection from one node to another is really a “relation”, and so clearly well suited to a “relational engine”. So I might have missed something.
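To illustrate that point: an edge list is nothing but a two-column table, i.e. a relation. The hypothetical Python sketch below counts each node's degree map/reduce-style, yet the very same answer is a one-line GROUP BY over an edges table in SQL, which is exactly why the "graphs need MapReduce" claim left me puzzled.

```python
from collections import defaultdict

def degree(edge_list):
    """Count node degrees from an undirected edge list (a two-column relation)."""
    # Map: each edge emits both of its endpoints with a count of 1.
    pairs = ((node, 1) for edge in edge_list for node in edge)
    # Reduce: sum the counts per node.
    deg = defaultdict(int)
    for node, n in pairs:
        deg[node] += n
    return dict(deg)

edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol")]
# degree(edges) == {"alice": 2, "bob": 2, "carol": 2}
```

In SQL terms this is roughly SELECT node, COUNT(*) FROM edges GROUP BY node (after unpivoting the two endpoint columns); the data model is relational either way.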
Aster’s implementation of MapReduce is “deep inside” their engine, from what I understand. One example I could find was yet another YouTube video called “In-Database MapReduce Example: Sessionize”. In it, Shawn Kung shows a MapReduce function being used inside a SQL statement to “sessionize” user IDs in a clickstream context. Aster also provides very basic how-tos on their website and blog. Clearly Aster is targeting this new MapReduce capability at the DBA side of its user base, and to me it looks a lot like leveraging UDFs. Aster’s conclusion: “we need to think beyond conventional databases.” I’m all for that!
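For the curious, sessionizing is conceptually simple, and the sketch below is my own illustrative Python take, not Aster's actual function (the 30-minute timeout and the tuple layout are my assumptions): partition the clickstream by user, then split each user's time-ordered clicks into a new session wherever the gap exceeds the timeout.

```python
from itertools import groupby
from operator import itemgetter

SESSION_TIMEOUT = 30 * 60  # assumed cutoff: 30 minutes, in seconds

def sessionize(clicks):
    """Assign a session number to each (user_id, unix_timestamp) click.

    Returns a list of (user_id, timestamp, session_id) tuples.
    """
    out = []
    # "Map"/partition step: group clicks by user (the partition key).
    for user, rows in groupby(sorted(clicks), key=itemgetter(0)):
        session = 0
        prev_ts = None
        # "Reduce" step: walk each user's clicks in time order,
        # starting a new session whenever the gap exceeds the timeout.
        for _, ts in rows:
            if prev_ts is not None and ts - prev_ts > SESSION_TIMEOUT:
                session += 1
            out.append((user, ts, session))
            prev_ts = ts
    return out
```

The appeal of the in-database version is that this per-partition, order-sensitive logic is awkward to express in plain SQL but trivial in procedural code.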
Next, I wanted to learn about Vertica’s implementation. Especially since Vertica’s own Dr. Stonebraker had initially nailed MapReduce pretty hard, as mentioned above. But Vertica’s new position seems to be that MapReduce is a-ok after all, provided it remains external and doesn't pollute the purity of the relational engine. I couldn't find much on YouTube or their website save for a press release dated 8/4/09 stating “With version 3.5, Vertica also introduces native support for MapReduce via connectivity to the standard Hadoop framework”. It seems the “scoop” on the Vertica/MapReduce wedding is best described in their corporate blog. Basically, Vertica is OK with “integrating” with, or connecting to, MapReduce (via Hadoop) but not “ingesting” it, if I understand correctly.
I was also able to glean some tidbits from Omer Trajman on Twitter, namely that Vertica supports Hadoop “adapters” which allow you to read from and write to the database (which is basically the press release). I wish I had more in-depth information about Vertica’s MR functionality, but even a basic search for the term on their overly busy website yields zero results and, unless I missed it, I couldn’t find any relevant webcasts either.
Greenplum was, if I am not mistaken, the first to support MapReduce, and it has the best MR resource online if you ask me: clear, detailed, and full of insight. Greenplum offers a merged, cooperative DBA/programmer approach. Programmers can write mappers and reducers in their language of choice, leveraging DBA-generated data sets as (and if) needed, and DBAs can use MR functions alongside SQL without (presumably) getting their hands dirty. There isn’t much to add to this excellent resource, so I won’t.
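In spirit, that division of labor might look like the hypothetical sketch below: the programmer supplies map_fn and reduce_fn (here in Python), a driver handles the grouping, and the input rows are whatever data set the DBA's SQL produced. None of these names are Greenplum's actual API; this is only my mental model of the contract.

```python
from collections import defaultdict

def map_fn(row):
    # Programmer's code: emit (key, value) pairs from one input row.
    region, amount = row
    yield (region, amount)

def reduce_fn(key, values):
    # Programmer's code: fold all values that share a key into one result.
    return (key, sum(values))

def run(rows):
    # Framework's job (a toy stand-in): group mapped pairs by key,
    # then hand each group to the reducer.
    groups = defaultdict(list)
    for row in rows:
        for k, v in map_fn(row):
            groups[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

# Rows as a DBA-prepared data set might deliver them (hypothetical schema).
rows = [("east", 100.0), ("west", 40.0), ("east", 60.0)]
# run(rows) == {"east": 160.0, "west": 40.0}
```

The DBA never sees the Python; the programmer never writes the SQL. Whether that hand-off works in practice is, of course, the organizational question raised below.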
(2) You don’t have heavy legacy systems. Integrating (or migrating) existing business and relational code with a new-breed MR-enabled engine can’t be fun, quick or cheap. You might be one of the lucky few with pet projects on the table.
(3) You’re in academia and have access to numerous cheap and competent programming resources, lots of metal, plenty of time, and limited pressure to succeed.
(4) Your organization has a track record of successful projects dependent on symbiotic working relationships between your DBAs and your programmers. In my experience, DBAs and programmers don’t work well together; they have different goals and approaches. And it seems intellectual and political integration of both camps would be a sine qua non for success with an MR database product.
Short of that, I can’t imagine too many people lining up at the MR-ADBMS vendors’ doors simply based on their MapReduce capabilities. And I don’t think vendors make that case either. In my opinion, supporting MR in the product simply says “Hey, look at me, I’m at the forefront of technology. See how smart I am.” But as a buyer, I’d be a little concerned about overreach.
In fact, I wonder how these vendors spread resources efficiently (and economically!) between database engine building, cloud provisioning (which Aster and Vertica now pitch), and MapReduce integration. I suppose marketing requires less focus than engineering as a discipline but still, that’s a lot on one’s plate.