Wednesday, September 30, 2009

Baking MapReduce into Database Engines - Worth the Reduction Sauce?

MapReduce (implementations include Hadoop and CouchDB) has gained popularity in the industry. It also serves as marketing fodder for several new-breed ADBMS vendors who now claim to support it in various forms. So what is really behind this magic pixie dust, what problems does it solve, and how relevant is it to someone deciding on a new (or additional) ADBMS platform these days?

First, let's point out that MapReduce is not a technology but an algorithm. Wikipedia defines an algorithm as “an effective method for solving a problem using a finite sequence of instructions.” In MapReduce's case, the problem being solved is the processing and analysis of very large data sets. The solution is a parallelized divide-and-conquer approach, and it works like this. First, you split the “problem” into small, manageable chunks. Second, you fan out each chunk in parallel to individual “work units” (the mappers). Third, you take the individual results from each unit and recombine them into your final result (the reducers). In SQL parlance, conceptually, it's like running a SELECT with an aggregate and a GROUP BY.
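
To make the three steps concrete, here is a minimal sketch in plain Python (my own illustration, not Hadoop's actual API), counting words across a handful of documents, which is the canonical example:

    from itertools import groupby

    # Step 1: split the "problem" into small, manageable chunks.
    documents = ["the quick brown fox", "the lazy dog", "the fox"]

    # Step 2: map. Each chunk is processed independently (and could run
    # in parallel on separate work units), emitting (key, value) pairs.
    def map_fn(doc):
        return [(word, 1) for word in doc.split()]

    mapped = [pair for doc in documents for pair in map_fn(doc)]

    # Shuffle: group the pairs by key, much like a SQL GROUP BY column.
    mapped.sort(key=lambda kv: kv[0])

    # Step 3: reduce. Recombine each group into the final result,
    # the moral equivalent of SELECT word, COUNT(*) ... GROUP BY word.
    def reduce_fn(word, ones):
        return (word, sum(ones))

    results = [reduce_fn(k, [v for _, v in grp])
               for k, grp in groupby(mapped, key=lambda kv: kv[0])]
    print(results)  # [('brown', 1), ('dog', 1), ('fox', 2), ...]

The real engineering, of course, is in distributing those mappers and reducers across machines and surviving failures; the sketch only shows the shape of the algorithm.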

There are file-based and database-centric applications of MapReduce in existence. Of course, the presumption is that your “problem space” can be split into distinct pieces and recombined without information loss. Not all problems are either large enough or mathematically suited to this approach. But luckily, data management is, by definition, perfectly well suited because, as a mathematician once told me, “every data management task can be broken down into two and only two activities: partitioning and equivalence.”
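
To make “recombined without information loss” concrete, here is a small Python sketch of my own: a sum partitions cleanly across chunks, but an average only recombines correctly if each chunk reports a (sum, count) pair instead of its local average.

    # Chunks of a data set, processed independently.
    chunks = [[1, 2], [3]]

    # Naive recombination loses information: averaging the per-chunk
    # averages gives 2.25, not the true average of 2.0.
    print(sum(sum(c) / len(c) for c in chunks) / len(chunks))  # 2.25

    # Carrying a (sum, count) pair per chunk preserves enough
    # information for the reducer to recombine exactly.
    partials = [(sum(c), len(c)) for c in chunks]  # [(3, 2), (3, 1)]
    total, count = map(sum, zip(*partials))
    print(total / count)  # 2.0, the true average of [1, 2, 3]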

Some folks think MapReduce is a modern breakthrough concept, but they’re wrong. The application of this algorithm to the management of large data is nothing new, as pointed out by Dr. Stonebraker in a 2008 posting. What’s “new” about MapReduce is that Google has popularized it. And the thought is, if Google can process and analyze the entire world via MapReduce, then clearly MapReduce must be the Holy Grail of monster data management. But Google has unique challenges (gargantuan data volumes), and some very impressive (and plentiful) gray matter at its disposal.

The interesting thing about this “divide and conquer” approach is that, although fairly easy to conceptualize, it's incredibly hard to implement properly. The human brain is really not “wired” to think in parallel. Research suggests that even the best brains can juggle at most about seven items at once (and only for short periods). To understand the intellectual challenges at play here, I strongly recommend watching this Google video series.

As I understand it, implementing MapReduce correctly and efficiently is probably as hard as conquering multi-threaded programming. And in twenty years, I have met three people who truly understood multi-threading, and two of them were Russian PhDs. I've had battle-tested architects tell me they would rather shave with broken glass than tackle the risk and difficulty of multi-threading (luckily, they weren't designing operating systems or flight-control systems!). My point is, it takes some pretty special skill and talent to do it right. There's nothing inherently wrong with that, but it's neither quick nor cheap.

So why, then, would database vendors race to support MapReduce? After all, dealing with and managing relational systems is complicated enough as it is. But at least people have been trained in the art for decades, SQL is the lingua franca, and the pitfalls and solutions are well established. Additionally, Codd's premise guaranteed abstraction by separating the logical layer (SQL and a normalized schema) from the physical one (hardware and storage). MR, by necessity, is heavy with non-standard, cross-layer implementation details. Clearly a step backward from the KISS principle (even a “major step backwards” if you buy into Dr. Stonebraker's argument that MapReduce is offensive).

Regardless, three well-known new-breeders, namely Aster Data, Vertica, and Greenplum, jumped on the bandwagon early on and announced “MapReduce implementations” for their products. I wondered what compelled them to invest time and resources into something that didn't seem essential (or cheap) to the market at large. Are users really clamoring for MapReduce support in their warehouse engines?

To learn more, I went to YouTube and checked out Aster's video “In Database MapReduce Applications”. In it, I learned that graph theory problems (think: traveling salesman) were well suited to MapReduce but not to SQL. Examples included social networking (LinkedIn), government (intelligence), telecom (routing statistics), retail (CRM, affinities), and finance (risk, fraud): pretty much anything that can be modeled using interconnected nodes. But a connection from one node to another is really a “relation”, and so clearly well suited to a “relational engine”. So I might have missed something.
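
For what it's worth, here is roughly why graphs keep coming up: one hop of a traversal maps naturally onto a map/reduce round (a sketch of mine in plain Python, not Aster's syntax), and reaching N hops takes N rounds, which in SQL would mean N self-joins of the edge relation.

    # The edge "relation": (from_node, to_node) pairs.
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]

    def one_hop(frontier, edges):
        # Map: emit the destination of every edge leaving the frontier.
        emitted = [dst for src, dst in edges if src in frontier]
        # Reduce: deduplicate the emissions into the new frontier.
        return frontier | set(emitted)

    frontier = {"a"}
    frontier = one_hop(frontier, edges)  # {'a', 'b', 'c'}
    frontier = one_hop(frontier, edges)  # {'a', 'b', 'c', 'd'}
    print(frontier)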

I also learned that existing applications typically extracted data from the database, performed some analytic work on it, and then pushed the data back into the store. In other words, they couldn’t perform processing inside the database. I found that generalization hard to swallow but reminiscent of numerous past battles on whether “business logic” belongs in the application or database layer.
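
The pattern being criticized looks something like this sketch (using Python's built-in sqlite3 purely as a stand-in for a warehouse): the data makes a full round trip out of the engine and back, which in-database processing is meant to avoid.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 100.0), ("east", 50.0), ("west", 75.0)])

    # Step 1: extract the data out of the database...
    rows = conn.execute("SELECT region, amount FROM sales").fetchall()

    # Step 2: ...perform the analytic work in the application layer...
    totals = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount

    # Step 3: ...and push the results back into the store.
    conn.execute("CREATE TABLE region_totals (region TEXT, total REAL)")
    conn.executemany("INSERT INTO region_totals VALUES (?, ?)",
                     list(totals.items()))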

Aster's implementation of MapReduce is “deep inside” their engine, from what I understand. One example I could find was yet another YouTube video called “In-Database MapReduce Example: Sessionize”. In it, Shawn Kung shows a MapReduce function being used inside a SQL statement to “sessionize” user IDs in a clickstream context. Aster also provides very basic how-tos on their website and blog. Clearly, Aster is targeting this new MapReduce capability at the DBA side of their user base, and to me it looks a lot like leveraging UDFs. Aster's conclusion: “we need to think beyond conventional databases.” I'm all for that!
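
For readers who skip the video, the gist of sessionizing is easy to approximate in plain Python (my own sketch, not Aster's SQL/MR syntax, and the 30-minute timeout is an arbitrary choice): partition clicks by user, walk them in time order, and open a new session whenever the gap exceeds the timeout.

    from itertools import groupby

    TIMEOUT = 30 * 60  # session gap threshold in seconds (arbitrary)

    # Clickstream rows: (user_id, unix_timestamp).
    clicks = [("u1", 0), ("u1", 100), ("u1", 5000), ("u2", 50)]

    def sessionize(clicks, timeout=TIMEOUT):
        out = []
        # Partition the stream by user (the "map" side of the job)...
        for user, grp in groupby(sorted(clicks), key=lambda c: c[0]):
            session, last_ts = 0, None
            # ...then, within each partition, number the sessions.
            for _, ts in grp:
                if last_ts is not None and ts - last_ts > timeout:
                    session += 1
                out.append((user, ts, session))
                last_ts = ts
        return out

    print(sessionize(clicks))
    # [('u1', 0, 0), ('u1', 100, 0), ('u1', 5000, 1), ('u2', 50, 0)]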

Next, I wanted to learn about Vertica's implementation, especially since Vertica's own Dr. Stonebraker had initially nailed MapReduce pretty hard, as mentioned above. But Vertica's new position seems to be that MapReduce is a-ok after all, provided it remains external and doesn't pollute the purity of the relational engine. I couldn't find much on YouTube or their website save for a press release dated 8/4/09 stating “With version 3.5, Vertica also introduces native support for MapReduce via connectivity to the standard Hadoop framework”. It seems the “scoop” on the Vertica/MapReduce wedding is best described in their corporate blog. Basically, if I understand correctly, Vertica is OK with “integrating” with (or connecting to) MapReduce via Hadoop, but not with “ingesting” it.

I was also able to glean some tidbits from Omer Trajman on Twitter, namely that Vertica supports Hadoop “adapters” which allow you to read from and write to the database (which is basically the press release). I wish I had more in-depth information about Vertica's MR functionality, but even a basic search for the term on their overly busy website yields zero information and, unless I missed it, I couldn't find any relevant webcasts either.

Greenplum was, if I am not mistaken, the first to support MapReduce, and it has the best MR resource online if you ask me: clear, detailed, and full of insight. Greenplum's offering takes a merged, cooperative DBA/programmer approach. Programmers can write mappers and reducers in their language of choice, leveraging DBA-generated data sets as (and if) needed, and DBAs can use MR functions alongside SQL without (presumably) getting their hands dirty. There isn't much to add to this excellent resource, so I won't.
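
To illustrate that division of labor (with hypothetical names; this is not Greenplum's actual job syntax): the programmer supplies ordinary map and reduce functions, and the DBA feeds them a SQL-produced data set through the engine's job runner, sketched here as run_mapreduce().

    from collections import defaultdict

    # Programmer's side: plain functions in the language of choice.
    def map_fn(row):
        url, status = row           # row from a DBA-generated data set
        if status >= 500:
            yield (url, 1)          # emit one pair per server error

    def reduce_fn(url, counts):
        return (url, sum(counts))

    # A toy driver standing in for the engine's own MR runner.
    def run_mapreduce(rows, map_fn, reduce_fn):
        groups = defaultdict(list)
        for row in rows:
            for key, value in map_fn(row):
                groups[key].append(value)
        return [reduce_fn(k, vs) for k, vs in sorted(groups.items())]

    # DBA's side: the rows would come from a SQL query.
    rows = [("/home", 200), ("/api", 500), ("/api", 503)]
    print(run_mapreduce(rows, map_fn, reduce_fn))  # [('/api', 2)]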

So having mapped out all these facts, what can we reduce from it (I'm so funny), and more importantly, should any of this stuff matter to prospects when evaluating ADBMS vendors? IMHO, you might benefit from an MR-enabled ADBMS if:

(1) You have petabytes (or more) of data, an MPP architecture, and a search, scientific-research, or data-mining problem that a high-performance SQL engine cannot handle.

(2) You don't have heavy legacy systems. Integrating (or migrating) existing business and relational code with a new-breed MR-enabled engine can't be fun, quick, or cheap. You might be one of the lucky few with pet projects on the table.

(3) You're in academia and have access to numerous cheap and competent programming resources, lots of metal, plenty of time, and limited pressure to succeed.

(4) Your organization has a track record of successful projects dependent on symbiotic working relationships between your DBAs and your programmers. In my experience, DBAs and programmers don't work well together; they have different goals and approaches. And it seems the intellectual and political integration of both resources would be a sine qua non for success with an MR database product.

Short of that, I can’t imagine too many people lining up at the MR-ADBMS vendors’ doors simply based on their MapReduce capabilities. And I don’t think vendors make that case either. In my opinion, supporting MR in the product simply says “Hey, look at me, I’m at the forefront of technology. See how smart I am.” But as a buyer, I’d be a little concerned about overreach.

In fact, I wonder how these vendors spread resources efficiently (and economically!) between database engine building, cloud provisioning (which Aster and Vertica now pitch), and MapReduce integration. I suppose marketing requires less focus than engineering as a discipline but still, that’s a lot on one’s plate.

9 comments:

  1. Nice post, Jerome, and thanks for watching my Aster YouTube video! :)

    A few quick clarifications:

    (1) Aster was the first ADBMS to launch MapReduce in-database (photo finish w/ Greenplum on the same day).

    (2) There's a broader population of customers that would benefit from an MR-enabled database. You don't need petabytes of data - even hundreds of GBs to low terabytes would yield advantages. You also don't necessarily need numerous programming resources, at least with the Aster solution - we pre-package MapReduce functions as SQL extensions that can be seamlessly invoked by SQL developers and data analysts (either via a SQL CLI or through a GUI business intelligence tool like MicroStrategy).

    (3) Regarding graphs, it's very difficult to semantically express and store large-scale graphs (a data explosion of edge/vertex combinations) and query them via traditional SQL. We've had experience with customers doing this, and an MR-enabled database is essential for deep insights.

    (4) You mentioned Dr. Stonebraker's research. Aster actually has a paper of its own presented at the 35th Annual VLDB conference. Check out our blog: http://www.asterdata.com/blog/index.php/category/mapreduce

  2. Thanks Shawn, I'm glad you enjoyed it.

    I appreciate the additional insight and clarifications you bring to the topic.

    What's still not clear to me is the conceptual difference between a SQL developer invoking your canned MR packages and one invoking standard UDFs.

    Best,
    J.

  3. Thanks for this post, Jérôme,

    I have read a lot about MapReduce, but this is the first time the issues and the stakes have appeared so clearly.

    Seen from my side of the Atlantic (Europe), I doubt MapReduce has much of a market here, simply because of our "non-gargantuan" volumes of data.

    Thanks again, Jérôme, for this excellent article on what MR is,

    Bernard

  4. Bernard,
    you're quite welcome and thanks for following the blog!

    I do think there are surely MR applications in Europe in research (CERN and the like) and in biotech. I haven't done any formal research on it, but that would indeed be an excellent exercise to undertake.

    Until next time!

  5. UDFs take scalar arguments and return a scalar.

    SQL/MR functions take 0..n rows and return 0..m rows with columns of the developer's choice. Kind of like a Unix pipe. In fact, they have a sample function that runs a Unix pipe: rows in as stdin, rows out as stdout.

  6. Jerome - great question on the difference between SQL/MR functions and standard UDFs. Here's a blog post that conveys the differences:

    http://www.asterdata.com/blog/index.php/2008/08/27/how-asters-in-database-mapreduce-takes-udfs-to-the-next-level/

  7. Thank you for pointing this out, Shawn. Between that and

    http://hypecycles.wordpress.com/2009/10/03/mapreduce-1/

    I'm going to be doing some interesting and careful reading indeed, and try to learn something!

  8. @Shawn - so, I now get it on SQL/MR vs. UDFs, fine. But one of the points Amrit makes in his series goes to portability issues. In other words, if I invest in Aster's MR-enabled SQL, I cannot leverage this code on any other platform, right? So I'm curious how this argument compares against a "connector" approach like Vertica's, which seems like less of a lock-in to me -- provided other vendors eventually support Hadoop connectors of course, which seems to me easier than pulling MR into the engine internals.
    Thanks
    J.

  9. The comment about map-reduce programming being as hard as threaded programming really misses the point of why map-reduce is useful.

    As far as I can tell, you are telling the world what map-reduce is without reading either the original Google paper or the Hadoop tutorials. You also seem to have never written any map-reduce programs, and you missed tools like Cascading, Pig, and Hive, which make map-reduce programs much easier to use.

    Michael Stonebraker is a smart guy when it comes to relational databases, but in my conversations with him, it has been quite clear that he hasn't actually tried writing any programs using the map-reduce method and doesn't seem likely to try any time soon. This means you should take what he says about the defects of map-reduce with a huge grain of salt. You might add the rest of the shaker as well, given his very large financial conflict of interest due to his large stake in Vertica.

    I have been using map-reduce for 5 years now and can say absolutely that for many problems it is much easier than threaded programming, and it is vastly easier than most other approaches to large-scale distributed programming. You don't need gargantuan data to make effective use of map-reduce; every web site with more than a quarter million visits a month generates enough data to make map-reduce an interesting tool, for log reduction if nothing else.
