There are drawbacks to using MapReduce:
Hadoop requires a lot of boilerplate and repetition, which makes it tedious to program (see the WordCount sketch after this list)
Not all computations are suited for the MapReduce model
e.g. relational queries - many computations are more naturally expressed in SQL than as map and reduce functions
e.g. performing iteration - each pass runs as a separate job that writes its results to disk and reads them back, and all that disk I/O makes MapReduce a lot slower (see the Spark sketch after this list)
e.g. feeding the output of one map into another map is possible with ChainMapper, but feeding one reduce into another reduce (i.e. chaining reduces) is impossible within a single job (see the ChainMapper sketch below)
These issues can be partly addressed via Hadoop 2, but another option is to use Spark
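
To illustrate the boilerplate point, here is a minimal sketch of the classic WordCount job in Hadoop's Java API: even this trivial computation needs a mapper class, a reducer class, and (not shown) a driver that configures and submits the Job.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      // Emit (word, 1) for every token in the line.
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        ctx.write(word, ONE);
      }
    }
  }

  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      // Sum the 1s for each word and emit (word, total).
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }
  // Still needed on top of this: a driver that builds a Job, sets the
  // mapper/reducer/output classes, and wires up input and output paths.
}
```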
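And a minimal driver sketch of map chaining, reusing the WordCount classes above; LowerCaseMapper is a hypothetical second map stage added for illustration. ChainMapper composes map stages before the single reduce, while ChainReducer only allows appending further *map* stages after it, so a second reduce needs a second job.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

public class ChainDriver {
  // Hypothetical second map stage: lowercases each word, passes counts through.
  public static class LowerCaseMapper
      extends Mapper<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void map(Text key, IntWritable value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(new Text(key.toString().toLowerCase()), value);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "chain example");
    job.setJarByClass(ChainDriver.class);

    // Pipeline: map1 -> map2 -> reduce (tokenize, lowercase, sum).
    ChainMapper.addMapper(job, WordCount.TokenMapper.class,
        LongWritable.class, Text.class, Text.class, IntWritable.class,
        new Configuration(false));
    ChainMapper.addMapper(job, LowerCaseMapper.class,
        Text.class, IntWritable.class, Text.class, IntWritable.class,
        new Configuration(false));
    ChainReducer.setReducer(job, WordCount.SumReducer.class,
        Text.class, IntWritable.class, Text.class, IntWritable.class,
        new Configuration(false));
    // No ChainReducer.addReducer exists: chaining a second reduce means
    // submitting a whole second job. Input/output paths omitted for brevity.
  }
}
```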
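For contrast, a sketch of the same word count in Spark's Java API (the input/output paths in args are placeholders): the whole job is one short pipeline, and cache() keeps the RDD in memory, which is what lets iterative algorithms avoid MapReduce's per-pass disk round trips.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("wordcount");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      // cache() pins the data in memory: later actions (e.g. the passes of an
      // iterative algorithm) reuse it instead of rereading from disk.
      JavaRDD<String> lines = sc.textFile(args[0]).cache();

      JavaPairRDD<String, Integer> counts = lines
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .mapToPair(word -> new Tuple2<>(word, 1))
          .reduceByKey(Integer::sum);

      counts.saveAsTextFile(args[1]);
    }
  }
}
```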