Posted to dev@hive.apache.org by "Gopal V (JIRA)" <ji...@apache.org> on 2013/02/07 16:27:13 UTC

[jira] [Created] (HIVE-3997) Use distributed cache to cache/localize dimension table & filter it in map task setup

Gopal V created HIVE-3997:
-----------------------------

             Summary: Use distributed cache to cache/localize dimension table & filter it in map task setup
                 Key: HIVE-3997
                 URL: https://issues.apache.org/jira/browse/HIVE-3997
             Project: Hive
          Issue Type: Improvement
            Reporter: Gopal V
            Assignee: Gopal V


Hive clients are not always co-located with the Hadoop/HDFS cluster.

This means that dimension-table filtering, when done on the client side, becomes very slow. Moreover, the conversion of the small tables into hashtables has to be redone every single time a query is run with different filters on the big table.

The entire hashtable then has to be shipped as part of the job, which involves even more HDFS writes from the remote client.

Using the distributed cache also has the advantage that the localized files can be kept between jobs instead of firing off an HDFS read for every query.
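
For illustration only, a minimal sketch of the localization side using the standard Hadoop distributed-cache API (this is not the proposed Hive patch; the HDFS path, the "dim_table" link name and the Hadoop 2 Job API are assumptions):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheDimensionTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "map-join with cached dimension table");
    job.setJarByClass(CacheDimensionTable.class);

    // Register the small/dimension table dump with the distributed cache.
    // The "#dim_table" fragment localizes it under a predictable link name
    // in the task working directory; the HDFS path here is an assumption.
    job.addCacheFile(new URI("hdfs:///warehouse/dim_table/000000_0#dim_table"));

    // ... set mapper class, input/output formats and paths as usual ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}

Once localized, the node-local copy can be reused by every task (and by later jobs) without touching HDFS again.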

Moving the operator pipeline for hash generation into the map task itself does have a few potential downsides.

The map task might OOM due to this change, and that failure takes longer to surface, since the job only gives up after all map attempts fail, rather than the decision being made up front on the client. On the other hand, the client has no idea how much memory the hashtable needs and has to rely on on-disk sizes (possibly compressed) to determine whether it needs to fall back to a reduce-side join instead.
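
As a rough sketch of what building and filtering the hashtable in map task setup could look like (a plain Hadoop mapper, not the proposed Hive implementation; the "dim_table" link name, tab-delimited layout and passesFilter() predicate are hypothetical):

{code:java}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
  private final Map<String, String> dimTable = new HashMap<String, String>();

  @Override
  protected void setup(Context context) throws IOException {
    // "dim_table" is the link name used when the file was added to the
    // distributed cache; it is already on local disk, so this read is
    // local rather than an HDFS round trip.
    BufferedReader reader = new BufferedReader(new FileReader("dim_table"));
    try {
      String line;
      while ((line = reader.readLine()) != null) {
        String[] cols = line.split("\t");   // assumed tab-delimited dump
        // The dimension-table filter runs here, in task setup, so only
        // surviving rows are ever held in memory.
        if (passesFilter(cols)) {
          dimTable.put(cols[0], cols[1]);
        }
      }
    } finally {
      reader.close();
    }
    // If the filtered table is still too large, this is where the task
    // would OOM -- discovered only after map attempts fail, as noted above.
  }

  private boolean passesFilter(String[] cols) {
    return true;  // placeholder for the query's dimension-table predicate
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] cols = value.toString().split("\t");
    String match = dimTable.get(cols[0]);   // join on the first column
    if (match != null) {
      context.write(new Text(cols[0]), new Text(match));
    }
  }
}
{code}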

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira