Posted to user@hive.apache.org by Eriksson Magnus <ma...@scania.com> on 2014/08/25 14:10:41 UTC

Problem adding index to large table in Hive

We have a table imported from a SQL Server-based data warehouse with about 9.5 billion rows. We have found a very good partitioning key that has resulted in about 1500 partitions, giving us fairly good response times on some classes of queries.
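
The table looks roughly as sketched below; the column names and the partition key are simplified placeholders rather than the real schema, and are only meant to illustrate the layout:

-- Hypothetical, simplified shape of the table (placeholder columns);
-- the real table has more columns and a different partition key.
CREATE TABLE factsamplevalue (
  ecu_id       BIGINT,
  sample_value DOUBLE
)
PARTITIONED BY (part_key INT);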

To speed up other types of queries we would like to add an index to the table, but the ALTER statement that actually builds it fails with out-of-memory exceptions. For a partitioned table the index for each partition seems to be built individually, so in this case about 1500 separate jobs are launched (we have tried running them both one at a time and several in parallel, with the same result). Many of the jobs run as they should, but after a while we start to see jobs failing, and from that point on most of the remaining jobs fail as well.
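
For completeness, the index was created along the following lines (we are quoting from memory here; the indexed column and the compact index handler are our best recollection rather than a copy of our scripts), and it is then rebuilt with the ALTER statement shown in the console log further down:

-- How the index was created, as far as we recall (handler and column
-- are from memory, not copied from our scripts):
CREATE INDEX ecu_id
  ON TABLE factsamplevalue (ecu_id)
  AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
  WITH DEFERRED REBUILD;

-- The statement that fails:
ALTER INDEX ecu_id ON factsamplevalue REBUILD;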

We have a 10-machine Hadoop cluster (2 pure admin and 8 slave machines). The slave machines each have 72 GB of RAM, 8 cores and 8 data disks. We use Hadoop v2.4.1 and Hive v0.13.1.

Before we got this far we had problems with the Hive "console" itself running out of heap, but after increasing its maximum heap size we instead ran into this new problem. We have tried various heap sizes, but it does not seem to make any difference (or at least it does not make the job complete without errors).

At the moment we have the following in mapred-site.xml:
  <property>
    <name>mapreduce.map.memory.mb</name>                  <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>               <value>8192</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>     <value>8</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>  <value>8</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>                  <value>-Xmx3277m -XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>               <value>-Xmx6554m -XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC</value>
  </property>
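
The same settings can also be overridden per session from the Hive CLI before running the rebuild; the values below are only illustrative examples of that, not settings we have verified:

-- Hypothetical per-session overrides issued from the Hive CLI
-- (the concrete numbers are examples, not tested recommendations):
set mapreduce.map.memory.mb=8192;
set mapreduce.map.java.opts=-Xmx6554m;
set mapreduce.reduce.memory.mb=12288;
set mapreduce.reduce.java.opts=-Xmx9830m;
alter index ecu_id on factsamplevalue rebuild;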


A part of the Hive console log from the run is included below (output from most of the successful jobs has been removed to keep it shorter):

alter index ecu_id on factsamplevalue rebuild;
Total jobs = 1567
Launching Job 1 out of 1567
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Launching Job 2 out of 1567
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Launching Job 3 out of 1567
...

Hadoop job information for Stage-2281: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:17,978 Stage-2281 map = 0%,  reduce = 0%
Starting Job = job_1408963501716_0597, Tracking URL = http://l1032lab.sss.se.scania.com:8088/proxy/application_1408963501716_0597/
Kill Command = /opt/hadoop-ro/hadoop-2.4.1-sse4.2/bin/hadoop job  -kill job_1408963501716_0597
Starting Job = job_1408963501716_0596, Tracking URL = http://l1032lab.sss.se.scania.com:8088/proxy/application_1408963501716_0596/
Kill Command = /opt/hadoop-ro/hadoop-2.4.1-sse4.2/bin/hadoop job  -kill job_1408963501716_0596
Hadoop job information for Stage-2297: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:18,707 Stage-513 map = 100%,  reduce = 100%, Cumulative CPU 648.88 sec
2014-08-25 13:03:18,709 Stage-2297 map = 0%,  reduce = 0%
Hadoop job information for Stage-2265: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:18,792 Stage-2265 map = 0%,  reduce = 0%
Hadoop job information for Stage-2293: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:18,855 Stage-2293 map = 0%,  reduce = 0%
2014-08-25 13:03:19,226 Stage-2249 map = 100%,  reduce = 0%, Cumulative CPU 4.75 sec
Launching Job 598 out of 1567
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Launching Job 599 out of 1567
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Hadoop job information for Stage-2285: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:19,660 Stage-2177 map = 100%,  reduce = 100%, Cumulative CPU 72.88 sec
2014-08-25 13:03:19,699 Stage-2285 map = 0%,  reduce = 0%
MapReduce Total cumulative CPU time: 10 minutes 48 seconds 880 msec
Ended Job = job_1408963501716_0129 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://l1032lab.sss.se.scania.com:8088/proxy/application_1408963501716_0129/
Examining task ID: task_1408963501716_0129_m_000002 (and more) from job job_1408963501716_0129

Task with the most failures(4): 
-----
Task ID:
  task_1408963501716_0129_m_000000

URL:
  http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1408963501716_0129&tipid=task_1408963501716_0129_m_000000
-----
Diagnostic Messages for this Task:
Error: Java heap space

Hadoop job information for Stage-2313: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:20,061 Stage-2313 map = 0%,  reduce = 0%
Starting Job = job_1408963501716_0598, Tracking URL = http://l1032lab.sss.se.scania.com:8088/proxy/application_1408963501716_0598/
Kill Command = /opt/hadoop-ro/hadoop-2.4.1-sse4.2/bin/hadoop job  -kill job_1408963501716_0598
Starting Job = job_1408963501716_0599, Tracking URL = http://l1032lab.sss.se.scania.com:8088/proxy/application_1408963501716_0599/
Kill Command = /opt/hadoop-ro/hadoop-2.4.1-sse4.2/bin/hadoop job  -kill job_1408963501716_0599
2014-08-25 13:03:20,647 Stage-2241 map = 100%,  reduce = 100%, Cumulative CPU 15.39 sec
Hadoop job information for Stage-2305: number of mappers: 1; number of reducers: 1
Hadoop job information for Stage-2301: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:20,859 Stage-2305 map = 0%,  reduce = 0%
2014-08-25 13:03:20,869 Stage-2301 map = 0%,  reduce = 0%
Hadoop job information for Stage-2309: number of mappers: 1; number of reducers: 1
2014-08-25 13:03:20,906 Stage-2309 map = 0%,  reduce = 0%
2014-08-25 13:03:21,089 Stage-2229 map = 100%,  reduce = 100%, Cumulative CPU 14.36 sec
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Total cumulative CPU time: 15 seconds 390 msec
Ended Job = job_1408963501716_0561
2014-08-25 13:03:21,809 Stage-505 map = 100%,  reduce = 100%
2014-08-25 13:03:21,817 Stage-2153 map = 0%,  reduce = 0%, Cumulative CPU 113.94 sec
2014-08-25 13:03:21,870 Stage-2269 map = 100%,  reduce = 0%, Cumulative CPU 4.56 sec
MapReduce Total cumulative CPU time: 14 seconds 360 msec
Ended Job = job_1408963501716_0558
2014-08-25 13:03:22,165 Stage-2253 map = 100%,  reduce = 0%, Cumulative CPU 1.63 sec
2014-08-25 13:03:22,396 Stage-2261 map = 100%,  reduce = 0%, Cumulative CPU 1.71 sec
MapReduce Total cumulative CPU time: 2 minutes 11 seconds 650 msec
Ended Job = job_1408963501716_0520
MapReduce Total cumulative CPU time: 7 minutes 52 seconds 690 msec
2014-08-25 13:03:23,268 Stage-2257 map = 100%,  reduce = 0%, Cumulative CPU 4.9 sec
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.InterruptedException: sleep interrupted
                at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:339)
                at org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:386)
                at org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:539)
                at org.apache.hadoop.mapreduce.Job$5.run(Job.java:668)
                at org.apache.hadoop.mapreduce.Job$5.run(Job.java:665)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:415)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
                at org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:665)
                at org.apache.hadoop.mapred.JobClient$NetworkedJob.getTaskCompletionEvents(JobClient.java:349)
                at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.computeReducerTimeStatsPerJob(HadoopJobExecHelper.java:599)
                at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
                at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
                at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
                at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
                at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
                at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
Caused by: java.lang.InterruptedException: sleep interrupted
                at java.lang.Thread.sleep(Native Method)
                at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:336)
                ... 16 more
Job Submission failed with exception 'org.apache.hadoop.yarn.exceptions.YarnRuntimeException(java.lang.InterruptedException: sleep interrupted)'
2014-08-25 13:03:23,758 Stage-2273 map = 100%,  reduce = 0%, Cumulative CPU 1.65 sec
MapReduce Total cumulative CPU time: 1 minutes 12 seconds 880 msec
Ended Job = job_1408963501716_0545
2014-08-25 13:03:23,922 Stage-521 map = 100%,  reduce = 100%
2014-08-25 13:03:24,059 Stage-2029 map = 100%,  reduce = 100%, Cumulative CPU 322.05 sec
2014-08-25 13:03:24,218 Stage-2237 map = 100%,  reduce = 100%, Cumulative CPU 4.04 sec

And part of the relevant log from one of the slave nodes:

2014-08-25 12:59:17,409 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 69000000 rows: used memory = 2926268640
2014-08-25 12:59:18,523 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 70000000 rows: used memory = 3025569624
2014-08-25 12:59:19,711 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 71000000 rows: used memory = 3096431600
2014-08-25 12:59:20,945 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 72000000 rows: used memory = 3167293592
2014-08-25 12:59:24,809 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 73000000 rows: used memory = 3258931896
2014-08-25 12:59:26,888 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 74000000 rows: used memory = 3321410768
2014-08-25 12:59:41,818 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 75000000 rows: used memory = 2993477656
2014-08-25 12:59:43,075 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 76000000 rows: used memory = 3052945024
2014-08-25 12:59:44,401 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 77000000 rows: used memory = 3128201992
2014-08-25 12:59:45,791 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 78000000 rows: used memory = 3187669360
2014-08-25 12:59:47,698 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 79000000 rows: used memory = 3262517304
2014-08-25 13:00:03,790 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 80000000 rows: used memory = 3068037896
2014-08-25 13:00:04,897 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 81000000 rows: used memory = 3137882328
2014-08-25 13:00:06,082 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 82000000 rows: used memory = 3196086032
2014-08-25 13:00:07,751 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 83000000 rows: used memory = 3270157880
2014-08-25 13:00:23,792 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 84000000 rows: used memory = 3187151008
2014-08-25 13:00:25,355 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processing 85000000 rows: used memory = 3242004960
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: 4 finished. closing... 
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: DESERIALIZE_ERRORS:0
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 finished. closing... 
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.SelectOperator: 1 finished. closing... 
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: 2 finished. closing... 
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 3 finished. closing... 
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 3 Close done
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: 2 Close done
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.SelectOperator: 1 Close done
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 Close done
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.MapOperator: 4 Close done
2014-08-25 13:00:56,939 INFO [main] org.apache.hadoop.hive.ql.exec.mr.ExecMapper: ExecMapper: processed 85796235 rows: used memory = 3210477536
2014-08-25 13:00:56,944 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
        at java.util.HashMap.resize(HashMap.java:584)
        at java.util.HashMap.addEntry(HashMap.java:883)
        at java.util.HashMap.put(HashMap.java:509)
        at java.util.HashSet.add(HashSet.java:217)
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMkCollectionEvaluator.putIntoCollection(GenericUDAFMkCollectionEvaluator.java:155)
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMkCollectionEvaluator.iterate(GenericUDAFMkCollectionEvaluator.java:122)
        at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
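
As far as we understand (this is an assumption on our part from reading the stack trace, not something we have confirmed in the documentation), the compact index rebuild for each partition boils down to a GROUP BY over the index key and file name that collects block offsets into a set, which is where the HashSet in the trace above comes from. Roughly:

-- Our assumption of what each per-partition rebuild effectively runs;
-- the index table name follows Hive's default__<table>_<index>__ convention
-- and part_key is the same placeholder partition column as above.
INSERT OVERWRITE TABLE default__factsamplevalue_ecu_id__ PARTITION (part_key=...)
SELECT ecu_id, INPUT__FILE__NAME, collect_set(BLOCK__OFFSET__INSIDE__FILE)
FROM factsamplevalue
WHERE part_key = ...
GROUP BY ecu_id, INPUT__FILE__NAME;

If some (ecu_id, file) groups contain a very large number of rows, that offset set grows with the size of the partition, which would be consistent with the steadily growing mapper memory in the log above.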

--------------------
Any suggestions about what the problem may be, or about how to diagnose it further, are very much appreciated!