Posted to dev@hive.apache.org by "Thomas Friedrich (JIRA)" <ji...@apache.org> on 2014/11/07 23:54:33 UTC

[jira] [Updated] (HIVE-8508) UT: fix bucketsort_insert tests - related to SMBMapJoinOperator

     [ https://issues.apache.org/jira/browse/HIVE-8508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Friedrich updated HIVE-8508:
-----------------------------------
    Description: 
The 5 tests
bucketsortoptimize_insert_2
bucketsortoptimize_insert_4
bucketsortoptimize_insert_6
bucketsortoptimize_insert_7
bucketsortoptimize_insert_8

all fail with the same NPE in SMBMapJoinOperator:

The order object is null in SMBMapJoinOperator.processOp:
// fetch the first group for all small table aliases
for (byte pos = 0; pos < order.length; pos++) {
  if (pos != posBigTable) {
    fetchNextGroup(pos);
  }
}

Daemon Thread [Executor task launch worker-3] (Suspended (exception NullPointerException))
SMBMapJoinOperator.processOp(Object, int) line: 258
FilterOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
FilterOperator.processOp(Object, int) line: 137
TableScanOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
TableScanOperator.processOp(Object, int) line: 95
MapOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
MapOperator.process(Writable) line: 536
SparkMapRecordHandler.processRow(Object, Object) line: 139
HiveMapFunctionResultList.processNextRecord(Tuple2<BytesWritable,BytesWritable>) line: 47
HiveMapFunctionResultList.processNextRecord(Object) line: 28
HiveBaseFunctionResultList$ResultIterator.hasNext() line: 108
Wrappers$JIteratorWrapper<A>.hasNext() line: 41
Iterator$class.foreach(Iterator, Function1) line: 727
Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
RDD$$anonfun$foreach$1.apply(Object) line: 760
SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
ResultTask<T,U>.runTask(TaskContext) line: 61
ResultTask<T,U>(Task<T>).run(long) line: 56
Executor$TaskRunner.run() line: 182
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
ThreadPoolExecutor$Worker.run() line: 615
Thread.run() line: 745
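
The trace suggests processOp runs before the operator was fully initialized on the Spark path. A minimal defensive sketch (an illustration only, not the actual fix; it assumes order is normally populated during operator initialization) that would surface the problem with a descriptive error instead of a bare NPE:

// Hypothetical guard at the top of SMBMapJoinOperator.processOp (illustration only):
// fail fast with a clear message if initialization never populated 'order',
// rather than letting order.length throw a NullPointerException.
if (order == null) {
  throw new HiveException("SMBMapJoinOperator: 'order' is null in processOp; "
      + "the operator does not appear to have been initialized on this execution path");
}
// fetch the first group for all small table aliases
for (byte pos = 0; pos < order.length; pos++) {
  if (pos != posBigTable) {
    fetchNextGroup(pos);
  }
}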

There is also an NPE in FileSinkOperator: the FileSystem object fs is null:
// in recent hadoop versions, use deleteOnExit to clean tmp files.
if (isNativeTable) {
  autoDelete = fs.deleteOnExit(fsp.outPaths[0]);
}

Daemon Thread [Executor task launch worker-1] (Suspended (exception NullPointerException))
FileSinkOperator.createBucketFiles(FileSinkOperator$FSPaths) line: 495
FileSinkOperator.closeOp(boolean) line: 925
FileSinkOperator(Operator<T>).close(boolean) line: 582
SelectOperator(Operator<T>).close(boolean) line: 594
SMBMapJoinOperator(Operator<T>).close(boolean) line: 594
DummyStoreOperator(Operator<T>).close(boolean) line: 594
FilterOperator(Operator<T>).close(boolean) line: 594
TableScanOperator(Operator<T>).close(boolean) line: 594
MapOperator(Operator<T>).close(boolean) line: 594
SparkMapRecordHandler.close() line: 175
HiveMapFunctionResultList.closeRecordProcessor() line: 57
HiveBaseFunctionResultList$ResultIterator.hasNext() line: 122
Wrappers$JIteratorWrapper<A>.hasNext() line: 41
Iterator$class.foreach(Iterator, Function1) line: 727
Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
RDD$$anonfun$foreach$1.apply(Object) line: 760
SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
ResultTask<T,U>.runTask(TaskContext) line: 61
ResultTask<T,U>(Task<T>).run(long) line: 56
Executor$TaskRunner.run() line: 182
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
ThreadPoolExecutor$Worker.run() line: 615
Thread.run() line: 745
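
Here the stack shows closeOp running against a half-initialized operator. A hypothetical guard (illustration only, not the actual fix; it assumes hconf is the Hadoop Configuration held by the operator) that would lazily resolve the FileSystem from the output path instead of dereferencing a null fs:

// Hypothetical guard in FileSinkOperator.createBucketFiles (illustration only):
// resolve the FileSystem from the output path if initialization never set 'fs'.
if (fs == null) {
  // Path.getFileSystem(Configuration) is the standard Hadoop API for this;
  // 'hconf' is assumed to be the operator's Hadoop Configuration.
  fs = fsp.outPaths[0].getFileSystem(hconf);
}
// in recent hadoop versions, use deleteOnExit to clean tmp files.
if (isNativeTable) {
  autoDelete = fs.deleteOnExit(fsp.outPaths[0]);
}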


  was:
The 4 tests
bucketsortoptimize_insert_2
bucketsortoptimize_insert_4
bucketsortoptimize_insert_7
bucketsortoptimize_insert_8

all fail with the same NPE in SMBMapJoinOperator:

The order object is null in SMBMapJoinOperator.processOp:
// fetch the first group for all small table aliases
for (byte pos = 0; pos < order.length; pos++) {
  if (pos != posBigTable) {
    fetchNextGroup(pos);
  }
}

Daemon Thread [Executor task launch worker-3] (Suspended (exception NullPointerException))
SMBMapJoinOperator.processOp(Object, int) line: 258
FilterOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
FilterOperator.processOp(Object, int) line: 137
TableScanOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
TableScanOperator.processOp(Object, int) line: 95
MapOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
MapOperator.process(Writable) line: 536
SparkMapRecordHandler.processRow(Object, Object) line: 139
HiveMapFunctionResultList.processNextRecord(Tuple2<BytesWritable,BytesWritable>) line: 47
HiveMapFunctionResultList.processNextRecord(Object) line: 28
HiveBaseFunctionResultList$ResultIterator.hasNext() line: 108
Wrappers$JIteratorWrapper<A>.hasNext() line: 41
Iterator$class.foreach(Iterator, Function1) line: 727
Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
RDD$$anonfun$foreach$1.apply(Object) line: 760
SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
ResultTask<T,U>.runTask(TaskContext) line: 61
ResultTask<T,U>(Task<T>).run(long) line: 56
Executor$TaskRunner.run() line: 182
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
ThreadPoolExecutor$Worker.run() line: 615
Thread.run() line: 745

There is also an NPE in FileSinkOperator: the FileSystem object fs is null:
// in recent hadoop versions, use deleteOnExit to clean tmp files.
if (isNativeTable) {
  autoDelete = fs.deleteOnExit(fsp.outPaths[0]);
}

Daemon Thread [Executor task launch worker-1] (Suspended (exception NullPointerException))
FileSinkOperator.createBucketFiles(FileSinkOperator$FSPaths) line: 495
FileSinkOperator.closeOp(boolean) line: 925
FileSinkOperator(Operator<T>).close(boolean) line: 582
SelectOperator(Operator<T>).close(boolean) line: 594
SMBMapJoinOperator(Operator<T>).close(boolean) line: 594
DummyStoreOperator(Operator<T>).close(boolean) line: 594
FilterOperator(Operator<T>).close(boolean) line: 594
TableScanOperator(Operator<T>).close(boolean) line: 594
MapOperator(Operator<T>).close(boolean) line: 594
SparkMapRecordHandler.close() line: 175
HiveMapFunctionResultList.closeRecordProcessor() line: 57
HiveBaseFunctionResultList$ResultIterator.hasNext() line: 122
Wrappers$JIteratorWrapper<A>.hasNext() line: 41
Iterator$class.foreach(Iterator, Function1) line: 727
Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
RDD$$anonfun$foreach$1.apply(Object) line: 760
SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
ResultTask<T,U>.runTask(TaskContext) line: 61
ResultTask<T,U>(Task<T>).run(long) line: 56
Executor$TaskRunner.run() line: 182
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
ThreadPoolExecutor$Worker.run() line: 615
Thread.run() line: 745



> UT: fix bucketsort_insert tests - related to SMBMapJoinOperator
> ---------------------------------------------------------------
>
>                 Key: HIVE-8508
>                 URL: https://issues.apache.org/jira/browse/HIVE-8508
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Thomas Friedrich
>
> The 5 tests
> bucketsortoptimize_insert_2
> bucketsortoptimize_insert_4
> bucketsortoptimize_insert_6
> bucketsortoptimize_insert_7
> bucketsortoptimize_insert_8
> all fail with the same NPE in SMBMapJoinOperator:
> The order object is null in SMBMapJoinOperator.processOp:
> // fetch the first group for all small table aliases
> for (byte pos = 0; pos < order.length; pos++) {
>   if (pos != posBigTable) {
>     fetchNextGroup(pos);
>   }
> }
> Daemon Thread [Executor task launch worker-3] (Suspended (exception NullPointerException))
> SMBMapJoinOperator.processOp(Object, int) line: 258
> FilterOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
> FilterOperator.processOp(Object, int) line: 137
> TableScanOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
> TableScanOperator.processOp(Object, int) line: 95
> MapOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
> MapOperator.process(Writable) line: 536
> SparkMapRecordHandler.processRow(Object, Object) line: 139
> HiveMapFunctionResultList.processNextRecord(Tuple2<BytesWritable,BytesWritable>) line: 47
> HiveMapFunctionResultList.processNextRecord(Object) line: 28
> HiveBaseFunctionResultList$ResultIterator.hasNext() line: 108
> Wrappers$JIteratorWrapper<A>.hasNext() line: 41
> Iterator$class.foreach(Iterator, Function1) line: 727
> Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
> RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
> RDD$$anonfun$foreach$1.apply(Object) line: 760
> SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
> SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
> ResultTask<T,U>.runTask(TaskContext) line: 61
> ResultTask<T,U>(Task<T>).run(long) line: 56
> Executor$TaskRunner.run() line: 182
> ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
> ThreadPoolExecutor$Worker.run() line: 615
> Thread.run() line: 745
> There is also an NPE in FileSinkOperator: the FileSystem object fs is null:
> // in recent hadoop versions, use deleteOnExit to clean tmp files.
> if (isNativeTable) {
>   autoDelete = fs.deleteOnExit(fsp.outPaths[0]);
> }
> Daemon Thread [Executor task launch worker-1] (Suspended (exception NullPointerException))
> FileSinkOperator.createBucketFiles(FileSinkOperator$FSPaths) line: 495
> FileSinkOperator.closeOp(boolean) line: 925
> FileSinkOperator(Operator<T>).close(boolean) line: 582
> SelectOperator(Operator<T>).close(boolean) line: 594
> SMBMapJoinOperator(Operator<T>).close(boolean) line: 594
> DummyStoreOperator(Operator<T>).close(boolean) line: 594
> FilterOperator(Operator<T>).close(boolean) line: 594
> TableScanOperator(Operator<T>).close(boolean) line: 594
> MapOperator(Operator<T>).close(boolean) line: 594
> SparkMapRecordHandler.close() line: 175
> HiveMapFunctionResultList.closeRecordProcessor() line: 57
> HiveBaseFunctionResultList$ResultIterator.hasNext() line: 122
> Wrappers$JIteratorWrapper<A>.hasNext() line: 41
> Iterator$class.foreach(Iterator, Function1) line: 727
> Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
> RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
> RDD$$anonfun$foreach$1.apply(Object) line: 760
> SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
> SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
> ResultTask<T,U>.runTask(TaskContext) line: 61
> ResultTask<T,U>(Task<T>).run(long) line: 56
> Executor$TaskRunner.run() line: 182
> ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
> ThreadPoolExecutor$Worker.run() line: 615
> Thread.run() line: 745



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)