Posted to issues@carbondata.apache.org by cenyuhai <gi...@git.apache.org> on 2017/03/19 15:15:03 UTC

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-727][WIP] add hive integ...

GitHub user cenyuhai opened a pull request:

    https://github.com/apache/incubator-carbondata/pull/672

    [CARBONDATA-727][WIP] add hive integration for carbon

    add Hive integration support for CarbonData

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cenyuhai/incubator-carbondata CARBONDATA-727

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/672.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #672
    
----
commit 3a578a1b33720674d7e7adae194af11a6e7fb9de
Author: cenyuhai <ce...@didichuxing.com>
Date:   2017-03-12T15:17:40Z

    add hive integration for carbon
    
    add hive integration to assembly
    
    alter CarbonInputFormat to implement mapred.InputFormat
    
    add a hive serde for carbon
    
    add hive integration to assembly
    
    fix error in getQueryModel
    
    add debug info
    
    add debug info
    
    add debug info
    
    add debug info
    
    fix error in CarbonRecordReader
    
    use ArrayWritable for CarbonRecordReader
    
    fix error in initializing CarbonRecordReader
    
    fix error in initializing CarbonRecordReader
    
    fix error in initializing CarbonRecordReader
    
    fix error in initializing CarbonRecordReader
    
    change the return value of InputFormat
    
    set the columns that need to be queried into carbon
    
    fix null pointer exception
    
    add catalyst dependency
    
    add catalyst dependency
    
    add catalyst dependency
    
    fix error in initializing carbon
    
    add a new hive carbon recordreader
    
    add code to serialize objects into ArrayWritable
    
    data types such as short/int are actually stored as Long in Carbon
    
    use right inspector
    
    use right inspector
    
    fix long can't cast int error
    
    fix decimal cast error
    
    column size is not equal to column type
    
    column size is not equal to column type
    
    column size is not equal to column type
    
    column size is not equal to column type
    
    fix ObjInspector error
    
    fix ObjInspector error
    
    fix ObjInspector error
    
    add a new hive input split
    
    should not combine path
    
    add support for timestamp
    
    clean codes
    
    remove unused codes
    
    support Date and TimeStamp type

----



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by cenyuhai <gi...@git.apache.org>.
Github user cenyuhai closed the pull request at:

    https://github.com/apache/incubator-carbondata/pull/672



[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1263/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107863523
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonArrayInspector.java ---
    @@ -0,0 +1,191 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.List;
    +
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.SettableListObjectInspector;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +/**
    + * The CarbonHiveArrayInspector will inspect an ArrayWritable, considering it as an Hive array.
    + * It can also inspect a List if Hive decides to inspect the result of an inspection.
    + */
    +public class CarbonArrayInspector implements SettableListObjectInspector {
    +
    +  ObjectInspector arrayElementInspector;
    +
    +  public CarbonArrayInspector(final ObjectInspector arrayElementInspector) {
    +    this.arrayElementInspector = arrayElementInspector;
    +  }
    +
    +  @Override
    +  public String getTypeName() {
    +    return "array<" + arrayElementInspector.getTypeName() + ">";
    +  }
    +
    +  @Override
    +  public Category getCategory() {
    +    return Category.LIST;
    +  }
    +
    +  @Override
    +  public ObjectInspector getListElementObjectInspector() {
    +    return arrayElementInspector;
    +  }
    +
    +  @Override
    +  public Object getListElement(final Object data, final int index) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      if (index >= 0 && index < ((ArrayWritable) subObj).get().length) {
    +        return ((ArrayWritable) subObj).get()[index];
    +      } else {
    +        return null;
    +      }
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public int getListLength(final Object data) {
    +    if (data == null) {
    +      return -1;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return -1;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return 0;
    +      }
    +
    +      return ((ArrayWritable) subObj).get().length;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public List<?> getList(final Object data) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      final Writable[] array = ((ArrayWritable) subObj).get();
    +      final List<Writable> list = new ArrayList<Writable>();
    +
    +      for (final Writable obj : array) {
    +        list.add(obj);
    +      }
    +
    +      return list;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public Object create(final int size) {
    +    final ArrayList<Object> result = new ArrayList<Object>(size);
    +    for (int i = 0; i < size; ++i) {
    +      result.add(null);
    +    }
    +    return result;
    +  }
    +
    +  @Override
    +  public Object set(final Object list, final int index, final Object element) {
    +    final ArrayList l = (ArrayList) list;
    +    l.set(index, element);
    +    return list;
    +  }
    +
    +  @Override
    +  public Object resize(final Object list, final int newSize) {
    +    final ArrayList l = (ArrayList) list;
    --- End diff --
    
    unchecked type
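
    A minimal sketch of how the raw-type casts flagged here could be narrowed to a single
    documented unchecked cast in set/resize (illustration only, not code from the PR):

    @Override
    public Object set(final Object list, final int index, final Object element) {
      @SuppressWarnings("unchecked")
      final List<Object> l = (List<Object>) list;  // one documented unchecked cast
      l.set(index, element);
      return list;
    }

    @Override
    public Object resize(final Object list, final int newSize) {
      @SuppressWarnings("unchecked")
      final List<Object> l = (List<Object>) list;
      while (l.size() < newSize) {
        l.add(null);                 // grow with null placeholders
      }
      while (l.size() > newSize) {
        l.remove(l.size() - 1);      // shrink from the tail
      }
      return list;
    }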



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by cenyuhai <gi...@git.apache.org>.
Github user cenyuhai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r108030455
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java ---
    @@ -0,0 +1,99 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
    +import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.hadoop.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.CarbonInputSplit;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.mapred.InputFormat;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordReader;
    +import org.apache.hadoop.mapred.Reporter;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +
    +public class MapredCarbonInputFormat extends CarbonInputFormat<ArrayWritable>
    +    implements InputFormat<Void, ArrayWritable>, CombineHiveInputFormat.AvoidSplitCombination {
    +
    +  @Override
    +  public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
    +    org.apache.hadoop.mapreduce.JobContext jobContext = Job.getInstance(jobConf);
    +    List<org.apache.hadoop.mapreduce.InputSplit> splitList = super.getSplits(jobContext);
    --- End diff --
    
    Are invalid segments only useful for CarbonMultiBlockSplit?



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-727][WIP] add hive integ...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107839323
  
    --- Diff: pom.xml ---
    @@ -381,6 +389,15 @@
           </modules>
         </profile>
         <profile>
    +      <id>hive-1.2.1</id>
    --- End diff --
    
    Suggest changing the <id> to hive-1.2.



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by cenyuhai <gi...@git.apache.org>.
Github user cenyuhai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r108030714
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonOutputFormat.java ---
    @@ -0,0 +1,49 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.util.Properties;
    +
    +import org.apache.hadoop.fs.FileSystem;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
    +import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.FileOutputFormat;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordWriter;
    +import org.apache.hadoop.util.Progressable;
    +
    +
    +public class MapredCarbonOutputFormat<T> extends FileOutputFormat<Void, T>
    --- End diff --
    
    MapredCarbonOutputFormat also needs to implement HiveOutputFormat.
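
    A rough sketch of what that declaration could look like; the getHiveRecordWriter
    signature is Hive's, the placeholder bodies are an assumption (this PR does not add a
    write path), and the imports are the ones already shown in the diff above:

    public class MapredCarbonOutputFormat<T> extends FileOutputFormat<Void, T>
        implements HiveOutputFormat<Void, T> {

      @Override
      public RecordWriter<Void, T> getRecordWriter(FileSystem fileSystem, JobConf jobConf,
          String name, Progressable progressable) throws IOException {
        // write support is not part of this integration yet
        throw new UnsupportedOperationException("CarbonData Hive write path not implemented");
      }

      @Override
      public FileSinkOperator.RecordWriter getHiveRecordWriter(JobConf jc, Path finalOutPath,
          Class<? extends Writable> valueClass, boolean isCompressed, Properties tableProperties,
          Progressable progress) throws IOException {
        // Hive calls this when inserting into the table; a real writer would go here
        throw new UnsupportedOperationException("CarbonData Hive write path not implemented");
      }
    }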



[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1295/




[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1227/




[GitHub] incubator-carbondata issue #672: [CARBONDATA-815] add hive integration for c...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Success with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1333/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by cenyuhai <gi...@git.apache.org>.
Github user cenyuhai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r108026227
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonOutputFormat.java ---
    @@ -0,0 +1,49 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.util.Properties;
    +
    +import org.apache.hadoop.fs.FileSystem;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
    +import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.FileOutputFormat;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordWriter;
    +import org.apache.hadoop.util.Progressable;
    +
    +
    +public class MapredCarbonOutputFormat<T> extends FileOutputFormat<Void, T>
    --- End diff --
    
    MapredCarbonOutputFormat is only used for creating tables.



[GitHub] incubator-carbondata issue #672: [CARBONDATA-815] add hive integration for c...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Success with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1376/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107875522
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonHiveRecordReader.java ---
    @@ -0,0 +1,249 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.sql.Date;
    +import java.sql.Timestamp;
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.Iterator;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.datastore.block.TableBlockInfo;
    +import org.apache.carbondata.core.scan.executor.exception.QueryExecutionException;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator;
    +import org.apache.carbondata.hadoop.CarbonRecordReader;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.hive.common.type.HiveDecimal;
    +import org.apache.hadoop.hive.serde.serdeConstants;
    +import org.apache.hadoop.hive.serde2.SerDeException;
    +import org.apache.hadoop.hive.serde2.io.DateWritable;
    +import org.apache.hadoop.hive.serde2.io.DoubleWritable;
    +import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
    +import org.apache.hadoop.hive.serde2.io.ShortWritable;
    +import org.apache.hadoop.hive.serde2.io.TimestampWritable;
    +import org.apache.hadoop.hive.serde2.objectinspector.*;
    +import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.LongWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +
    +public class CarbonHiveRecordReader extends CarbonRecordReader<ArrayWritable>
    --- End diff --
    
    CarbonRecordReader is for MRv2, while CarbonHiveRecordReader is for MRv1.
    CarbonHiveRecordReader shouldn't extend CarbonRecordReader.
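
    A sketch of the composition alternative suggested here: wrap the MRv2 reader instead of
    extending it. The delegate field and constructor are hypothetical and only illustrate the
    shape; imports match the ones already shown in the diff:

    public class CarbonHiveRecordReader
        implements org.apache.hadoop.mapred.RecordReader<Void, ArrayWritable> {

      private final CarbonRecordReader<ArrayWritable> mrv2Reader;  // delegate, not a base class

      public CarbonHiveRecordReader(CarbonRecordReader<ArrayWritable> mrv2Reader) {
        this.mrv2Reader = mrv2Reader;
      }

      @Override
      public boolean next(Void key, ArrayWritable value) throws IOException {
        try {
          if (!mrv2Reader.nextKeyValue()) {
            return false;
          }
          // copy the delegate's current row into the caller-supplied ArrayWritable
          value.set(mrv2Reader.getCurrentValue().get());
          return true;
        } catch (InterruptedException e) {
          throw new IOException(e);
        }
      }

      @Override public Void createKey() {
        return null;
      }

      @Override public ArrayWritable createValue() {
        return new ArrayWritable(Writable.class);
      }

      @Override public long getPos() {
        return 0;
      }

      @Override public float getProgress() throws IOException {
        try {
          return mrv2Reader.getProgress();
        } catch (InterruptedException e) {
          throw new IOException(e);
        }
      }

      @Override public void close() throws IOException {
        mrv2Reader.close();
      }
    }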



[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1269/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107863410
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonArrayInspector.java ---
    @@ -0,0 +1,191 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.List;
    +
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.SettableListObjectInspector;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +/**
    + * The CarbonHiveArrayInspector will inspect an ArrayWritable, considering it as an Hive array.
    + * It can also inspect a List if Hive decides to inspect the result of an inspection.
    + */
    +public class CarbonArrayInspector implements SettableListObjectInspector {
    +
    +  ObjectInspector arrayElementInspector;
    +
    +  public CarbonArrayInspector(final ObjectInspector arrayElementInspector) {
    +    this.arrayElementInspector = arrayElementInspector;
    +  }
    +
    +  @Override
    +  public String getTypeName() {
    +    return "array<" + arrayElementInspector.getTypeName() + ">";
    +  }
    +
    +  @Override
    +  public Category getCategory() {
    +    return Category.LIST;
    +  }
    +
    +  @Override
    +  public ObjectInspector getListElementObjectInspector() {
    +    return arrayElementInspector;
    +  }
    +
    +  @Override
    +  public Object getListElement(final Object data, final int index) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      if (index >= 0 && index < ((ArrayWritable) subObj).get().length) {
    +        return ((ArrayWritable) subObj).get()[index];
    +      } else {
    +        return null;
    +      }
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public int getListLength(final Object data) {
    +    if (data == null) {
    +      return -1;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return -1;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return 0;
    +      }
    +
    +      return ((ArrayWritable) subObj).get().length;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public List<?> getList(final Object data) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      final Writable[] array = ((ArrayWritable) subObj).get();
    +      final List<Writable> list = new ArrayList<Writable>();
    +
    +      for (final Writable obj : array) {
    +        list.add(obj);
    +      }
    +
    +      return list;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public Object create(final int size) {
    +    final ArrayList<Object> result = new ArrayList<Object>(size);
    --- End diff --
    
     use Arrays.asList(new Object[size]);
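
    Sketched out, that suggestion collapses the fill loop into one call; java.util.Arrays
    would need to be imported, and wrapping the result in an ArrayList (an assumption here)
    keeps the returned list growable for the settable inspector:

    @Override
    public Object create(final int size) {
      // Arrays.asList(new Object[size]) yields a null-filled list of the requested size
      return new ArrayList<Object>(Arrays.asList(new Object[size]));
    }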



[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Success with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1303/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-727][WIP] add hive integ...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107606797
  
    --- Diff: dev/java-code-format-template.xml ---
    @@ -34,8 +34,8 @@
       <option name="IMPORT_LAYOUT_TABLE">
         <value>
           <emptyLine />
    -      <package name="javax" withSubpackages="true" static="false" />
           <package name="java" withSubpackages="true" static="false" />
    +      <package name="javax" withSubpackages="true" static="false" />
    --- End diff --
    
    Can you explain why you changed the order?



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107862824
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonArrayInspector.java ---
    @@ -0,0 +1,191 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.List;
    +
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.SettableListObjectInspector;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +/**
    + * The CarbonHiveArrayInspector will inspect an ArrayWritable, considering it as an Hive array.
    + * It can also inspect a List if Hive decides to inspect the result of an inspection.
    + */
    +public class CarbonArrayInspector implements SettableListObjectInspector {
    +
    +  ObjectInspector arrayElementInspector;
    +
    +  public CarbonArrayInspector(final ObjectInspector arrayElementInspector) {
    +    this.arrayElementInspector = arrayElementInspector;
    +  }
    +
    +  @Override
    +  public String getTypeName() {
    +    return "array<" + arrayElementInspector.getTypeName() + ">";
    +  }
    +
    +  @Override
    +  public Category getCategory() {
    +    return Category.LIST;
    +  }
    +
    +  @Override
    +  public ObjectInspector getListElementObjectInspector() {
    +    return arrayElementInspector;
    +  }
    +
    +  @Override
    +  public Object getListElement(final Object data, final int index) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      if (index >= 0 && index < ((ArrayWritable) subObj).get().length) {
    +        return ((ArrayWritable) subObj).get()[index];
    +      } else {
    +        return null;
    +      }
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public int getListLength(final Object data) {
    +    if (data == null) {
    +      return -1;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return -1;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return 0;
    +      }
    +
    +      return ((ArrayWritable) subObj).get().length;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public List<?> getList(final Object data) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      final Writable[] array = ((ArrayWritable) subObj).get();
    +      final List<Writable> list = new ArrayList<Writable>();
    +
    +      for (final Writable obj : array) {
    --- End diff --
    
    Better to use Arrays.asList(array)
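
    The same idea sketched for this loop; getList only needs a read view of the elements, so
    the fixed-size list returned by Arrays.asList is sufficient (java.util.Arrays import assumed):

      final Writable[] array = ((ArrayWritable) subObj).get();
      return Arrays.asList(array);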



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107872055
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonHiveRecordReader.java ---
    @@ -0,0 +1,249 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.sql.Date;
    +import java.sql.Timestamp;
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.Iterator;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.datastore.block.TableBlockInfo;
    +import org.apache.carbondata.core.scan.executor.exception.QueryExecutionException;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator;
    +import org.apache.carbondata.hadoop.CarbonRecordReader;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.hive.common.type.HiveDecimal;
    +import org.apache.hadoop.hive.serde.serdeConstants;
    +import org.apache.hadoop.hive.serde2.SerDeException;
    +import org.apache.hadoop.hive.serde2.io.DateWritable;
    +import org.apache.hadoop.hive.serde2.io.DoubleWritable;
    +import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
    +import org.apache.hadoop.hive.serde2.io.ShortWritable;
    +import org.apache.hadoop.hive.serde2.io.TimestampWritable;
    +import org.apache.hadoop.hive.serde2.objectinspector.*;
    +import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.LongWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +
    +public class CarbonHiveRecordReader extends CarbonRecordReader<ArrayWritable>
    +    implements org.apache.hadoop.mapred.RecordReader<Void, ArrayWritable> {
    +
    +  ArrayWritable valueObj = null;
    --- End diff --
    
    add private



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107890848
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonOutputFormat.java ---
    @@ -0,0 +1,49 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.util.Properties;
    +
    +import org.apache.hadoop.fs.FileSystem;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
    +import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.FileOutputFormat;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordWriter;
    +import org.apache.hadoop.util.Progressable;
    +
    +
    +public class MapredCarbonOutputFormat<T> extends FileOutputFormat<Void, T>
    --- End diff --
    
    Is this the same as CarbonTableOutputFormat?
    
    So we only support reading CarbonData tables in Hive.



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107863922
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonHiveInputSplit.java ---
    @@ -0,0 +1,290 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.io.DataInput;
    +import java.io.DataOutput;
    +import java.io.IOException;
    +import java.io.Serializable;
    +import java.util.ArrayList;
    +import java.util.HashMap;
    +import java.util.List;
    +import java.util.Map;
    +
    +import org.apache.carbondata.core.constants.CarbonCommonConstants;
    +import org.apache.carbondata.core.datastore.block.BlockletInfos;
    +import org.apache.carbondata.core.datastore.block.Distributable;
    +import org.apache.carbondata.core.datastore.block.TableBlockInfo;
    +import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
    +import org.apache.carbondata.core.mutate.UpdateVO;
    +import org.apache.carbondata.core.util.CarbonProperties;
    +import org.apache.carbondata.core.util.path.CarbonTablePath;
    +import org.apache.carbondata.hadoop.internal.index.Block;
    +
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.FileSplit;
    +
    +public class CarbonHiveInputSplit extends FileSplit
    +    implements Distributable, Serializable, Writable, Block {
    +
    +  private static final long serialVersionUID = 3520344046772190208L;
    +  public String taskId;
    --- End diff --
    
    use private



[GitHub] incubator-carbondata issue #672: [CARBONDATA-815] add hive integration for c...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    @cenyuhai can you raise this PR against the hive branch?



[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r108028962
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonOutputFormat.java ---
    @@ -0,0 +1,49 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.util.Properties;
    +
    +import org.apache.hadoop.fs.FileSystem;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
    +import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.FileOutputFormat;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordWriter;
    +import org.apache.hadoop.util.Progressable;
    +
    +
    +public class MapredCarbonOutputFormat<T> extends FileOutputFormat<Void, T>
    --- End diff --
    
    Can we directly use CarbonTableOutputFormat instead of MapredCarbonOutputFormat.java?



[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1301/




[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107881262
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonHiveRecordReader.java ---
    @@ -0,0 +1,249 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +
    +import java.io.IOException;
    +import java.sql.Date;
    +import java.sql.Timestamp;
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.Iterator;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.datastore.block.TableBlockInfo;
    +import org.apache.carbondata.core.scan.executor.exception.QueryExecutionException;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator;
    +import org.apache.carbondata.hadoop.CarbonRecordReader;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.hive.common.type.HiveDecimal;
    +import org.apache.hadoop.hive.serde.serdeConstants;
    +import org.apache.hadoop.hive.serde2.SerDeException;
    +import org.apache.hadoop.hive.serde2.io.DateWritable;
    +import org.apache.hadoop.hive.serde2.io.DoubleWritable;
    +import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
    +import org.apache.hadoop.hive.serde2.io.ShortWritable;
    +import org.apache.hadoop.hive.serde2.io.TimestampWritable;
    +import org.apache.hadoop.hive.serde2.objectinspector.*;
    +import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.LongWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +
    +public class CarbonHiveRecordReader extends CarbonRecordReader<ArrayWritable>
    +    implements org.apache.hadoop.mapred.RecordReader<Void, ArrayWritable> {
    +
    +  ArrayWritable valueObj = null;
    +  private CarbonObjectInspector objInspector;
    +
    +  public CarbonHiveRecordReader(QueryModel queryModel, CarbonReadSupport<ArrayWritable> readSupport,
    +                                InputSplit inputSplit, JobConf jobConf) throws IOException {
    +    super(queryModel, readSupport);
    +    initialize(inputSplit, jobConf);
    +  }
    +
    +  public void initialize(InputSplit inputSplit, Configuration conf) throws IOException {
    +    // The input split can contain single HDFS block or multiple blocks, so firstly get all the
    +    // blocks and then set them in the query model.
    +    List<CarbonHiveInputSplit> splitList;
    +    if (inputSplit instanceof CarbonHiveInputSplit) {
    +      splitList = new ArrayList<>(1);
    +      splitList.add((CarbonHiveInputSplit) inputSplit);
    +    } else {
    +      throw new RuntimeException("unsupported input split type: " + inputSplit);
    +    }
    +    List<TableBlockInfo> tableBlockInfoList = CarbonHiveInputSplit.createBlocks(splitList);
    +    queryModel.setTableBlockInfos(tableBlockInfoList);
    +    readSupport.initialize(queryModel.getProjectionColumns(),
    +        queryModel.getAbsoluteTableIdentifier());
    +    try {
    +      carbonIterator = new ChunkRowIterator(queryExecutor.execute(queryModel));
    +    } catch (QueryExecutionException e) {
    +      throw new IOException(e.getMessage(), e.getCause());
    +    }
    +    if (valueObj == null) {
    +      valueObj = new ArrayWritable(Writable.class,
    +          new Writable[queryModel.getProjectionColumns().length]);
    +    }
    +
    +    final TypeInfo rowTypeInfo;
    +    final List<String> columnNames;
    +    List<TypeInfo> columnTypes;
    +    // Get column names and sort order
    +    final String columnNameProperty = conf.get("hive.io.file.readcolumn.names");
    +    final String columnTypeProperty = conf.get(serdeConstants.LIST_COLUMN_TYPES);
    +
    +    if (columnNameProperty.length() == 0) {
    +      columnNames = new ArrayList<String>();
    +    } else {
    +      columnNames = Arrays.asList(columnNameProperty.split(","));
    +    }
    +    if (columnTypeProperty.length() == 0) {
    +      columnTypes = new ArrayList<TypeInfo>();
    +    } else {
    +      columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
    +    }
    +    columnTypes = columnTypes.subList(0, columnNames.size());
    +    // Create row related objects
    +    rowTypeInfo = TypeInfoFactory.getStructTypeInfo(columnNames, columnTypes);
    +    this.objInspector = new CarbonObjectInspector((StructTypeInfo) rowTypeInfo);
    +  }
    +
    +  @Override
    +  public boolean next(Void aVoid, ArrayWritable value) throws IOException {
    +    if (carbonIterator.hasNext()) {
    +      Object obj = readSupport.readRow(carbonIterator.next());
    +      ArrayWritable tmpValue = null;
    +      try {
    +        tmpValue = createArrayWritable(obj);
    +      } catch (SerDeException se) {
    +        throw new IOException(se.getMessage(), se.getCause());
    +      }
    +
    +      if (valueObj != tmpValue) {
    --- End diff --
    
    The result of this condition is always true, because createArrayWritable returns a newly constructed object, so the reference comparison never fails.
    Did you instead mean to check whether all of the columns are null?
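
    For reference, a minimal sketch of the check this comment hints at, inside next(); the use of value.set(...) and everything after the createArrayWritable call are assumptions, since the rest of the method is not shown in this diff:

      // Hypothetical replacement for the reference comparison: only copy the
      // freshly read row into the reusable value when at least one column is non-null.
      boolean hasNonNullColumn = false;
      for (Writable column : tmpValue.get()) {
        if (column != null) {
          hasNonNullColumn = true;
          break;
        }
      }
      if (hasNonNullColumn) {
        value.set(tmpValue.get());
      }
      return true;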


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1299/



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1292/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107886267
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonHiveSerDe.java ---
    @@ -0,0 +1,232 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.Iterator;
    +import java.util.List;
    +import java.util.Properties;
    +import javax.annotation.Nullable;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.hive.serde.serdeConstants;
    +import org.apache.hadoop.hive.serde2.AbstractSerDe;
    +import org.apache.hadoop.hive.serde2.SerDeException;
    +import org.apache.hadoop.hive.serde2.SerDeSpec;
    +import org.apache.hadoop.hive.serde2.SerDeStats;
    +import org.apache.hadoop.hive.serde2.io.DoubleWritable;
    +import org.apache.hadoop.hive.serde2.io.ShortWritable;
    +import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.StructField;
    +import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.DateObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.HiveDecimalObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.IntObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.LongObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.primitive.TimestampObjectInspector;
    +import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
    +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.LongWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +
    +/**
    + * A serde class for Carbondata.
    + * It transparently passes the object to/from the Carbon file reader/writer.
    + */
    +@SerDeSpec(schemaProps = {serdeConstants.LIST_COLUMNS, serdeConstants.LIST_COLUMN_TYPES})
    +public class CarbonHiveSerDe extends AbstractSerDe {
    +  private SerDeStats stats;
    +  private ObjectInspector objInspector;
    +
    +  private enum LAST_OPERATION {
    +    SERIALIZE,
    +    DESERIALIZE,
    +    UNKNOWN
    +  }
    +
    +  private LAST_OPERATION status;
    +  private long serializedSize;
    +  private long deserializedSize;
    +
    +  public CarbonHiveSerDe() {
    +    stats = new SerDeStats();
    +  }
    +
    +  @Override
    +  public void initialize(@Nullable Configuration configuration, Properties tbl)
    +      throws SerDeException {
    +
    +    final TypeInfo rowTypeInfo;
    +    final List<String> columnNames;
    +    final List<TypeInfo> columnTypes;
    +    // Get column names and sort order
    +    final String columnNameProperty = tbl.getProperty(serdeConstants.LIST_COLUMNS);
    +    final String columnTypeProperty = tbl.getProperty(serdeConstants.LIST_COLUMN_TYPES);
    +
    +    if (columnNameProperty.length() == 0) {
    +      columnNames = new ArrayList<String>();
    +    } else {
    +      columnNames = Arrays.asList(columnNameProperty.split(","));
    +    }
    +    if (columnTypeProperty.length() == 0) {
    +      columnTypes = new ArrayList<TypeInfo>();
    +    } else {
    +      columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
    +    }
    +    // Create row related objects
    +    rowTypeInfo = TypeInfoFactory.getStructTypeInfo(columnNames, columnTypes);
    +    this.objInspector = new CarbonObjectInspector((StructTypeInfo) rowTypeInfo);
    +
    +    // Stats part
    +    serializedSize = 0;
    +    deserializedSize = 0;
    +    status = LAST_OPERATION.UNKNOWN;
    +  }
    +
    +  @Override
    +  public Class<? extends Writable> getSerializedClass() {
    +    return ArrayWritable.class;
    +  }
    +
    +  @Override
    +  public Writable serialize(Object obj, ObjectInspector objectInspector) throws SerDeException {
    +    if (!objInspector.getCategory().equals(ObjectInspector.Category.STRUCT)) {
    +      throw new SerDeException("Cannot serialize " + objInspector.getCategory()
    +        + ". Can only serialize a struct");
    +    }
    +    serializedSize += ((StructObjectInspector) objInspector).getAllStructFieldRefs().size();
    +    status = LAST_OPERATION.SERIALIZE;
    +    ArrayWritable serializeData = createStruct(obj, (StructObjectInspector) objInspector);
    +    return serializeData;
    --- End diff --
    
    Return the result of createStruct directly instead of assigning it to a temporary variable first.
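
    In other words, keeping the stats bookkeeping that is already there:

      serializedSize += ((StructObjectInspector) objInspector).getAllStructFieldRefs().size();
      status = LAST_OPERATION.SERIALIZE;
      return createStruct(obj, (StructObjectInspector) objInspector);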


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1271/



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by ravipesala <gi...@git.apache.org>.
Github user ravipesala commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    add to whitelist


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1245/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107893281
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java ---
    @@ -0,0 +1,99 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
    +import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.hadoop.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.CarbonInputSplit;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.mapred.InputFormat;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordReader;
    +import org.apache.hadoop.mapred.Reporter;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +
    +public class MapredCarbonInputFormat extends CarbonInputFormat<ArrayWritable>
    +    implements InputFormat<Void, ArrayWritable>, CombineHiveInputFormat.AvoidSplitCombination {
    +
    +  @Override
    +  public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
    +    org.apache.hadoop.mapreduce.JobContext jobContext = Job.getInstance(jobConf);
    +    List<org.apache.hadoop.mapreduce.InputSplit> splitList = super.getSplits(jobContext);
    --- End diff --
    
    For Hive, the InputSplits that belong to invalid segments need to be filtered out here.
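
    For illustration, a hedged sketch of that filtering inside getSplits(); the invalid-segment lookup is a hypothetical helper (it could be built on the segment status metadata), and java.util.Set plus java.util.ArrayList would need to be imported:

      // Drop splits that belong to invalid segments before converting them
      // into CarbonHiveInputSplit instances.
      Set<String> invalidSegmentIds = getInvalidSegmentIds(jobConf);  // hypothetical helper
      List<InputSplit> hiveSplits = new ArrayList<>();
      for (org.apache.hadoop.mapreduce.InputSplit rawSplit : splitList) {
        CarbonInputSplit split = (CarbonInputSplit) rawSplit;
        if (invalidSegmentIds.contains(split.getSegmentId())) {
          continue;
        }
        hiveSplits.add(new CarbonHiveInputSplit(split.getSegmentId(), split.getPath(),
            split.getStart(), split.getLength(), split.getLocations(),
            split.getNumberOfBlocklets(), split.getVersion(), split.getBlockStorageIdMap()));
      }
      return hiveSplits.toArray(new InputSplit[hiveSplits.size()]);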


---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-727][WIP] add hive integ...

Posted by cenyuhai <gi...@git.apache.org>.
Github user cenyuhai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107607409
  
    --- Diff: dev/java-code-format-template.xml ---
    @@ -34,8 +34,8 @@
       <option name="IMPORT_LAYOUT_TABLE">
         <value>
           <emptyLine />
    -      <package name="javax" withSubpackages="true" static="false" />
           <package name="java" withSubpackages="true" static="false" />
    +      <package name="javax" withSubpackages="true" static="false" />
    --- End diff --
    
    @chenliang613 @QiangCai Qiang Cai told me the original order was wrong, so I changed it along the way.


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    @cenyuhai  Thank you for contributing this feature.
    I suggest creating a new profile for the "integration/hive" module so that all Hive-related code is decoupled from the current modules and CI can run normally first.



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107841211
  
    --- Diff: dev/java-code-format-template.xml ---
    @@ -34,8 +34,8 @@
       <option name="IMPORT_LAYOUT_TABLE">
         <value>
           <emptyLine />
    -      <package name="javax" withSubpackages="true" static="false" />
           <package name="java" withSubpackages="true" static="false" />
    +      <package name="javax" withSubpackages="true" static="false" />
    --- End diff --
    
    Yes. The javax packages should come after the java packages.


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    @cenyuhai  please change the title to: [CARBONDATA-815] add hive integration for carbon


---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107863484
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonArrayInspector.java ---
    @@ -0,0 +1,191 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.List;
    +
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.SettableListObjectInspector;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +/**
    + * The CarbonHiveArrayInspector will inspect an ArrayWritable, considering it as an Hive array.
    + * It can also inspect a List if Hive decides to inspect the result of an inspection.
    + */
    +public class CarbonArrayInspector implements SettableListObjectInspector {
    +
    +  ObjectInspector arrayElementInspector;
    +
    +  public CarbonArrayInspector(final ObjectInspector arrayElementInspector) {
    +    this.arrayElementInspector = arrayElementInspector;
    +  }
    +
    +  @Override
    +  public String getTypeName() {
    +    return "array<" + arrayElementInspector.getTypeName() + ">";
    +  }
    +
    +  @Override
    +  public Category getCategory() {
    +    return Category.LIST;
    +  }
    +
    +  @Override
    +  public ObjectInspector getListElementObjectInspector() {
    +    return arrayElementInspector;
    +  }
    +
    +  @Override
    +  public Object getListElement(final Object data, final int index) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      if (index >= 0 && index < ((ArrayWritable) subObj).get().length) {
    +        return ((ArrayWritable) subObj).get()[index];
    +      } else {
    +        return null;
    +      }
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public int getListLength(final Object data) {
    +    if (data == null) {
    +      return -1;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return -1;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return 0;
    +      }
    +
    +      return ((ArrayWritable) subObj).get().length;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public List<?> getList(final Object data) {
    +    if (data == null) {
    +      return null;
    +    }
    +
    +    if (data instanceof ArrayWritable) {
    +      final Writable[] listContainer = ((ArrayWritable) data).get();
    +
    +      if (listContainer == null || listContainer.length == 0) {
    +        return null;
    +      }
    +
    +      final Writable subObj = listContainer[0];
    +
    +      if (subObj == null) {
    +        return null;
    +      }
    +
    +      final Writable[] array = ((ArrayWritable) subObj).get();
    +      final List<Writable> list = new ArrayList<Writable>();
    +
    +      for (final Writable obj : array) {
    +        list.add(obj);
    +      }
    +
    +      return list;
    +    }
    +
    +    throw new UnsupportedOperationException("Cannot inspect "
    +      + data.getClass().getCanonicalName());
    +  }
    +
    +  @Override
    +  public Object create(final int size) {
    +    final ArrayList<Object> result = new ArrayList<Object>(size);
    +    for (int i = 0; i < size; ++i) {
    +      result.add(null);
    +    }
    +    return result;
    +  }
    +
    +  @Override
    +  public Object set(final Object list, final int index, final Object element) {
    +    final ArrayList l = (ArrayList) list;
    --- End diff --
    
    This cast uses a raw, unchecked type.
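
    For example, assuming set() only needs to mutate the ArrayList produced by create(int) (the rest of the method is cut off in this diff):

      @Override
      public Object set(final Object list, final int index, final Object element) {
        @SuppressWarnings("unchecked")
        final List<Object> result = (List<Object>) list;
        result.set(index, element);
        return list;
      }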


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1265/



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1244/



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1294/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107862312
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/CarbonArrayInspector.java ---
    @@ -0,0 +1,191 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.util.ArrayList;
    +import java.util.List;
    +
    +import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    +import org.apache.hadoop.hive.serde2.objectinspector.SettableListObjectInspector;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.io.Writable;
    +
    +/**
    + * The CarbonHiveArrayInspector will inspect an ArrayWritable, considering it as an Hive array.
    + * It can also inspect a List if Hive decides to inspect the result of an inspection.
    + */
    +public class CarbonArrayInspector implements SettableListObjectInspector {
    +
    +  ObjectInspector arrayElementInspector;
    --- End diff --
    
    Make this field private.
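
    That is:

      private final ObjectInspector arrayElementInspector;

    (final is an extra suggestion on top of this comment, since the field is only assigned in the constructor shown above.)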


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1249/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107894484
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java ---
    @@ -0,0 +1,99 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
    +import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.hadoop.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.CarbonInputSplit;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.mapred.InputFormat;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordReader;
    +import org.apache.hadoop.mapred.Reporter;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +
    +public class MapredCarbonInputFormat extends CarbonInputFormat<ArrayWritable>
    +    implements InputFormat<Void, ArrayWritable>, CombineHiveInputFormat.AvoidSplitCombination {
    +
    +  @Override
    +  public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
    +    org.apache.hadoop.mapreduce.JobContext jobContext = Job.getInstance(jobConf);
    +    List<org.apache.hadoop.mapreduce.InputSplit> splitList = super.getSplits(jobContext);
    +    InputSplit[] splits = new InputSplit[splitList.size()];
    +    CarbonInputSplit split = null;
    +    for (int i = 0; i < splitList.size(); i++) {
    +      split = (CarbonInputSplit) splitList.get(i);
    +      splits[i] = new CarbonHiveInputSplit(split.getSegmentId(), split.getPath(),
    +          split.getStart(), split.getLength(), split.getLocations(),
    +          split.getNumberOfBlocklets(), split.getVersion(), split.getBlockStorageIdMap());
    +    }
    +    return splits;
    +  }
    +
    +  @Override
    +  public RecordReader<Void, ArrayWritable> getRecordReader(InputSplit inputSplit, JobConf jobConf,
    +                                                           Reporter reporter) throws IOException {
    +    QueryModel queryModel = getQueryModel(jobConf);
    +    CarbonReadSupport<ArrayWritable> readSupport = getReadSupportClass(jobConf);
    --- End diff --
    
    All dictionary columns and direct-dictionary columns need to be decoded here.
    It would be better to use SparkRowReadSupportImpl from the spark1 module.
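
    A hedged sketch of what that could look like in getRecordReader(); DictionaryDecodeReadSupport is an assumed name for a read support that decodes dictionary and direct-dictionary columns, and SparkRowReadSupportImpl from the spark1 module could be plugged in the same way if that dependency is acceptable:

      @Override
      public RecordReader<Void, ArrayWritable> getRecordReader(InputSplit inputSplit,
          JobConf jobConf, Reporter reporter) throws IOException {
        QueryModel queryModel = getQueryModel(jobConf);
        // Decode dictionary surrogate keys into actual values before handing rows to Hive.
        CarbonReadSupport<ArrayWritable> readSupport = new DictionaryDecodeReadSupport<>();
        return new CarbonHiveRecordReader(queryModel, readSupport, inputSplit, jobConf);
      }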


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1298/



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1248/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by QiangCai <gi...@git.apache.org>.
Github user QiangCai commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107873741
  
    --- Diff: integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java ---
    @@ -0,0 +1,99 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.carbondata.hive;
    +
    +import java.io.IOException;
    +import java.util.List;
    +
    +import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
    +import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
    +import org.apache.carbondata.core.scan.expression.Expression;
    +import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
    +import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
    +import org.apache.carbondata.core.scan.model.QueryModel;
    +import org.apache.carbondata.hadoop.CarbonInputFormat;
    +import org.apache.carbondata.hadoop.CarbonInputSplit;
    +import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
    +import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
    +
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
    +import org.apache.hadoop.io.ArrayWritable;
    +import org.apache.hadoop.mapred.InputFormat;
    +import org.apache.hadoop.mapred.InputSplit;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.RecordReader;
    +import org.apache.hadoop.mapred.Reporter;
    +import org.apache.hadoop.mapreduce.Job;
    +
    +
    +public class MapredCarbonInputFormat extends CarbonInputFormat<ArrayWritable>
    --- End diff --
    
    CarbonInputFormat is an implementation of the MRv2 API, while MapredCarbonInputFormat is an implementation of the MRv1 API.
    So I think MapredCarbonInputFormat shouldn't extend CarbonInputFormat.
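
    One possible shape, sketched under the assumption that CarbonInputFormat can simply be instantiated and held as a delegate (the remaining interface methods would forward the same way, and protected helpers such as getQueryModel would then need another way to be reached); the imports are the ones already listed in the quoted file:

      public class MapredCarbonInputFormat
          implements InputFormat<Void, ArrayWritable>, CombineHiveInputFormat.AvoidSplitCombination {

        // MRv2 implementation kept as a delegate instead of a superclass.
        private final CarbonInputFormat<ArrayWritable> mrv2Format = new CarbonInputFormat<>();

        @Override
        public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
          org.apache.hadoop.mapreduce.JobContext jobContext = Job.getInstance(jobConf);
          List<org.apache.hadoop.mapreduce.InputSplit> splitList = mrv2Format.getSplits(jobContext);
          InputSplit[] splits = new InputSplit[splitList.size()];
          for (int i = 0; i < splitList.size(); i++) {
            CarbonInputSplit split = (CarbonInputSplit) splitList.get(i);
            splits[i] = new CarbonHiveInputSplit(split.getSegmentId(), split.getPath(),
                split.getStart(), split.getLength(), split.getLocations(),
                split.getNumberOfBlocklets(), split.getVersion(), split.getBlockStorageIdMap());
          }
          return splits;
        }

        // getRecordReader(...) and shouldSkipCombine(...) would delegate similarly.
      }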



---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Can one of the admins verify this patch?


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-727][WIP] add hive integration ...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Failed  with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1264/



---

[GitHub] incubator-carbondata pull request #672: [CARBONDATA-815] add hive integratio...

Posted by chenliang613 <gi...@git.apache.org>.
Github user chenliang613 commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/672#discussion_r107842840
  
    --- Diff: dev/java-code-format-template.xml ---
    @@ -34,8 +34,8 @@
       <option name="IMPORT_LAYOUT_TABLE">
         <value>
           <emptyLine />
    -      <package name="javax" withSubpackages="true" static="false" />
           <package name="java" withSubpackages="true" static="false" />
    +      <package name="javax" withSubpackages="true" static="false" />
    --- End diff --
    
    ok


---

[GitHub] incubator-carbondata issue #672: [CARBONDATA-815] add hive integration for c...

Posted by CarbonDataQA <gi...@git.apache.org>.
Github user CarbonDataQA commented on the issue:

    https://github.com/apache/incubator-carbondata/pull/672
  
    Build Success with Spark 1.6.2, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1320/



---