Posted to dev@sqoop.apache.org by "Biswajit Nayak (JIRA)" <ji...@apache.org> on 2016/02/15 08:49:18 UTC

[jira] [Updated] (SQOOP-2840) Sqoop Hcat Int partition Error

     [ https://issues.apache.org/jira/browse/SQOOP-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Biswajit Nayak updated SQOOP-2840:
----------------------------------
    Description: 
Hi All, 

I am trying to do a Sqoop export from Hive (a table with an integer-typed partition column) to MySQL, and it fails with the following error.

Versions:-

{code}

Hadoop :-  2.7.1
Hive      :-  1.2.0
Sqoop   :-  1.4.5

{code}

Table in Hive :-

{code}
hive> use default;
OK
Time taken: 0.028 seconds
hive> describe emp_details1;
OK
id                      int                                         
name                    string                                      
deg                     string                                      
dept                    string                                      
salary                  int                                         

# Partition Information      
# col_name              data_type               comment             

salary                  int                                         
Time taken: 0.125 seconds, Fetched: 10 row(s)
hive> 

hive> select * from emp_details1;
OK
1201    gopal           50000
1202    manisha         50000
1203    kalil           50000
1204    prasanth        50000
1205    kranthi         50000
1206    satish          50000
Time taken: 0.195 seconds, Fetched: 6 row(s)
hive>
{code}
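
For reference, a minimal DDL sketch that would reproduce this layout (the storage format and delimiter are assumptions; only the column names, types and the int partition key come from the describe output above):

{code}
-- Hypothetical reproduction of default.emp_details1. Text storage and the
-- tab delimiter are assumptions; the schema matches "describe emp_details1".
CREATE TABLE emp_details1 (
  id   INT,
  name STRING,
  deg  STRING,
  dept STRING)
PARTITIONED BY (salary INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
{code}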

Conf added to the Hive metastore hive-site.xml :-

{code}
[alti-test-01@hdpnightly271-ci-91-services ~]$ grep -A5 -B2 -i "hive.metastore.integral.jdo.pushdown" /etc/hive-metastore/hive-site.xml 
    </property>
    <property>
        <name>hive.metastore.integral.jdo.pushdown</name>
        <value>TRUE</value>
    </property>

</configuration>
[alti-test-01@hdpnightly271-ci-91-services ~]$
{code}

The issue remains the same :-

{code}
[alti-test-01@hdpnightly271-ci-91-services ~]$ /opt/sqoop-1.4.5/bin/sqoop export --connect jdbc:mysql://localhost:3306/test --username hive --password ********* --table employee --hcatalog-database default --hcatalog-table emp_details1
Warning: /opt/sqoop-1.4.5/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /opt/sqoop-1.4.5/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/sqoop-1.4.5/bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
16/02/12 08:04:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
16/02/12 08:04:00 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/12 08:04:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/02/12 08:04:00 INFO tool.CodeGenTool: Beginning code generation
16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop
Note: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/02/12 08:04:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.jar
16/02/12 08:04:07 INFO mapreduce.ExportJobBase: Beginning export of employee
16/02/12 08:04:08 INFO mapreduce.ExportJobBase: Configuring HCatalog for export job
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Configuring HCatalog specific details for job
16/02/12 08:04:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column names projected : [id, name, deg, salary, dept]
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column name - info map :
    id : [Type : 4,Precision : 11,Scale : 0]
    name : [Type : 12,Precision : 20,Scale : 0]
    deg : [Type : 12,Precision : 20,Scale : 0]
    salary : [Type : 4,Precision : 11,Scale : 0]
    dept : [Type : 12,Precision : 10,Scale : 0]

16/02/12 08:04:10 INFO hive.metastore: Trying to connect to metastore with URI thrift://hive-hdpnightly271-ci-91.test.altiscale.com:9083
16/02/12 08:04:10 INFO hive.metastore: Connected to metastore.
16/02/12 08:04:11 INFO hcat.SqoopHCatUtilities: HCatalog full table schema fields = [id, name, deg, dept, salary]
16/02/12 08:04:12 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: The table provided default.emp_details1 uses unsupported  partitioning key type  for column salary : int.  Only string fields are allowed in partition columns in Catalog
{code}


This issue was discussed in [HIVE-2702] and was supposed to be fixed in Hive 0.12, but I am still hitting it with Hive 1.2.0.
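
A possible workaround (just a sketch, not verified in this environment) is to stage the data into a copy of the table whose partition column is STRING, since that is the only partition key type the Sqoop HCatalog integration accepts here, and run the export against that staging table instead. The staging table name below is hypothetical:

{code}
-- Hypothetical staging table with a string partition key.
CREATE TABLE emp_details1_str (
  id   INT,
  name STRING,
  deg  STRING,
  dept STRING)
PARTITIONED BY (salary STRING);

-- Dynamic partitioning is needed because the partition value comes from the data.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE emp_details1_str PARTITION (salary)
SELECT id, name, deg, dept, CAST(salary AS STRING) AS salary
FROM emp_details1;
{code}

The sqoop export command above could then point at --hcatalog-table emp_details1_str.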


> Sqoop Hcat Int partition Error
> ------------------------------
>
>                 Key: SQOOP-2840
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2840
>             Project: Sqoop
>          Issue Type: Bug
>          Components: metastore
>    Affects Versions: 1.4.5
>            Reporter: Biswajit Nayak
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)