Posted to user@sqoop.apache.org by Sowjanya Kakarala <so...@agrible.com> on 2018/07/27 16:40:30 UTC
Sqoop hcatalog-partition-keys for int
Hi everyone,
The example for the new command-line options --hcatalog-partition-keys and
--hcatalog-partition-values in the Sqoop docs shows integer values, but when
I try that it throws an error saying only string partition keys are supported.
https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_new_command_line_options
SQOOP COMMAND:
sqoop import --connect host:5432/db \
  --query "select yr2018[1] as data, '2018-01-01' as time_stamp from tbl where cell_id=1 and \$CONDITIONS" \
  --username uname --password 'pwd' \
  --hcatalog-database db --hcatalog-table tbl \
  --hcatalog-partition-keys cell_id --hcatalog-partition-values 1 \
  --map-column-java time_stamp=String
ERROR:
18/07/27 16:34:12 ERROR tool.ImportTool: Import failed: java.io.IOException: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setFilter(HCatInputFormat.java:120)
	at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:391)
	at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:850)
	at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:102)
	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:263)
	at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:748)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:522)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result.read(ThriftHiveMetastore.java)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_by_filter(ThriftHiveMetastore.java:2599)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_by_filter(ThriftHiveMetastore.java:2583)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
	at com.sun.proxy.$Proxy5.listPartitionsByFilter(Unknown Source)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:113)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:88)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setFilter(HCatInputFormat.java:118)
	... 13 more
Re: Sqoop hcatalog-partition-keys for int
Posted by Venkat <ve...@gmail.com>.
The HCatalog APIs support only partition keys of type string (or
char/varchar), even though Hive itself allows more types for partition
keys.
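
One common workaround, if the table definition can be changed, is to
declare the partition column as STRING on the Hive side and pass the
partition value as a string. A sketch reusing the names from the original
command (the non-partition column types and storage format here are
assumptions):

```sql
-- Hypothetical Hive DDL: cell_id is declared STRING so that HCatalog
-- partition filtering works. Column types for data/time_stamp are guesses.
CREATE TABLE db.tbl (
  data DOUBLE,
  time_stamp STRING
)
PARTITIONED BY (cell_id STRING)
STORED AS TEXTFILE;
```

The Sqoop invocation can then stay as-is; --hcatalog-partition-values 1
should be accepted, since the value is treated as the string '1'.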
Venkat
On Fri, Jul 27, 2018 at 9:40 AM Sowjanya Kakarala <so...@agrible.com> wrote:
>
> Hi everyone,
>
> [original message and stack trace quoted above, trimmed]
>
--
Regards
Venkat