Posted to dev@hive.apache.org by Oleksiy S <os...@gmail.com> on 2018/04/20 07:54:07 UTC

Does Hive support Hbase-synced partitioned tables?

Hi all,

I can create the following table:

create table hbase_partitioned(doc_id STRING, EmployeeID Int,
    FirstName String, Designation String, Salary Int)
PARTITIONED BY (Department String)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" =
    ":key,boolsCF:EmployeeID,intsCF:FirstName,intsCF:Designation,intsCF:Salary")
TBLPROPERTIES ("hbase.table.name" = "hbase_partitioned");


But when I try to insert data, I get an exception. Is this the expected
behavior?

INSERT INTO TABLE hbase_partitioned PARTITION(department='A') values
('1', 1, 'John Connor', 'New York', 2300),
('2', 2, 'Max Plank', 'Las Vegas', 1300),
('3', 3, 'Arni Shwarz', 'Los Angelos', 7700),
('4', 4, 'Sarah Connor', 'Oakland', 9700);



WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the
future versions. Consider using a different execution engine (i.e. spark,
tez) or using Hive 1.X releases.
Query ID = mapr_20180420074356_b13d8652-1ff6-4fe1-975c-7318db6037de
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1136)
at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:271)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:434)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2074)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1745)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1454)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1162)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:238)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:186)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:405)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:791)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:729)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:652)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:647)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1158)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1133)
... 38 more
Caused by: java.lang.IllegalArgumentException: Must specify table name
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:191)
at org.apache.hive.common.util.ReflectionUtil.setConf(ReflectionUtil.java:101)
at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:87)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:314)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:292)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1156)
... 39 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name


-- 
Oleksiy

Re: Does Hive support Hbase-synced partitioned tables?

Posted by Oleksiy S <os...@gmail.com>.
Thanks for the answer, Furcy.

On Sun, Apr 22, 2018 at 8:59 PM, Furcy Pin <pi...@gmail.com> wrote:

> Hi Oleksiy,
>
> I must say that I don't know if partitioned HBase-backed tables are
> supported in Hive, but I don't understand why you would need it. What are
> you trying to do exactly? I suspect that you could do it by using composite
> keys (Department, doc_id).
>
>
> Also, I would advise against using multiple column families for the
> example you are describing. I don't think it would lead to better
> performance.
>
> Hope this helps,
>
> Furcy


-- 
Oleksiy

Re: Does Hive support Hbase-synced partitioned tables?

Posted by Furcy Pin <pi...@gmail.com>.
Hi Oleksiy,

I must say that I don't know if partitioned HBase-backed tables are
supported in Hive, but I don't understand why you would need it. What are
you trying to do exactly? I suspect that you could do it by using composite
keys (Department, doc_id).
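
For illustration, a composite-key version of that table might look like
the sketch below. This is untested: the single column family name `cf`
and the `Department|doc_id` key encoding are assumptions made for the
example, not a confirmed recipe.

```sql
-- Sketch: fold Department into the HBase row key instead of using a
-- Hive partition. The key would be stored as e.g. 'A|1'
-- (Department, then doc_id).
CREATE TABLE hbase_composite (
  dept_doc_id STRING,          -- composite row key: Department + '|' + doc_id
  EmployeeID  INT,
  FirstName   STRING,
  Designation STRING,
  Salary      INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" =
    ":key,cf:EmployeeID,cf:FirstName,cf:Designation,cf:Salary")
TBLPROPERTIES ("hbase.table.name" = "hbase_composite");

-- All rows of one department then form a contiguous key range, so a
-- key-prefix predicate plays the role of partition pruning:
-- SELECT * FROM hbase_composite WHERE dept_doc_id LIKE 'A|%';
```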


Also, I would advise against using multiple column families for the example
you are describing. I don't think it would lead to better performance.

Hope this helps,

Furcy




Re: Does Hive support Hbase-synced partitioned tables?

Posted by Oleksiy S <os...@gmail.com>.
Any updates?




-- 
Oleksiy

Re: Does Hive support Hbase-synced partitioned tables?

Posted by Oleksiy S <os...@gmail.com>.
Any updates?

On Fri, Apr 20, 2018 at 10:54 AM, Oleksiy S <os...@gmail.com>
wrote:

> Hi all.
>
> I can create following table
>
> create table hbase_partitioned(doc_id STRING, EmployeeID Int, FirstName
> String, Designation  String, Salary Int) PARTITIONED BY (Department String)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
> SERDEPROPERTIES ("hbase.columns.mapping" = ":key,boolsCF:EmployeeID,
> intsCF:FirstName,intsCF:Designation,intsCF:Salary") TBLPROPERTIES("
> hbase.table.name" = "hbase_partitioned");
>
>
> But when I want to insert data, I have an exception. Is it expected
> behavior?
>
> INSERT INTO TABLE hbase_partitioned PARTITION(department='A') values
> ('1', 1, 'John Connor', 'New York', 2300),
> ('2', 2, 'Max Plank', 'Las Vegas', 1300),
> ('3', 3, 'Arni Shwarz', 'Los Angelos', 7700),
> ('4', 4, 'Sarah Connor', 'Oakland', 9700);
>
>
>
> WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in
> the future versions. Consider using a different execution engine (i.e.
> spark, tez) or using Hive 1.X releases.
> Query ID = mapr_20180420074356_b13d8652-1ff6-4fe1-975c-7318db6037de
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1136)
> at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
> at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:271)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:434)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2074)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1745)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1454)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1172)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1162)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:238)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:186)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:405)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:791)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:729)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:652)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:647)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1158)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1133)
> ... 38 more
> Caused by: java.lang.IllegalArgumentException: Must specify table name
> at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:191)
> at org.apache.hive.common.util.ReflectionUtil.setConf(ReflectionUtil.java:101)
> at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:87)
> at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:314)
> at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:292)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1156)
> ... 39 more
> Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name)'
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
>
>
> --
> Oleksiy
>



-- 
Oleksiy