Posted to user@phoenix.apache.org by Mich Talebzadeh <mi...@gmail.com> on 2016/10/23 14:39:54 UTC

Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Hi,

My stack

HBase: hbase-1.2.3
Phoenix: apache-phoenix-4.8.1-HBase-1.2-bin


Following a suggestion, I tried to load a CSV file via
org.apache.phoenix.mapreduce.CsvBulkLoadTool.

So I created a dummy table in HBase as below:

create 'dummy', 'price_info'

Then in Phoenix I created a table on top of the HBase table:


create table "dummy" (PK VARCHAR PRIMARY KEY, "price_info"."ticker"
VARCHAR,"price_info"."timecreated" VARCHAR, "price_info"."price" VARCHAR);

And then used the following command to load the CSV file:

HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf \
hadoop jar phoenix-4.8.1-HBase-1.2-client.jar \
org.apache.phoenix.mapreduce.CsvBulkLoadTool \
--table dummy --input /data/prices/2016-10-23/prices.1477228923115

However, it does not seem to find the table dummy!

2016-10-23 14:38:39,442 INFO  [main] metrics.Metrics: Initializing metrics system: phoenix
2016-10-23 14:38:39,479 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-10-23 14:38:39,529 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-10-23 14:38:39,529 INFO  [main] impl.MetricsSystemImpl: phoenix metrics system started
Exception in thread "main" java.lang.IllegalArgumentException: Table DUMMY not found
        at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:873)
        at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
        at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:214)
        at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:101)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

I tried putting it inside double quotes ("") etc. but no joy, I am afraid!
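A likely explanation, as a hedged aside: the shell strips bare double quotes
before the JVM sees them, so --table "dummy" reaches Phoenix as the unquoted
name dummy, which Phoenix folds to DUMMY. A minimal sketch of the escaping
that would be needed, assuming a POSIX shell and that the tool honours
double-quoted, case-sensitive identifiers:

HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf \
hadoop jar phoenix-4.8.1-HBase-1.2-client.jar \
org.apache.phoenix.mapreduce.CsvBulkLoadTool \
--table '"dummy"' --input /data/prices/2016-10-23/prices.1477228923115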

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by Ted Yu <yu...@gmail.com>.
Looks like the user experience could be improved (by enriching the exception
message) for the case where table abc can be found but table ABC cannot.

Cheers


Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by anil gupta <an...@gmail.com>.
I am not sure why you are creating the underlying HBase tables yourself. If
you create a table in Phoenix, it will create the HBase table automatically
with the correct name. If you are using Phoenix, I would recommend letting
Phoenix handle the interaction with HBase, as in the sketch below.
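For illustration, a minimal sketch reusing the DDL that appears later in this
thread: the single Phoenix statement below both defines the Phoenix table and
creates the backing HBase table DUMMY with column family PRICE_INFO, so no
prior HBase shell create is needed.

create table DUMMY (PK VARCHAR PRIMARY KEY, PRICE_INFO.TICKER VARCHAR,
PRICE_INFO.TIMECREATED VARCHAR, PRICE_INFO.PRICE VARCHAR);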



-- 
Thanks & Regards,
Anil Gupta

Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks gents

I dropped the table and recreated it with the table name and columns in
UPPERCASE as follows:

create table DUMMY (PK VARCHAR PRIMARY KEY, PRICE_INFO.TICKER VARCHAR,
PRICE_INFO.TIMECREATED VARCHAR, PRICE_INFO.PRICE VARCHAR);

and used the command below, passing the table name in UPPERCASE as well:

HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf \
hadoop jar /usr/lib/hbase/lib/phoenix-4.8.1-HBase-1.2-client.jar \
org.apache.phoenix.mapreduce.CsvBulkLoadTool \
--table DUMMY --input /data/prices/2016-10-23/prices.1477228923115

and this worked!

2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool: Incremental load complete for table=DUMMY
2016-10-23 17:20:33,089 INFO  [main] mapreduce.AbstractBulkLoadTool: Removing output directory /tmp/261410fb-14d5-49fc-a717-dd0469db1673
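As a quick sanity check after the load, a minimal follow-up (assuming a
sqlline.py session against the same cluster; this query is illustrative, not
from the original mail):

SELECT COUNT(*) FROM DUMMY;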

It would be helpful if the documentation were updated to reflect this.

So, bottom line: should I create Phoenix tables and columns in UPPERCASE
regardless of the case of the underlying HBase table?

Thanks



Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by anil gupta <an...@gmail.com>.
Hi Mich,

It's recommended to use upper case for table and column names so that you
don't have to explicitly quote them.

~Anil




Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by Ravi Kiran <ma...@gmail.com>.
Sorry, I meant to say table names are case sensitive.


Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by Ravi Kiran <ma...@gmail.com>.
Hi Mich,
   Apparently, table names are case sensitive. Since you enclosed the name in
double quotes when creating the table, please pass it the same way when
running the bulk load job.

HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf \
hadoop jar phoenix-4.8.1-HBase-1.2-client.jar \
org.apache.phoenix.mapreduce.CsvBulkLoadTool \
--table "dummy" --input /data/prices/2016-10-23/prices.1477228923115
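To make the rule concrete, a minimal sketch of standard Phoenix identifier
semantics (these statements are illustrative, not from the original mail):

create table DUMMY (PK VARCHAR PRIMARY KEY);    -- unquoted: normalized to DUMMY
create table "dummy" (PK VARCHAR PRIMARY KEY);  -- quoted: stays lower-case dummy
select * from dummy;                            -- unquoted reference resolves to DUMMY
select * from "dummy";                          -- quoted reference resolves to "dummy"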

Regards



Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

Posted by Mich Talebzadeh <mi...@gmail.com>.
Not sure whether phoenix-4.8.1-HBase-1.2-client.jar is the correct jar file?

Thanks
