Posted to user@phoenix.apache.org by Parveen Jain <pa...@live.com> on 2016/10/23 04:48:37 UTC

PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

While running this query through the Spark Phoenix connector:


select distinct(C_TXN.CUSTMR_ID) from CUS_TXN
  where (CUS_TXN.TXN_TYPE='xxxx') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101')
  group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>=300
union all
select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN
  where (CUS_TXN.TXN_TYPE='yyyy') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101')
  group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>100



I get the exception below:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found, got: hbase:namespace.
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(Phoen



My code for fetching the records is:

// Phoenix MapReduce input/output setup (configuration and jsc are created earlier).
PhoenixConfigurationUtil.setInputTableName(configuration, TABLE_NAME);
PhoenixConfigurationUtil.setOutputTableName(configuration, TABLE_NAME);
PhoenixConfigurationUtil.setInputQuery(configuration, QueryToRun);
PhoenixConfigurationUtil.setInputClass(configuration, DataRecord.class);

configuration.setClass(JobContext.OUTPUT_FORMAT_CLASS_ATTR, PhoenixOutputFormat.class, OutputFormat.class);

// Build the RDD over the Phoenix input splits.
@SuppressWarnings("unchecked")
JavaPairRDD<NullWritable, DataRecord> stocksRDD = jsc.newAPIHadoopRDD(
        configuration,
        PhoenixInputFormat.class,
        NullWritable.class,
        DataRecord.class);
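
For reference, the class passed to PhoenixConfigurationUtil.setInputClass is expected to implement DBWritable so that PhoenixRecordReader can populate it from the query's ResultSet. A minimal sketch of what such a DataRecord could look like (the field and column names here are assumptions based on the query above, not the actual class):

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical sketch of the DataRecord writable; adjust the columns to the real schema.
public class DataRecord implements DBWritable {

    private String custmrId;

    // Called by the Phoenix record reader for every row returned by the input query.
    @Override
    public void readFields(ResultSet rs) throws SQLException {
        custmrId = rs.getString("CUSTMR_ID");
    }

    // Used on the output side when rows are written back through PhoenixOutputFormat.
    @Override
    public void write(PreparedStatement pstmt) throws SQLException {
        pstmt.setString(1, custmrId);
    }

    public String getCustmrId() {
        return custmrId;
    }
}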


Any pointer on why this could be happening?

Regards,
Parveen Jain


Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

Posted by Parveen Jain <pa...@live.com>.
Hi Ankit,
Thanks for the response. My phoenix.query.timeoutMs is already increased to 3660000:

    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>3660000</value>
    </property>
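
In case that hbase-site.xml is not on the classpath of the client that actually runs the query, one option is to pass the same keys programmatically. A minimal sketch for a plain Phoenix JDBC connection (the ZooKeeper quorum "zkhost:2181" and the values are placeholders, not taken from this thread):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PhoenixTimeoutSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        // Phoenix client-side query timeout; the HBase RPC/scanner timeouts
        // generally need to be at least this large as well.
        props.setProperty("phoenix.query.timeoutMs", "3660000");
        props.setProperty("hbase.rpc.timeout", "3660000");
        props.setProperty("hbase.client.scanner.timeout.period", "3660000");

        // "zkhost:2181" is a placeholder for the actual ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181", props)) {
            // run the statement here
        }
    }
}

For the Spark/MapReduce path, the same keys can be set on the Hadoop Configuration that is handed to PhoenixInputFormat, e.g. configuration.set("phoenix.query.timeoutMs", "3660000").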

A few more questions just to clear my path:
1) Can I use Spark to connect to Phoenix for running queries that have aggregate functions? I felt I could not, but wanted to confirm. If yes or no, can you be more specific about when it gives correct results and when it does not?
2) Can I try using the Phoenix thin client instead of the Phoenix JDBC driver to get around this problem? (See the sketch after these questions.)
3) Is Phoenix thin client support available with Apache Spark?
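
On question 2: connecting through the Phoenix Query Server ("thin" client) is mostly a matter of a different JDBC driver and URL; whether it avoids this particular error is a separate question. A minimal sketch, assuming a Query Server is running (the host, port, and query are placeholders, and the exact URL options depend on the Phoenix version):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientSketch {
    public static void main(String[] args) throws Exception {
        // Driver shipped with the Phoenix Query Server thin-client jar.
        Class.forName("org.apache.phoenix.queryserver.client.Driver");

        // "pqs-host:8765" is a placeholder for the actual Query Server endpoint.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:thin:url=http://pqs-host:8765");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM CUS_TXN")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}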


Regards,

Parveen Jain

________________________________
From: Ankit Singhal <an...@gmail.com>
Sent: Sunday, October 23, 2016 6:43:41 PM
To: user
Subject: Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs).

https://phoenix.apache.org/tuning.html

On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain <pa...@live.com> wrote:

Hi all,

I just realized that Phoenix doesn't support "group by" and "distinct" when the Phoenix MapReduce integration is used. It seems the approach below uses Phoenix MapReduce, which is not suitable for this type of query.

Now I want to run the query below by any means. My table has more than 70 million records, and I could not run the query using "sqlline.py", "SQuirreL", or a plain Phoenix JDBC connection from a Java program. In all three cases I got a connection timeout error. I tried increasing various timeouts in HBase (hbase.rpc.timeout -> 3660000), and I even checked which HBase config path is picked up using "./phoenix_utils.py | grep hbase_conf_path", but no luck. I am OK with the query taking more time, but I want it to run successfully without errors.
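
If the main goal is just to get the aggregation done from Spark, one option worth trying is the phoenix-spark integration: load CUS_TXN as a DataFrame (parallel scans rather than one long-running statement) and express the GROUP BY / HAVING / UNION ALL in Spark itself. A rough sketch in the Spark 1.x Java API, assuming the phoenix-spark module is on the classpath and reusing the jsc from the snippet below (the zkUrl is a placeholder and the column handling is unverified):

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.substring;
import static org.apache.spark.sql.functions.sum;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SQLContext sqlContext = new SQLContext(jsc);

// Load the Phoenix table as a DataFrame; "zkhost:2181" is a placeholder quorum.
DataFrame txns = sqlContext.read()
    .format("org.apache.phoenix.spark")
    .option("table", "CUS_TXN")
    .option("zkUrl", "zkhost:2181")
    .load();

// First branch of the UNION ALL: TXN_TYPE = 'xxxx', sum(TXN_AMOUNT) >= 300.
// Phoenix substr(ROW_KEY,0,8) corresponds to a 1-based substring of length 8.
DataFrame branch1 = txns
    .filter(col("TXN_TYPE").equalTo("xxxx")
        .and(substring(col("ROW_KEY"), 1, 8).geq("20160101")))
    .groupBy(col("CUSTMR_ID"))
    .agg(sum("TXN_AMOUNT").alias("TOTAL"))
    .filter(col("TOTAL").geq(300))
    .select("CUSTMR_ID");

// Second branch: TXN_TYPE = 'yyyy', sum(TXN_AMOUNT) > 100.
DataFrame branch2 = txns
    .filter(col("TXN_TYPE").equalTo("yyyy")
        .and(substring(col("ROW_KEY"), 1, 8).geq("20160101")))
    .groupBy(col("CUSTMR_ID"))
    .agg(sum("TXN_AMOUNT").alias("TOTAL"))
    .filter(col("TOTAL").gt(100))
    .select("CUSTMR_ID");

DataFrame result = branch1.unionAll(branch2);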

HBase - 1.1.2.2.4.2.0-258
Phoenix - phoenix-4.4.0.2.4.2.0-258

Can anyone provide any suggestions?

Regards,
Parveen Jain

________________________________
From: Parveen Jain <pa...@live.com>
Sent: Sunday, October 23, 2016 10:18 AM
To: user@phoenix.apache.org
Subject: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found


While running this query from spark phoenix connector:


select distinct(C_TXN.CUSTMR_ID) from CUS_TXN where (CUS_TXN.TXN_TYPE='xxxx') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101') group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>=300 union all select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN where (CUS_TXN.TXN_TYPE='yyyy') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101') group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>100



I get the exception below:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found, got: hbase:namespace.
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(Phoen



my code for fetching records is:

PhoenixConfigurationUtil.setInputTableName(configuration , TABLE_NAME);
PhoenixConfigurationUtil.setOutputTableName(configuration ,TABLE_NAME);
PhoenixConfigurationUtil.setInputQuery(configuration, QueryToRun);
PhoenixConfigurationUtil.setInputClass(configuration, DataRecord.class);

configuration.setClass(JobContext.OUTPUT_FORMAT_CLASS_ATTR,PhoenixOutputFormat.class, OutputFormat.class);

@SuppressWarnings("unchecked")
JavaPairRDD<NullWritable, DataRecord> stocksRDD = jsc.newAPIHadoopRDD(
configuration,
PhoenixInputFormat.class,
NullWritable.class,
DataRecord.class);


Any pointer on why this could be happening?

Regards,
Parveen Jain



Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

Posted by Ankit Singhal <an...@gmail.com>.
You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs).

https://phoenix.apache.org/tuning.html

On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain <pa...@live.com> wrote:

> Hi all,
>
> I just realized that Phoenix doesn't support "group by" and "distinct"
> when the Phoenix MapReduce integration is used. It seems the approach below
> uses Phoenix MapReduce, which is not suitable for this type of query.
>
> Now I want to run the query below by any means. My table has more than 70
> million records, and I could not run the query using "sqlline.py", "SQuirreL",
> or a plain Phoenix JDBC connection from a Java program. In all three cases I
> got a connection timeout error. I tried increasing various timeouts in HBase
> (hbase.rpc.timeout -> 3660000), and I even checked which HBase config path is
> picked up using "./phoenix_utils.py | grep hbase_conf_path", but no luck. I am
> OK with the query taking more time, but I want it to run successfully without
> errors.
>
> HBase - 1.1.2.2.4.2.0-258
> Phoenix - phoenix-4.4.0.2.4.2.0-258
>
> Can anyone provide any suggestions?
>
> Regards,
> Parveen Jain
>
> ------------------------------
> From: Parveen Jain <pa...@live.com>
> Sent: Sunday, October 23, 2016 10:18 AM
> To: user@phoenix.apache.org
> Subject: PhoenixIOException: Table 'unionSchemaName.unionTableName' was
> not found
>
>
> While running this query from spark phoenix connector:
>
>
> select distinct(C_TXN.CUSTMR_ID) from CUS_TXN where
> (CUS_TXN.TXN_TYPE='xxxx') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101')
> group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>=300 union all
> select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN where
> (CUS_TXN.TXN_TYPE='yyyy') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101')
> group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>100
>
>
>
> I get the exception below:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found, got: hbase:namespace.
>         at com.google.common.base.Throwables.propagate(Throwables.java:160)
>         at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(Phoen
>
>
> my code for fetching records is:
>
> PhoenixConfigurationUtil.setInputTableName(configuration , TABLE_NAME);
> PhoenixConfigurationUtil.setOutputTableName(configuration ,TABLE_NAME);
> PhoenixConfigurationUtil.setInputQuery(configuration, QueryToRun);
> PhoenixConfigurationUtil.setInputClass(configuration, DataRecord.class);
>
> configuration.setClass(JobContext.OUTPUT_FORMAT_CLASS_ATTR,PhoenixOutputFormat.class,
> OutputFormat.class);
>
> @SuppressWarnings("unchecked")
> JavaPairRDD<NullWritable, DataRecord> stocksRDD = jsc.newAPIHadoopRDD(
> configuration,
> PhoenixInputFormat.class,
> NullWritable.class,
> DataRecord.class);
>
>
> Any pointer on why this could be happening?
>
> Regards,
> Parveen Jain
>
>
>

Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

Posted by Parveen Jain <pa...@live.com>.
Hi all,

I just realized that Phoenix doesn't support "group by" and "distinct" when the Phoenix MapReduce integration is used. It seems the approach below uses Phoenix MapReduce, which is not suitable for this type of query.

Now I want to run the query below by any means. My table has more than 70 million records, and I could not run the query using "sqlline.py", "SQuirreL", or a plain Phoenix JDBC connection from a Java program. In all three cases I got a connection timeout error. I tried increasing various timeouts in HBase (hbase.rpc.timeout -> 3660000), and I even checked which HBase config path is picked up using "./phoenix_utils.py | grep hbase_conf_path", but no luck. I am OK with the query taking more time, but I want it to run successfully without errors.

HBase - 1.1.2.2.4.2.0-258
Phoenix - phoenix-4.4.0.2.4.2.0-258

Can anyone provide any suggestions?

Regards,
Parveen Jain

________________________________
From: Parveen Jain <pa...@live.com>
Sent: Sunday, October 23, 2016 10:18 AM
To: user@phoenix.apache.org
Subject: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found


While running this query from spark phoenix connector:


select distinct(C_TXN.CUSTMR_ID) from CUS_TXN where (CUS_TXN.TXN_TYPE='xxxx') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101') group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>=300 union all select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN where (CUS_TXN.TXN_TYPE='yyyy') and (substr(CUS_TXN.ROW_KEY,0,8)>='20160101') group by CUS_TXN.CUSTMR_ID having sum(CUS_TXN.TXN_AMOUNT)>100



I get the exception below:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found, got: hbase:namespace.
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(Phoen



my code for fetching records is:

PhoenixConfigurationUtil.setInputTableName(configuration , TABLE_NAME);
PhoenixConfigurationUtil.setOutputTableName(configuration ,TABLE_NAME);
PhoenixConfigurationUtil.setInputQuery(configuration, QueryToRun);
PhoenixConfigurationUtil.setInputClass(configuration, DataRecord.class);

configuration.setClass(JobContext.OUTPUT_FORMAT_CLASS_ATTR,PhoenixOutputFormat.class, OutputFormat.class);

@SuppressWarnings("unchecked")
JavaPairRDD<NullWritable, DataRecord> stocksRDD = jsc.newAPIHadoopRDD(
configuration,
PhoenixInputFormat.class,
NullWritable.class,
DataRecord.class);


Any pointer on why this could be happening?

Regards,
Parveen Jain