Posted to user@spark.apache.org by Timur Shenkao <ts...@timshenkao.su> on 2016/11/30 07:34:05 UTC

Re: Can't read tables written in Spark 2.1 in Spark 2.0 (and earlier)

Hi!

Do you have a real Hive installation?
Have you built Spark 2.1 and Spark 2.0 with Hive support (-Phive
-Phive-thriftserver)?

It seems that you are using Spark's bundled "default" Hive 1.2.1. In that
case your metadata is stored in a local Derby DB, which is visible to one
particular Spark installation but not to the others.
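
For example, a Hive-enabled build typically looks like this (a sketch; the
exact set of profiles depends on your Hadoop version and environment):

    ./build/mvn -Phive -Phive-thriftserver -DskipTests clean package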

On Wed, Nov 30, 2016 at 4:51 AM, Michael Allman <mi...@videoamp.com>
wrote:

> This is not an issue with all tables created in Spark 2.1, though I'm not
> sure why some work and some do not. I have found that a table created as
> follows
>
> sql("create table test stored as parquet as select 1")
>
> in Spark 2.1 cannot be read in previous versions of Spark.
>
> Michael
>
>
> > On Nov 29, 2016, at 5:15 PM, Michael Allman <mi...@videoamp.com> wrote:
> >
> > Hello,
> >
> > When I try to read from a Hive table created by Spark 2.1 in Spark 2.0
> > or earlier, I get an error:
> >
> > java.lang.ClassNotFoundException: Failed to load class for data source: hive.
> >
> > Is there a way to get previous versions of Spark to read tables written
> > with Spark 2.1?
> >
> > Cheers,
> >
> > Michael
>
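
A minimal sketch of the read side that fails, assuming a Spark 2.0 shell
pointed at the same metastore (the table name and the error line are taken
from Michael's report above; the exact read call he used is not specified):

    // in a Spark 2.0 spark-shell, same metastore as the 2.1 session
    spark.table("test").show()
    // java.lang.ClassNotFoundException: Failed to load class for data source: hive.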

Re: Can't read tables written in Spark 2.1 in Spark 2.0 (and earlier)

Posted by Reynold Xin <rx...@databricks.com>.
This should fix it: https://github.com/apache/spark/pull/16080

Re: Can't read tables written in Spark 2.1 in Spark 2.0 (and earlier)

Posted by Timur Shenkao <ts...@timshenkao.su>.
Hello,

Yes, I have used hiveContext, sqlContext, and sparkSession from Java, Scala,
and Python, via spark-shell, spark-submit, and IDEs (PyCharm, IntelliJ IDEA).
Everything works because I have a Hadoop cluster with a configured and tuned
Hive installation.

The usual cause of an error like Michael's is a misconfigured or absent Hive
installation, or possibly a missing hive-site.xml in the $SPARK_HOME/conf/
directory.
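
For illustration, a minimal hive-site.xml that points Spark at an external
metastore could look like this (host and port are placeholders):

    <configuration>
      <property>
        <name>hive.metastore.uris</name>
        <value>thrift://metastore-host:9083</value>
      </property>
    </configuration>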


Re: Can't read tables written in Spark 2.1 in Spark 2.0 (and earlier)

Posted by Gourav Sengupta <go...@gmail.com>.
Hi Timur,

did you use hiveContext, sqlContext, or the SparkSession approach described
in http://spark.apache.org/docs/latest/sql-programming-guide.html?
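
(For concreteness, the SparkSession route from that guide looks roughly like
the following sketch; the app name and query are placeholders:)

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("HiveExample")
      .enableHiveSupport()  // requires a Spark build with -Phive
      .getOrCreate()

    spark.sql("SELECT * FROM test").show()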


Regards,
Gourav Sengupta


Re: Can't read tables written in Spark 2.1 in Spark 2.0 (and earlier)

Posted by Yin Huai <yh...@databricks.com>.
Hello Michael,

Thank you for reporting this issue. It will be fixed by
https://github.com/apache/spark/pull/16080.

Thanks,

Yin
