Posted to dev@spark.apache.org by Jeff Zhang <zj...@gmail.com> on 2015/11/16 09:03:56 UTC

Does anyone meet the issue that jars under lib_managed are never downloaded?

Sometimes the jars under lib_managed are missing, and even after I rebuild
Spark the jars under lib_managed are still not downloaded. This causes
spark-shell to fail because of the missing jars. Has anyone hit this weird issue?



-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed are never downloaded?

Posted by Jeff Zhang <zj...@gmail.com>.
Created https://issues.apache.org/jira/browse/SPARK-11798



On Wed, Nov 18, 2015 at 9:42 AM, Josh Rosen <jo...@databricks.com>
wrote:

> Can you file a JIRA issue to help me triage this further? Thanks!
>
> On Tue, Nov 17, 2015 at 4:08 PM Jeff Zhang <zj...@gmail.com> wrote:
>
>> Sure, hive profile is enabled.
>>
>> On Wed, Nov 18, 2015 at 6:12 AM, Josh Rosen <jo...@databricks.com>
>> wrote:
>>
>>> Is the Hive profile enabled? I think it may need to be turned on in
>>> order for those JARs to be deployed.
>>>
>>> On Tue, Nov 17, 2015 at 2:27 AM Jeff Zhang <zj...@gmail.com> wrote:
>>>
>>>> BTW, after I revert SPARK-7841, I can see all the jars under
>>>> lib_managed/jars.
>>>>
>>>> On Tue, Nov 17, 2015 at 2:46 PM, Jeff Zhang <zj...@gmail.com> wrote:
>>>>
>>>>> Hi Josh,
>>>>>
>>>>> I noticed the comments in https://github.com/apache/spark/pull/9575
>>>>> saying that the Datanucleus-related jars will still be copied to
>>>>> lib_managed/jars, but I don't see any jars under lib_managed/jars.
>>>>> The weird thing is that I see the jars on another machine, but not on
>>>>> my laptop, even after I delete the whole Spark project and start from
>>>>> scratch. Could it be related to the environment? I added the following
>>>>> code to SparkBuild.scala to track the issue, and it shows that the jars
>>>>> list is empty. Any thoughts on that?
>>>>>
>>>>>
>>>>> deployDatanucleusJars := {
>>>>>   val jars: Seq[File] = (fullClasspath in assembly).value.map(_.data)
>>>>>     .filter(_.getPath.contains("org.datanucleus"))
>>>>>   // this is what I added for debugging
>>>>>   println("*********************************************")
>>>>>   println("fullClasspath:" + fullClasspath)
>>>>>   println("assembly:" + assembly)
>>>>>   println("jars:" + jars.map(_.getAbsolutePath()).mkString(","))
>>>>>   //
>>>>>
>>>>>
>>>>> On Mon, Nov 16, 2015 at 4:51 PM, Jeff Zhang <zj...@gmail.com> wrote:
>>>>>
>>>>>> This is the exception I got
>>>>>>
>>>>>> 15/11/16 16:50:48 WARN metastore.HiveMetaStore: Retrying creating
>>>>>> default database after error: Class
>>>>>> org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
>>>>>> javax.jdo.JDOFatalUserException: Class
>>>>>> org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
>>>>>> at
>>>>>> javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1175)
>>>>>> at
>>>>>> javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
>>>>>> at
>>>>>> javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
>>>>>> at
>>>>>> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>>>>>> at
>>>>>> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
>>>>>> at
>>>>>> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
>>>>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>>>>> Method)
>>>>>> at
>>>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>>>>>>
>>>>>> On Mon, Nov 16, 2015 at 4:47 PM, Jeff Zhang <zj...@gmail.com> wrote:
>>>>>>
>>>>>>> It's about the Datanucleus-related jars, which are needed by Spark
>>>>>>> SQL. Without these jars, I cannot call the DataFrame-related API
>>>>>>> (I have HiveContext enabled).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 16, 2015 at 4:10 PM, Josh Rosen <
>>>>>>> joshrosen@databricks.com> wrote:
>>>>>>>
>>>>>>>> As of https://github.com/apache/spark/pull/9575, Spark's build
>>>>>>>> will no longer place every dependency JAR into lib_managed. Can you say
>>>>>>>> more about how this affected spark-shell for you (maybe share a stacktrace)?
>>>>>>>>
>>>>>>>> On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Sometimes the jars under lib_managed are missing, and even after I
>>>>>>>>> rebuild Spark the jars under lib_managed are still not downloaded.
>>>>>>>>> This causes spark-shell to fail because of the missing jars. Has
>>>>>>>>> anyone hit this weird issue?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Best Regards
>>>>>>>>>
>>>>>>>>> Jeff Zhang
>>>>>>>>>


-- 
Best Regards

Jeff Zhang
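
The exception quoted above is, at bottom, a by-name class-lookup failure: javax.jdo resolves the Datanucleus persistence-manager factory class by name at runtime, and when the datanucleus jars are absent from the classpath the lookup aborts with "Class ... was not found". A minimal Java sketch of such a probe (the helper class and method names here are illustrative, not from Spark or Hive):

```java
// Probes whether a class can be resolved by name at runtime. This is the
// kind of lookup that fails with "Class ... was not found" when the
// datanucleus jars are missing from lib_managed/jars.
public class ClassProbe {
    public static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Present in any JRE:
        System.out.println(isPresent("java.lang.String"));
        // Present only when the datanucleus jars are on the classpath:
        System.out.println(isPresent(
            "org.datanucleus.api.jdo.JDOPersistenceManagerFactory"));
    }
}
```

This is why the failure only surfaces when HiveContext is initialized: the metastore is the first code path that asks for the Datanucleus class.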

Re: Does anyone meet the issue that jars under lib_managed are never downloaded?

Posted by Josh Rosen <jo...@databricks.com>.
Can you file a JIRA issue to help me triage this further? Thanks!

On Tue, Nov 17, 2015 at 4:08 PM Jeff Zhang <zj...@gmail.com> wrote:

> Sure, hive profile is enabled.

Re: Does anyone meet the issue that jars under lib_managed are never downloaded?

Posted by Jeff Zhang <zj...@gmail.com>.
Sure, hive profile is enabled.

On Wed, Nov 18, 2015 at 6:12 AM, Josh Rosen <jo...@databricks.com>
wrote:

> Is the Hive profile enabled? I think it may need to be turned on in order
> for those JARs to be deployed.


-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed are never downloaded?

Posted by Josh Rosen <jo...@databricks.com>.
Is the Hive profile enabled? I think it may need to be turned on in order
for those JARs to be deployed.
On Tue, Nov 17, 2015 at 2:27 AM Jeff Zhang <zj...@gmail.com> wrote:

> BTW, After I revert  SPARK-7841, I can see all the jars under
> lib_managed/jars

Re: Does anyone meet the issue that jars under lib_managed are never downloaded?

Posted by Jeff Zhang <zj...@gmail.com>.
BTW, after I revert SPARK-7841, I can see all the jars under
lib_managed/jars.

On Tue, Nov 17, 2015 at 2:46 PM, Jeff Zhang <zj...@gmail.com> wrote:

> Hi Josh,
>
> I noticed the comments in https://github.com/apache/spark/pull/9575 saying
> that the Datanucleus-related jars will still be copied to lib_managed/jars,
> but I don't see any jars under lib_managed/jars. The weird thing is that I
> see the jars on another machine, but not on my laptop, even after I delete
> the whole Spark project and start from scratch. Could it be related to the
> environment? I added the following code to SparkBuild.scala to track the
> issue, and it shows that the jars list is empty. Any thoughts on that?
>
>
> deployDatanucleusJars := {
>   val jars: Seq[File] = (fullClasspath in assembly).value.map(_.data)
>     .filter(_.getPath.contains("org.datanucleus"))
>   // this is what I added for debugging
>   println("*********************************************")
>   println("fullClasspath:" + fullClasspath)
>   println("assembly:" + assembly)
>   println("jars:" + jars.map(_.getAbsolutePath()).mkString(","))
>   //



-- 
Best Regards

Jeff Zhang
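
The deployDatanucleusJars task quoted above selects jars by substring-matching "org.datanucleus" against each classpath entry's path (ivy cache paths include the organization name as a directory). The same filtering step can be sketched in plain Java; the cache paths below are hypothetical examples, not real build output:

```java
import java.util.List;
import java.util.stream.Collectors;

// Mirrors the `.filter(_.getPath.contains("org.datanucleus"))` step of the
// sbt task: keep only classpath entries whose path mentions the
// Datanucleus organization.
public class DatanucleusFilter {
    public static List<String> datanucleusJars(List<String> classpath) {
        return classpath.stream()
                .filter(path -> path.contains("org.datanucleus"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical ivy-cache paths for illustration.
        List<String> cp = List.of(
            "/home/user/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-3.2.10.jar",
            "/home/user/.ivy2/cache/org.apache.hadoop/hadoop-client/jars/hadoop-client-2.2.0.jar");
        System.out.println(datanucleusJars(cp));
    }
}
```

If this filter returns an empty list during the build (as in the debug output above), nothing is copied to lib_managed/jars, which matches the symptom reported in this thread.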

>>
>> On Mon, Nov 16, 2015 at 4:47 PM, Jeff Zhang <zj...@gmail.com> wrote:
>>
>>> It's about the datanucleus related jars which is needed by spark sql.
>>> Without these jars, I could not call data frame related api ( I make
>>> HiveContext enabled)
>>>
>>>
>>>
>>> On Mon, Nov 16, 2015 at 4:10 PM, Josh Rosen <jo...@databricks.com>
>>> wrote:
>>>
>>>> As of https://github.com/apache/spark/pull/9575, Spark's build will no
>>>> longer place every dependency JAR into lib_managed. Can you say more about
>>>> how this affected spark-shell for you (maybe share a stacktrace)?
>>>>
>>>> On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com> wrote:
>>>>
>>>>>
>>>>> Sometimes, the jars under lib_managed is missing. And after I rebuild
>>>>> the spark, the jars under lib_managed is still not downloaded. This would
>>>>> cause the spark-shell fail due to jars missing. Anyone has hit this weird
>>>>> issue ?
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards
>>>>>
>>>>> Jeff Zhang
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards
>>>
>>> Jeff Zhang
>>>
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed is never downloaded ?

Posted by Jeff Zhang <zj...@gmail.com>.
Hi Josh,

I notice the comments in https://github.com/apache/spark/pull/9575 say
that the DataNucleus-related jars will still be copied to
lib_managed/jars, but I don't see any jars under lib_managed/jars. The
weird thing is that I see the jars on another machine but not on my
laptop, even after I delete the whole Spark project and start from
scratch. Could it be related to the environment? I added the following
code to SparkBuild.scala to track the issue down, and it shows that the
jars sequence is empty. Any thoughts on that?


deployDatanucleusJars := {
      val jars: Seq[File] = (fullClasspath in assembly).value.map(_.data)
        .filter(_.getPath.contains("org.datanucleus"))
      // this is what I added
      println("*********************************************")
      println("fullClasspath:"+fullClasspath)
      println("assembly:"+assembly)
      println("jars:"+jars.map(_.getAbsolutePath()).mkString(","))
      //

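The substring filter in the snippet above can be exercised on its own.
Below is a minimal, self-contained Scala sketch; the classpath entries
are hypothetical examples, not real build output. It illustrates one way
the filtered list could come back empty: an Ivy-cache path contains the
literal string "org.datanucleus", while a Maven-repository-style path
("org/datanucleus") does not match the same substring check.

```scala
// Standalone sketch of the filter used in the deployDatanucleusJars
// snippet above. The paths below are hypothetical examples.
val classpath: Seq[String] = Seq(
  // Ivy layout: organisation appears as a dotted directory name.
  "/home/user/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-3.2.10.jar",
  // Maven layout: organisation is split into nested directories.
  "/home/user/.m2/repository/org/datanucleus/datanucleus-core/3.2.10/datanucleus-core-3.2.10.jar",
  "/home/user/.ivy2/cache/org.apache.hadoop/hadoop-client/jars/hadoop-client-2.2.0.jar"
)

// Same predicate as the build snippet: plain substring match.
val datanucleusJars: Seq[String] = classpath.filter(_.contains("org.datanucleus"))

// Only the Ivy-style path matches; the Maven-style path is silently dropped.
datanucleusJars.foreach(println)
```

If the jars happen to be resolved from a repository whose on-disk layout
does not contain the dotted organisation name, this filter would return
an empty sequence, which matches the symptom reported above.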

On Mon, Nov 16, 2015 at 4:51 PM, Jeff Zhang <zj...@gmail.com> wrote:

> This is the exception I got
>
> 15/11/16 16:50:48 WARN metastore.HiveMetaStore: Retrying creating default
> database after error: Class
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
> javax.jdo.JDOFatalUserException: Class
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
> at
> javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1175)
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
> at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
> at
> org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
> at
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
>
> On Mon, Nov 16, 2015 at 4:47 PM, Jeff Zhang <zj...@gmail.com> wrote:
>
>> It's about the datanucleus related jars which is needed by spark sql.
>> Without these jars, I could not call data frame related api ( I make
>> HiveContext enabled)
>>
>>
>>
>> On Mon, Nov 16, 2015 at 4:10 PM, Josh Rosen <jo...@databricks.com>
>> wrote:
>>
>>> As of https://github.com/apache/spark/pull/9575, Spark's build will no
>>> longer place every dependency JAR into lib_managed. Can you say more about
>>> how this affected spark-shell for you (maybe share a stacktrace)?
>>>
>>> On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com> wrote:
>>>
>>>>
>>>> Sometimes, the jars under lib_managed is missing. And after I rebuild
>>>> the spark, the jars under lib_managed is still not downloaded. This would
>>>> cause the spark-shell fail due to jars missing. Anyone has hit this weird
>>>> issue ?
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>>
>>>> Jeff Zhang
>>>>
>>>
>>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed is never downloaded ?

Posted by Jeff Zhang <zj...@gmail.com>.
This is the exception I got

15/11/16 16:50:48 WARN metastore.HiveMetaStore: Retrying creating default
database after error: Class
org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
javax.jdo.JDOFatalUserException: Class
org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
at
javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1175)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at
org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at
org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at
org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at
org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at
org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)

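The JDOFatalUserException in the trace above just means that the
DataNucleus JDO implementation class could not be loaded. As a quick
sanity check, the class name from the stack trace can be probed
directly; this is a generic sketch, not something taken from the thread:

```scala
// Returns true if the named class can be loaded by the current
// classloader, false if it is absent from the classpath.
def isOnClasspath(className: String): Boolean =
  try {
    Class.forName(className)
    true
  } catch {
    case _: ClassNotFoundException => false
  }

// Class name copied verbatim from the stack trace above.
println(isOnClasspath("org.datanucleus.api.jdo.JDOPersistenceManagerFactory"))
```

Run from spark-shell, this prints false exactly when the DataNucleus
jars are missing from the classpath, before the metastore fails.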
On Mon, Nov 16, 2015 at 4:47 PM, Jeff Zhang <zj...@gmail.com> wrote:

> It's about the datanucleus related jars which is needed by spark sql.
> Without these jars, I could not call data frame related api ( I make
> HiveContext enabled)
>
>
>
> On Mon, Nov 16, 2015 at 4:10 PM, Josh Rosen <jo...@databricks.com>
> wrote:
>
>> As of https://github.com/apache/spark/pull/9575, Spark's build will no
>> longer place every dependency JAR into lib_managed. Can you say more about
>> how this affected spark-shell for you (maybe share a stacktrace)?
>>
>> On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com> wrote:
>>
>>>
>>> Sometimes, the jars under lib_managed is missing. And after I rebuild
>>> the spark, the jars under lib_managed is still not downloaded. This would
>>> cause the spark-shell fail due to jars missing. Anyone has hit this weird
>>> issue ?
>>>
>>>
>>>
>>> --
>>> Best Regards
>>>
>>> Jeff Zhang
>>>
>>
>>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed is never downloaded ?

Posted by Jeff Zhang <zj...@gmail.com>.
It's about the DataNucleus-related jars, which are needed by Spark SQL.
Without these jars, I could not call the DataFrame-related APIs (I have
HiveContext enabled).



On Mon, Nov 16, 2015 at 4:10 PM, Josh Rosen <jo...@databricks.com>
wrote:

> As of https://github.com/apache/spark/pull/9575, Spark's build will no
> longer place every dependency JAR into lib_managed. Can you say more about
> how this affected spark-shell for you (maybe share a stacktrace)?
>
> On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com> wrote:
>
>>
>> Sometimes, the jars under lib_managed is missing. And after I rebuild the
>> spark, the jars under lib_managed is still not downloaded. This would cause
>> the spark-shell fail due to jars missing. Anyone has hit this weird issue ?
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>


-- 
Best Regards

Jeff Zhang

Re: Does anyone meet the issue that jars under lib_managed is never downloaded ?

Posted by Josh Rosen <jo...@databricks.com>.
As of https://github.com/apache/spark/pull/9575, Spark's build will no
longer place every dependency JAR into lib_managed. Can you say more about
how this affected spark-shell for you (maybe share a stacktrace)?

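For what it's worth, whether the build populated lib_managed/jars can be
checked with a few lines of Scala. The directory name comes from this
thread; everything else below is an illustrative sketch, and what (if
anything) lands there depends on the build profile, as discussed above.

```scala
import java.io.File

// Lists the .jar files under a directory, returning an empty sequence
// when the directory does not exist (File.listFiles returns null then).
def listJars(dir: String): Seq[String] =
  Option(new File(dir).listFiles)
    .map(_.toSeq.map(_.getName).filter(_.endsWith(".jar")))
    .getOrElse(Seq.empty)

// An empty result here reproduces the symptom reported in this thread.
println(listJars("lib_managed/jars").mkString(","))
```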
On Mon, Nov 16, 2015 at 12:03 AM, Jeff Zhang <zj...@gmail.com> wrote:

>
> Sometimes, the jars under lib_managed is missing. And after I rebuild the
> spark, the jars under lib_managed is still not downloaded. This would cause
> the spark-shell fail due to jars missing. Anyone has hit this weird issue ?
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>