Posted to dev@spark.apache.org by Nan Zhu <zh...@gmail.com> on 2014/07/13 14:56:37 UTC

how to run the program compiled with spark 1.0.0 in the branch-0.1-jdbc cluster

Hi, all  

I’m trying the JDBC server, so the cluster is running the version compiled from branch-0.1-jdbc  

Unfortunately (and as expected), it cannot run programs compiled against the spark 1.0 dependency (i.e. downloaded from Maven).

1. The first error I met was a serialVersionUID mismatch in ExecuterStatus

I resolved it by explicitly declaring a serialVersionUID in ExecuterStatus.scala and recompiling branch-0.1-jdbc
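
For reference, the fix looks roughly like this (a sketch only; the class body below is a placeholder, not the real definition, and the UID value is arbitrary as long as both builds agree on it):

// Pin a fixed serialVersionUID so that jars compiled from different
// branches can still deserialize each other's instances.
@SerialVersionUID(1L)
class ExecuterStatus(val state: String) extends Serializable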

2. Then I started the program compiled against spark-1.0, and what I got was:

14/07/13 05:08:11 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@172.31.*.*:*: java.util.NoSuchElementException: key not found: 6  
14/07/13 05:08:11 WARN AppClient$ClientActor: Connection to akka.tcp://sparkMaster@172.31.*.*:* failed; waiting for master to reconnect...



I don’t understand where "key not found: 6" comes from



I also tried to start the JDBC server against a spark-1.0 cluster. After resolving the serialVersionUID mismatch, what I hit was that when I ran “show tables;” from beeline, some executors got lost and tasks failed for an unknown reason.
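
If it helps anyone reproduce, what beeline does here is essentially the plain JDBC call below. This is a sketch only: the host and port are placeholders for the Thrift server endpoint, and hive-jdbc needs to be on the classpath.

import java.sql.DriverManager

// Run "show tables" against the HiveServer2-compatible JDBC endpoint,
// the same way beeline does. Host and port below are placeholders.
object ShowTables {
  def main(args: Array[String]): Unit = {
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "")
    try {
      val rs = conn.createStatement().executeQuery("show tables")
      while (rs.next()) println(rs.getString(1))
    } finally {
      conn.close()
    }
  }
}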

Can anyone give some suggestions on how to make a spark-1.0 cluster work with the JDBC server?

(Maybe I need to set up an internal Maven repo and point all Spark dependencies at it?)

Best,

--  
Nan Zhu


Re: how to run the program compiled with spark 1.0.0 in the branch-0.1-jdbc cluster

Posted by Nan Zhu <zh...@gmail.com>.
I resolved the issue by setting up an internal Maven repository containing the Spark 1.0.1 jar compiled from branch-0.1-jdbc, and replacing the dependency on the central repository with our own repository.
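
In sbt terms it amounts to something like the sketch below; the repository URL is a placeholder for our internal server, and the version string is whatever the branch build was published under:

// build.sbt -- sketch only; the resolver URL is a placeholder.
resolvers += "internal-repo" at "https://repo.example.com/maven/releases"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1"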

I believe there should be a more lightweight way, though.

Best, 

-- 
Nan Zhu


On Monday, July 14, 2014 at 6:36 AM, Nan Zhu wrote:

> Ah, sorry, sorry
> 
> It's ExecutorState under the deploy package
> 
> On Monday, July 14, 2014, Patrick Wendell <pwendell@gmail.com> wrote:
> > > 1. The first error I met was a serialVersionUID mismatch in ExecuterStatus
> > >
> > > I resolved it by explicitly declaring a serialVersionUID in ExecuterStatus.scala and recompiling branch-0.1-jdbc
> > >
> > 
> > I don't think there is a class in Spark named ExecuterStatus (sic) ...
> > or ExecutorStatus. Is this a class you made?


Re: how to run the program compiled with spark 1.0.0 in the branch-0.1-jdbc cluster

Posted by Nan Zhu <zh...@gmail.com>.
Ah, sorry, sorry

It's ExecutorState under the deploy package

On Monday, July 14, 2014, Patrick Wendell <pw...@gmail.com> wrote:

> > 1. The first error I met was a serialVersionUID mismatch in ExecuterStatus
> >
> > I resolved it by explicitly declaring a serialVersionUID in ExecuterStatus.scala and recompiling branch-0.1-jdbc
> >
>
> I don't think there is a class in Spark named ExecuterStatus (sic) ...
> or ExecutorStatus. Is this a class you made?
>

Re: how to run the program compiled with spark 1.0.0 in the branch-0.1-jdbc cluster

Posted by Patrick Wendell <pw...@gmail.com>.
> 1. The first error I met was a serialVersionUID mismatch in ExecuterStatus
>
> I resolved it by explicitly declaring a serialVersionUID in ExecuterStatus.scala and recompiling branch-0.1-jdbc
>

I don't think there is a class in Spark named ExecuterStatus (sic) ...
or ExecutorStatus. Is this a class you made?