Posted to user@spark.apache.org by ge ko <ko...@gmail.com> on 2014/04/15 21:54:32 UTC

Shark: class java.io.IOException: Cannot run program "/bin/java"

Hi,



after starting the shark shell
via /opt/shark/shark-0.9.1/bin/shark-withinfo -skipRddReload I receive a lot
of output, including an exception saying that /bin/java cannot be executed.
But it is linked to /usr/bin/java ?!



root#>ls -al /bin/java

lrwxrwxrwx 1 root root 13 15. Apr 21:45 /bin/java -> /usr/bin/java

root#>/bin/java -version

java version "1.7.0_51"
OpenJDK Runtime Environment (rhel-2.4.4.1.el6_5-x86_64 u51-b02)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
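Note that the failures in the log below occur on the worker nodes (hadoop-pg-8 and hadoop-pg-9), not on the driver, so the symlink check above only proves the driver is fine. A minimal sketch of the same check on each worker, assuming passwordless ssh and the worker hostnames from the log:

```shell
# Verify the JVM path on every worker, not just the driver: the executors
# are launched there and fail with 'Cannot run program "/bin/java"'.
# Hostnames are taken from the log and may differ in your cluster.
for host in hadoop-pg-8.cluster hadoop-pg-9.cluster; do
  ssh "$host" 'ls -l /bin/java && /bin/java -version' \
    || echo "/bin/java is broken or missing on $host"
done
```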



Starting the shark shell:



[root@hadoop-pg-5 bin]# /opt/shark/shark-0.9.1/bin/shark-withinfo
-skipRddReload
-hiveconf hive.root.logger=INFO,console -skipRddReload
Starting the Shark Command Line Client
14/04/15 21:45:57 WARN conf.HiveConf: DEPRECATED: Configuration property
hive.metastore.local no longer has any effect. Make sure to provide a valid
value for hive.metastore.uris if you are connecting to a remote metastore.
14/04/15 21:45:58 WARN conf.HiveConf: DEPRECATED: Configuration property
hive.metastore.local no longer has any effect. Make sure to provide a valid
value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in
jar:file:/opt/shark/shark-0.9.1/lib_managed/jars/edu.berkeley.cs.shark/hive-common/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
14/04/15 21:45:58 INFO SessionState:
Logging initialized using configuration in
jar:file:/opt/shark/shark-0.9.1/lib_managed/jars/edu.berkeley.cs.shark/hive-common/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
Hive history
file=/tmp/root/hive_job_log_root_22574@hadoop-pg-5.cluster_201404152145_159664609.txt
14/04/15 21:45:58 INFO exec.HiveHistory: Hive history
file=/tmp/root/hive_job_log_root_22574@hadoop-pg-5.cluster_201404152145_159664609.txt
14/04/15 21:45:58 WARN conf.HiveConf: DEPRECATED: Configuration property
hive.metastore.local no longer has any effect. Make sure to provide a valid
value for hive.metastore.uris if you are connecting to a remote metastore.
14/04/15 21:45:59 WARN conf.HiveConf: DEPRECATED: Configuration property
hive.metastore.local no longer has any effect. Make sure to provide a valid
value for hive.metastore.uris if you are connecting to a remote metastore.
14/04/15 21:46:00 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/04/15 21:46:00 INFO Remoting: Starting remoting
14/04/15 21:46:00 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://spark@hadoop-pg-5.cluster:38835]
14/04/15 21:46:00 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://spark@hadoop-pg-5.cluster:38835]
14/04/15 21:46:00 INFO spark.SparkEnv: Registering BlockManagerMaster
5,108: [GC 262656K->26899K(1005568K), 0,0409080 secs]
14/04/15 21:46:00 INFO storage.DiskBlockManager: Created local directory at
/tmp/spark-local-20140415214600-9537
14/04/15 21:46:00 INFO storage.MemoryStore: MemoryStore started with
capacity 589.2 MB.
14/04/15 21:46:00 INFO network.ConnectionManager: Bound socket to port
51889 with id = ConnectionManagerId(hadoop-pg-5.cluster,51889)
14/04/15 21:46:00 INFO storage.BlockManagerMaster: Trying to register
BlockManager
14/04/15 21:46:00 INFO storage.BlockManagerMasterActor$BlockManagerInfo:
Registering block manager hadoop-pg-5.cluster:51889 with 589.2 MB RAM
14/04/15 21:46:00 INFO storage.BlockManagerMaster: Registered BlockManager
14/04/15 21:46:00 INFO spark.HttpServer: Starting HTTP Server
14/04/15 21:46:00 INFO server.Server: jetty-7.6.8.v20121106
14/04/15 21:46:00 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:59414
14/04/15 21:46:00 INFO broadcast.HttpBroadcast: Broadcast server started at
http://10.147.210.5:59414
14/04/15 21:46:01 INFO spark.SparkEnv: Registering MapOutputTracker
14/04/15 21:46:01 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-cf56ada9-d950-4abc-a1c3-76fecdc4faa3
14/04/15 21:46:01 INFO spark.HttpServer: Starting HTTP Server
14/04/15 21:46:01 INFO server.Server: jetty-7.6.8.v20121106
14/04/15 21:46:01 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:45689
14/04/15 21:46:01 INFO server.Server: jetty-7.6.8.v20121106
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/storage/rdd,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/storage,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages/stage,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages/pool,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/environment,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/executors,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/metrics/json,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/static,null}
14/04/15 21:46:01 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/,null}
14/04/15 21:46:01 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
14/04/15 21:46:01 INFO ui.SparkUI: Started Spark Web UI at
http://hadoop-pg-5.cluster:4040
14/04/15 21:46:01 INFO client.AppClient$ClientActor: Connecting to master
spark://hadoop-pg-5.cluster:7077...
14/04/15 21:46:01 WARN shark.SharkEnv: Hive Hadoop shims detected local
mode, but Shark is not running locally.
14/04/15 21:46:01 WARN shark.SharkEnv: Setting mapred.job.tracker to
'Spark_1397591161823' (was 'local')
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Connected to
Spark cluster with app ID app-20140415214602-0010
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/0 on worker-20140414105148-hadoop-pg-8.cluster-7078
(hadoop-pg-8.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/0 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/1 on worker-20140414105202-hadoop-pg-9.cluster-7078
(hadoop-pg-9.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/1 on hostPort hadoop-pg-9.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/1 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/0 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/1 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/1"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/1 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/1"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/2 on worker-20140414105202-hadoop-pg-9.cluster-7078
(hadoop-pg-9.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/2 on hostPort hadoop-pg-9.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/0 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/0"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/0 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/0"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/3 on worker-20140414105148-hadoop-pg-8.cluster-7078
(hadoop-pg-8.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/3 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/3 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/2 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/3 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/3"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/3 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/3"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/4 on worker-20140414105148-hadoop-pg-8.cluster-7078
(hadoop-pg-8.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/4 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/2 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/2"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/2 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/2"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/5 on worker-20140414105202-hadoop-pg-9.cluster-7078
(hadoop-pg-9.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/5 on hostPort hadoop-pg-9.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/4 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/5 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/4 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/4"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/4 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/4"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/6 on worker-20140414105148-hadoop-pg-8.cluster-7078
(hadoop-pg-8.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/6 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/5 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/5"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/5 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/5"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/7 on worker-20140414105202-hadoop-pg-9.cluster-7078
(hadoop-pg-9.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/7 on hostPort hadoop-pg-9.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/7 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/6 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/6 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/6"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/6 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/6"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/8 on worker-20140414105148-hadoop-pg-8.cluster-7078
(hadoop-pg-8.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/8 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/7 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/7"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/7 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/7"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/9 on worker-20140414105202-hadoop-pg-9.cluster-7078
(hadoop-pg-9.cluster:7078) with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/9 on hostPort hadoop-pg-9.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/8 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/9 is now RUNNING
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/8 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/8"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/8 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/8"): error=2, No such file or
directory
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
app-20140415214602-0010/10 on
worker-20140414105148-hadoop-pg-8.cluster-7078 (hadoop-pg-8.cluster:7078)
with 2 cores
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
executor ID app-20140415214602-0010/10 on hostPort hadoop-pg-8.cluster:7078
with 2 cores, 2.0 GB RAM
14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
app-20140415214602-0010/9 is now FAILED (class java.io.IOException: Cannot
run program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/9"): error=2, No such file or
directory)
14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
app-20140415214602-0010/9 removed: class java.io.IOException: Cannot run
program "/bin/java" (in directory
"/var/run/spark/work/app-20140415214602-0010/9"): error=2, No such file or
directory
14/04/15 21:46:02 ERROR client.AppClient$ClientActor: Master removed our
application: FAILED; stopping client
14/04/15 21:46:02 WARN cluster.SparkDeploySchedulerBackend: Disconnected
from Spark cluster! Waiting for reconnection...



I do not understand that Java error. Any hints?



br, Gerd



using Cloudera Manager and CDH 5 incl. the Spark parcel

Re: Shark: class java.io.IOException: Cannot run program "/bin/java"

Posted by Gerd Koenig <ko...@googlemail.com>.
thanks Arpit, gotcha ;)


On 16 April 2014 20:08, Arpit Tak <ar...@gmail.com> wrote:

> just set your JAVA_HOME properly
>
> export JAVA_HOME=/usr/lib/jvm/java-7-..... (something like this... whatever
> version you have installed)
>
> it will work....
>
> Regards,
> Arpit
>
>
> On Wed, Apr 16, 2014 at 1:24 AM, ge ko <ko...@gmail.com> wrote:
>> application: FAILED; stopping client
>> 14/04/15 21:46:02 WARN cluster.SparkDeploySchedulerBackend: Disconnected
>> from Spark cluster! Waiting for reconnection...
>>
>>
>>
>> I do not understand that java error, any hints ?!?!
>>
>>
>>
>> br, Gerd
>>
>>
>>
>> using CM,CDH5 incl. Spark parcel
>>
>
>

Re: Shark: class java.io.IOException: Cannot run program "/bin/java"

Posted by Arpit Tak <ar...@gmail.com>.
Just set your JAVA_HOME properly, on every worker node:

export JAVA_HOME=/usr/lib/jvm/java-7-..... (somewhat like this... whatever
version you have installed)

and it should work.
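For a standalone/CDH deployment, a minimal sketch of the fix (the JDK path
and the spark-env.sh location are assumptions; adjust to your nodes):

```shell
# Sketch, assuming an OpenJDK 7 install path; run on every worker node.
# The "/bin/java" in the exception suggests JAVA_HOME resolves to an
# empty string in the worker daemon's environment, so the daemon tries
# to exec "" + "/bin/java".
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export JAVA_HOME

# Persist it for the Spark daemons; CDH parcels typically read
# /etc/spark/conf/spark-env.sh (written to a local example file here):
echo "export JAVA_HOME=$JAVA_HOME" > spark-env.sh.example
cat spark-env.sh.example
```

Note that setting it only on the driver host is likely not enough: the
failures come from the worker hosts (hadoop-pg-8/9), so the variable has
to be visible to the worker daemons there, and the daemons restarted.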

Regards,
Arpit


On Wed, Apr 16, 2014 at 1:24 AM, ge ko <ko...@gmail.com> wrote:

> Hi,
>
>
>
> after starting the shark-shell
> via /opt/shark/shark-0.9.1/bin/shark-withinfo -skipRddReload I receive lots
> of output, including the exception that /bin/java cannot be executed. But
> it is linked to /usr/bin/java ?!?!
>
>
>
> root#>ls -al /bin/java
>
> lrwxrwxrwx 1 root root 13 15. Apr 21:45 /bin/java -> /usr/bin/java
>
> root#>/bin/java -version
>
> java version "1.7.0_51"
> OpenJDK Runtime Environment (rhel-2.4.4.1.el6_5-x86_64 u51-b02)
> OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
>
>
>
> Starting the shark shell:
>
>
>
> [root@hadoop-pg-5 bin]# /opt/shark/shark-0.9.1/bin/shark-withinfo
> -skipRddReload
> -hiveconf hive.root.logger=INFO,console -skipRddReload
> Starting the Shark Command Line Client
> 14/04/15 21:45:57 WARN conf.HiveConf: DEPRECATED: Configuration property
> hive.metastore.local no longer has any effect. Make sure to provide a valid
> value for hive.metastore.uris if you are connecting to a remote metastore.
> 14/04/15 21:45:58 WARN conf.HiveConf: DEPRECATED: Configuration property
> hive.metastore.local no longer has any effect. Make sure to provide a valid
> value for hive.metastore.uris if you are connecting to a remote metastore.
>
> Logging initialized using configuration in
> jar:file:/opt/shark/shark-0.9.1/lib_managed/jars/edu.berkeley.cs.shark/hive-common/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
> 14/04/15 21:45:58 INFO SessionState:
> Logging initialized using configuration in
> jar:file:/opt/shark/shark-0.9.1/lib_managed/jars/edu.berkeley.cs.shark/hive-common/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
> Hive history
> file=/tmp/root/hive_job_log_root_22574@hadoop-pg-5.cluster_201404152145_159664609.txt
> 14/04/15 21:45:58 INFO exec.HiveHistory: Hive history
> file=/tmp/root/hive_job_log_root_22574@hadoop-pg-5.cluster_201404152145_159664609.txt
> 14/04/15 21:45:58 WARN conf.HiveConf: DEPRECATED: Configuration property
> hive.metastore.local no longer has any effect. Make sure to provide a valid
> value for hive.metastore.uris if you are connecting to a remote metastore.
> 14/04/15 21:45:59 WARN conf.HiveConf: DEPRECATED: Configuration property
> hive.metastore.local no longer has any effect. Make sure to provide a valid
> value for hive.metastore.uris if you are connecting to a remote metastore.
> 14/04/15 21:46:00 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 14/04/15 21:46:00 INFO Remoting: Starting remoting
> 14/04/15 21:46:00 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://spark@hadoop-pg-5.cluster:38835]
> 14/04/15 21:46:00 INFO Remoting: Remoting now listens on addresses:
> [akka.tcp://spark@hadoop-pg-5.cluster:38835]
> 14/04/15 21:46:00 INFO spark.SparkEnv: Registering BlockManagerMaster
> 5,108: [GC 262656K->26899K(1005568K), 0,0409080 secs]
> 14/04/15 21:46:00 INFO storage.DiskBlockManager: Created local directory
> at /tmp/spark-local-20140415214600-9537
> 14/04/15 21:46:00 INFO storage.MemoryStore: MemoryStore started with
> capacity 589.2 MB.
> 14/04/15 21:46:00 INFO network.ConnectionManager: Bound socket to port
> 51889 with id = ConnectionManagerId(hadoop-pg-5.cluster,51889)
> 14/04/15 21:46:00 INFO storage.BlockManagerMaster: Trying to register
> BlockManager
> 14/04/15 21:46:00 INFO storage.BlockManagerMasterActor$BlockManagerInfo:
> Registering block manager hadoop-pg-5.cluster:51889 with 589.2 MB RAM
> 14/04/15 21:46:00 INFO storage.BlockManagerMaster: Registered BlockManager
> 14/04/15 21:46:00 INFO spark.HttpServer: Starting HTTP Server
> 14/04/15 21:46:00 INFO server.Server: jetty-7.6.8.v20121106
> 14/04/15 21:46:00 INFO server.AbstractConnector: Started
> SocketConnector@0.0.0.0:59414
> 14/04/15 21:46:00 INFO broadcast.HttpBroadcast: Broadcast server started
> at http://10.147.210.5:59414
> 14/04/15 21:46:01 INFO spark.SparkEnv: Registering MapOutputTracker
> 14/04/15 21:46:01 INFO spark.HttpFileServer: HTTP File server directory is
> /tmp/spark-cf56ada9-d950-4abc-a1c3-76fecdc4faa3
> 14/04/15 21:46:01 INFO spark.HttpServer: Starting HTTP Server
> 14/04/15 21:46:01 INFO server.Server: jetty-7.6.8.v20121106
> 14/04/15 21:46:01 INFO server.AbstractConnector: Started
> SocketConnector@0.0.0.0:45689
> 14/04/15 21:46:01 INFO server.Server: jetty-7.6.8.v20121106
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/storage/rdd,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/storage,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/stages/stage,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/stages/pool,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/stages,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/environment,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/executors,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/metrics/json,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/static,null}
> 14/04/15 21:46:01 INFO handler.ContextHandler: started
> o.e.j.s.h.ContextHandler{/,null}
> 14/04/15 21:46:01 INFO server.AbstractConnector: Started
> SelectChannelConnector@0.0.0.0:4040
> 14/04/15 21:46:01 INFO ui.SparkUI: Started Spark Web UI at
> http://hadoop-pg-5.cluster:4040
> 14/04/15 21:46:01 INFO client.AppClient$ClientActor: Connecting to master
> spark://hadoop-pg-5.cluster:7077...
> 14/04/15 21:46:01 WARN shark.SharkEnv: Hive Hadoop shims detected local
> mode, but Shark is not running locally.
> 14/04/15 21:46:01 WARN shark.SharkEnv: Setting mapred.job.tracker to
> 'Spark_1397591161823' (was 'local')
> 14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Connected to
> Spark cluster with app ID app-20140415214602-0010
> 14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
> app-20140415214602-0010/0 on worker-20140414105148-hadoop-pg-8.cluster-7078
> (hadoop-pg-8.cluster:7078) with 2 cores
> 14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
> executor ID app-20140415214602-0010/0 on hostPort hadoop-pg-8.cluster:7078
> with 2 cores, 2.0 GB RAM
> 14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor added:
> app-20140415214602-0010/1 on worker-20140414105202-hadoop-pg-9.cluster-7078
> (hadoop-pg-9.cluster:7078) with 2 cores
> 14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Granted
> executor ID app-20140415214602-0010/1 on hostPort hadoop-pg-9.cluster:7078
> with 2 cores, 2.0 GB RAM
> 14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
> app-20140415214602-0010/1 is now RUNNING
> 14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
> app-20140415214602-0010/0 is now RUNNING
> 14/04/15 21:46:02 INFO client.AppClient$ClientActor: Executor updated:
> app-20140415214602-0010/1 is now FAILED (class java.io.IOException: Cannot
> run program "/bin/java" (in directory
> "/var/run/spark/work/app-20140415214602-0010/1"): error=2, No such file or
> directory)
> 14/04/15 21:46:02 INFO cluster.SparkDeploySchedulerBackend: Executor
> app-20140415214602-0010/1 removed: class java.io.IOException: Cannot run
> program "/bin/java" (in directory
> "/var/run/spark/work/app-20140415214602-0010/1"): error=2, No such file or
> directory
> [... identical Executor added / RUNNING / FAILED (java.io.IOException:
> Cannot run program "/bin/java" ... error=2, No such file or directory)
> cycles for executors 2 through 10 snipped ...]
> 14/04/15 21:46:02 ERROR client.AppClient$ClientActor: Master removed our
> application: FAILED; stopping client
> 14/04/15 21:46:02 WARN cluster.SparkDeploySchedulerBackend: Disconnected
> from Spark cluster! Waiting for reconnection...
>
>
>
> I do not understand that java error, any hints ?!?!
>
>
>
> br, Gerd
>
>
>
> using CM,CDH5 incl. Spark parcel
>