Posted to user@spark.apache.org by Pa Rö <pa...@googlemail.com> on 2015/07/23 13:41:56 UTC

Asked to remove non-existent executor exception

hello spark community,

I have built an application with GeoMesa, Accumulo and Spark.
It works in Spark local mode, but not on the Spark cluster.
In short, it fails with: No space left on device. Asked to remove
non-existent executor XY.
I'm confused, because there were many GBs of free space. Do I need to
change my configuration, or what else can I do? Thanks in advance.
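
[Editor's aside, not confirmed as the fix in this thread: the /tmp/spark-* scratch directories visible in the log below default to /tmp, which on some distributions is a small tmpfs. Their location can be moved with the spark.local.dir property, e.g. in conf/spark-defaults.conf:

```properties
# Example only: point Spark's scratch space at a larger filesystem.
# The path /data/spark-tmp is a placeholder, not taken from this thread.
spark.local.dir    /data/spark-tmp
```

In standalone mode, a worker's SPARK_LOCAL_DIRS environment variable, if set, takes precedence over this property.]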

here is the complete exception:

log4j:WARN No appenders could be found for logger
(org.apache.accumulo.fate.zookeeper.ZooSession).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
15/07/23 13:26:39 INFO SparkContext: Running Spark version 1.3.0
15/07/23 13:26:39 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
15/07/23 13:26:39 INFO SecurityManager: Changing view acls to: marcel
15/07/23 13:26:39 INFO SecurityManager: Changing modify acls to: marcel
15/07/23 13:26:39 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(marcel); users
with modify permissions: Set(marcel)
15/07/23 13:26:39 INFO Slf4jLogger: Slf4jLogger started
15/07/23 13:26:40 INFO Remoting: Starting remoting
15/07/23 13:26:40 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@node1-scads02:52478]
15/07/23 13:26:40 INFO Utils: Successfully started service 'sparkDriver' on
port 52478.
15/07/23 13:26:40 INFO SparkEnv: Registering MapOutputTracker
15/07/23 13:26:40 INFO SparkEnv: Registering BlockManagerMaster
15/07/23 13:26:40 INFO DiskBlockManager: Created local directory at
/tmp/spark-ca9319d4-68a2-4add-a21a-48b13ae9cf81/blockmgr-cbf8af23-e113-4732-8c2c-7413ad237b3b
15/07/23 13:26:40 INFO MemoryStore: MemoryStore started with capacity
1916.2 MB
15/07/23 13:26:40 INFO HttpFileServer: HTTP File server directory is
/tmp/spark-9d4a04d5-3535-49e0-a859-d278a0cc7bf8/httpd-1882aafc-45fe-4490-803d-c04fc67510a2
15/07/23 13:26:40 INFO HttpServer: Starting HTTP Server
15/07/23 13:26:40 INFO Server: jetty-8.y.z-SNAPSHOT
15/07/23 13:26:40 INFO AbstractConnector: Started
SocketConnector@0.0.0.0:56499
15/07/23 13:26:40 INFO Utils: Successfully started service 'HTTP file
server' on port 56499.
15/07/23 13:26:40 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/23 13:26:40 INFO Server: jetty-8.y.z-SNAPSHOT
15/07/23 13:26:40 INFO AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
15/07/23 13:26:40 INFO Utils: Successfully started service 'SparkUI' on
port 4040.
15/07/23 13:26:40 INFO SparkUI: Started SparkUI at http://node1-scads02:4040
15/07/23 13:26:40 INFO AppClient$ClientActor: Connecting to master
akka.tcp://sparkMaster@node1-scads02:7077/user/Master...
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Connected to Spark
cluster with app ID app-20150723132640-0000
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
app-20150723132640-0000/0 on worker-20150723132524-node3-scads06-7078
(node3-scads06:7078) with 8 cores
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20150723132640-0000/0 on hostPort node3-scads06:7078 with 8 cores,
512.0 MB RAM
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
app-20150723132640-0000/1 on worker-20150723132513-node2-scads05-7078
(node2-scads05:7078) with 8 cores
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20150723132640-0000/1 on hostPort node2-scads05:7078 with 8 cores,
512.0 MB RAM
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/0 is now RUNNING
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/1 is now RUNNING
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/0 is now LOADING
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/1 is now LOADING
15/07/23 13:26:40 INFO NettyBlockTransferService: Server created on 45786
15/07/23 13:26:40 INFO BlockManagerMaster: Trying to register BlockManager
15/07/23 13:26:40 INFO BlockManagerMasterActor: Registering block manager
node1-scads02:45786 with 1916.2 MB RAM, BlockManagerId(<driver>,
node1-scads02, 45786)
15/07/23 13:26:40 INFO BlockManagerMaster: Registered BlockManager
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/0 is now FAILED (java.io.IOException: No space left
on device)
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Executor
app-20150723132640-0000/0 removed: java.io.IOException: No space left on
device
15/07/23 13:26:40 ERROR SparkDeploySchedulerBackend: Asked to remove
non-existent executor 0
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
app-20150723132640-0000/2 on worker-20150723132524-node3-scads06-7078
(node3-scads06:7078) with 8 cores
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20150723132640-0000/2 on hostPort node3-scads06:7078 with 8 cores,
512.0 MB RAM
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/1 is now FAILED (java.io.IOException: No space left
on device)
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Executor
app-20150723132640-0000/1 removed: java.io.IOException: No space left on
device
15/07/23 13:26:40 ERROR SparkDeploySchedulerBackend: Asked to remove
non-existent executor 1
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
app-20150723132640-0000/3 on worker-20150723132513-node2-scads05-7078
(node2-scads05:7078) with 8 cores
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20150723132640-0000/3 on hostPort node2-scads05:7078 with 8 cores,
512.0 MB RAM
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/2 is now LOADING
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/3 is now LOADING
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/2 is now RUNNING
15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: SchedulerBackend is
ready for scheduling beginning after reached minRegisteredResourcesRatio:
0.0
15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/3 is now RUNNING
15/07/23 13:26:41 INFO AppClient$ClientActor: Executor updated:
app-20150723132640-0000/2 is now FAILED (java.io.IOException: No space left
on device)
15/07/23 13:26:41 INFO SparkDeploySchedulerBackend: Executor
app-20150723132640-0000/2 removed: java.io.IOException: No space left on
device
15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
non-existent executor 2...

Re: Asked to remove non-existent executor exception

Posted by Ted Yu <yu...@gmail.com>.
If I read the code correctly, that error message comes
from CoarseGrainedSchedulerBackend.

There may be existing or future error messages from that class, other than
the one cited below, which are useful. Maybe change the log level of this
particular message to DEBUG?

Cheers
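
[Editor's note: Mridul's follow-up below makes the same point — rather than patching the source, the logger can be silenced in log4j.properties. A minimal sketch, assuming the Spark 1.3 class path for the backend; verify the logger name against your build:

```properties
# Suppress the repeated "Asked to remove non-existent executor" ERROR.
# OFF drops *all* messages from this class, which is the trade-off noted
# above: other, useful errors from the same logger are lost too.
log4j.logger.org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend=OFF
```
]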

On Sun, Jul 26, 2015 at 3:28 PM, Mridul Muralidharan <mr...@gmail.com>
wrote:

> Simply customize your log4j config instead of modifying the code if you
> don't want messages from that class.
>
>
> Regards
> Mridul
>
> On Sunday, July 26, 2015, Sea <26...@qq.com> wrote:
>
>> This exception is so ugly!!!  The screen is full of these messages when
>> the program runs a long time, and they do not fail the job.
>>
>> I commented it out in the source code. I think this message is useless
>> because the executor is already removed, and I don't know what the
>> executor id means.
>>
>> Should we remove this message entirely?
>>
>>
>>
>>  15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
>> non-existent executor 2...
>>
>> 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
>> non-existent executor 2...
>>
>>
>>
>>
>>
>>
>>
>> ------------------ Original Message ------------------
>>  *From:* "Ted Yu" <yu...@gmail.com>;
>> *Sent:* Sunday, July 26, 2015, 10:51 PM
>> *To:* "Pa Rö" <pa...@googlemail.com>;
>> *Cc:* "user" <us...@spark.apache.org>;
>> *Subject:* Re: Asked to remove non-existent executor exception
>>
>> You can list the files in tmpfs in reverse chronological order and remove
>> the oldest until you have enough space.
>>
>> Cheers
>>
>> On Sun, Jul 26, 2015 at 12:43 AM, Pa Rö <pa...@googlemail.com>
>> wrote:
>>
>>> I have seen that the tmpfs is full; how can I clear it?
>>>
>>> 2015-07-23 13:41 GMT+02:00 Pa Rö <pa...@googlemail.com>:
>>>
>>>> [original message and full log snipped; see the first post above]
>>>
>>
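
[Editor's note: Ted's quoted advice above — list the files in tmpfs in reverse chronological order and remove the oldest — can be sketched in shell. The /tmp/spark-* pattern matches the scratch paths in the log; adjust it if your tmpfs is mounted elsewhere:

```shell
# Check both blocks and inodes; "No space left on device" can mean
# either is exhausted even when `df -h` shows free gigabytes.
df -h /tmp
df -i /tmp

# Leftover Spark scratch directories, oldest first (-t by mtime, -r reversed):
ls -dtr /tmp/spark-* 2>/dev/null

# Remove the single oldest one (inspect the list before deleting):
oldest=$(ls -dtr /tmp/spark-* 2>/dev/null | head -n 1)
if [ -n "$oldest" ]; then
  rm -rf "$oldest"
fi
```
]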


Re: Asked to remove non-existent executor exception

Posted by Mridul Muralidharan <mr...@gmail.com>.
Simply customize your log4j config instead of modifying the code if you
don't want messages from that class.


Regards
Mridul

On Sunday, July 26, 2015, Sea <26...@qq.com> wrote:

> This exception is so ugly!!!  The screen is full of these messages when
> the program runs a long time, and they do not fail the job.
>
> I commented it out in the source code. I think this message is useless
> because the executor is already removed, and I don't know what the
> executor id means.
>
> Should we remove this message entirely?
>
>
>
>  15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
> non-existent executor 2...
>
> 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
> non-existent executor 2...
>
>
>
>
>
>
>
> ------------------ Original Message ------------------
>  *From:* "Ted Yu" <yuzhihong@gmail.com>;
> *Sent:* Sunday, July 26, 2015, 10:51 PM
> *To:* "Pa Rö" <paul.roewer1990@googlemail.com>;
> *Cc:* "user" <user@spark.apache.org>;
> *Subject:* Re: Asked to remove non-existent executor exception
>
> You can list the files in tmpfs in reverse chronological order and remove
> the oldest until you have enough space.
>
> Cheers
>
> On Sun, Jul 26, 2015 at 12:43 AM, Pa Rö <paul.roewer1990@googlemail.com> wrote:
>
>> I have seen that the tmpfs is full; how can I clear it?
>>
>> 2015-07-23 13:41 GMT+02:00 Pa Rö <paul.roewer1990@googlemail.com>:
>>
>>> [original message and full log snipped; see the first post above]
>>> app-20150723132640-0000/2 is now RUNNING
>>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: SchedulerBackend is
>>> ready for scheduling beginning after reached minRegisteredResourcesRatio:
>>> 0.0
>>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>>> app-20150723132640-0000/3 is now RUNNING
>>> 15/07/23 13:26:41 INFO AppClient$ClientActor: Executor updated:
>>> app-20150723132640-0000/2 is now FAILED (java.io.IOException: No space left
>>> on device)
>>> 15/07/23 13:26:41 INFO SparkDeploySchedulerBackend: Executor
>>> app-20150723132640-0000/2 removed: java.io.IOException: No space left on
>>> device
>>> 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
>>> non-existent executor 2...
>>>
>>
>>
>

Re: Asked to remove non-existent executor exception

Posted by Mridul Muralidharan <mr...@gmail.com>.
Simply customize your log4j config instead of modifying the code if you don't
want messages from that class.


Regards
Mridul
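
As a sketch of that suggestion: in the log4j.properties used by the driver, raise the threshold of that one logger so the message is dropped without patching Spark. The fully-qualified class name below is an assumption based on the Spark 1.x log lines in this thread; verify it against your own output before relying on it.

```properties
# Hedged example: silence only the "Asked to remove non-existent executor"
# spam by turning off the logger for the class that prints it.
# Verify the class name against your Spark version's log output first.
log4j.logger.org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend=OFF
```

Using FATAL instead of OFF would still let truly fatal messages from that class through.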

On Sunday, July 26, 2015, Sea <26...@qq.com> wrote:

> This exception is so ugly!!! The screen is full of this information when
> the program runs for a long time, and it will not fail the job.
>
> I commented it out in the source code. I think this information is useless
> because the executor is already removed and I don't know what the
> executor id means.
>
> Should we remove this message for good?
>
>
>
>  15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
> non-existent executor 2...
>
> 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
> non-existent executor 2...
>
>
>
>
>
>
>
> ------------------ Original Message ------------------
> *From:* "Ted Yu" <yuzhihong@gmail.com>
> *Sent:* Sunday, July 26, 2015, 10:51 PM
> *To:* "Pa Rö" <paul.roewer1990@googlemail.com>
> *Cc:* "user" <user@spark.apache.org>
> *Subject:* Re: Asked to remove non-existent executor exception
>
> You can list the files in tmpfs in reverse chronological order and remove
> the oldest until you have enough space.
>
> Cheers
>
> On Sun, Jul 26, 2015 at 12:43 AM, Pa Rö <paul.roewer1990@googlemail.com> wrote:
>
>> I have seen that the tmpfs is full; how can I clear it?
>>
>> 2015-07-23 13:41 GMT+02:00 Pa Rö <paul.roewer1990@googlemail.com>:
>>
>>> hello spark community,
>>>
>>> I have built an application with GeoMesa, Accumulo and Spark.
>>> It runs in Spark local mode, but not on the Spark cluster. In short it
>>> says: No space left on device. Asked to remove non-existent executor XY.
>>> I'm confused, because there are many GBs of free space. Do I need to
>>> change my configuration, or what else can I do? Thanks in advance.
>>>
>>> here is the complete exception: [full log snipped; see the original message]
>>>
>>
>>
>

Re: Asked to remove non-existent executor exception

Posted by Sea <26...@qq.com>.
This exception is so ugly!!! The screen is full of this information when the program runs for a long time, and it will not fail the job.

I commented it out in the source code. I think this information is useless because the executor is already removed and I don't know what the executor id means.

Should we remove this message for good?
 
 
 
 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 2...
 
15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 2...






  

 

 ------------------ Original Message ------------------
 From: "Ted Yu" <yu...@gmail.com>
 Sent: Sunday, July 26, 2015, 10:51 PM
 To: "Pa Rö" <pa...@googlemail.com>
 Cc: "user" <us...@spark.apache.org>
 Subject: Re: Asked to remove non-existent executor exception

 

 You can list the files in tmpfs in reverse chronological order and remove the oldest until you have enough space.  

 Cheers

 
 On Sun, Jul 26, 2015 at 12:43 AM, Pa Rö <pa...@googlemail.com> wrote:
I have seen that the tmpfs is full; how can I clear it?
   
 2015-07-23 13:41 GMT+02:00 Pa Rö <pa...@googlemail.com>:
hello spark community,

I have built an application with GeoMesa, Accumulo and Spark.
It runs in Spark local mode, but not on the Spark cluster. In short it says: No space left on device. Asked to remove non-existent executor XY.
I'm confused, because there are many GBs of free space. Do I need to change my configuration, or what else can I do? Thanks in advance.

here is the complete exception: [full log snipped; see the original message]


Re: Asked to remove non-existent executor exception

Posted by Ted Yu <yu...@gmail.com>.
You can list the files in tmpfs in reverse chronological order and remove
the oldest until you have enough space.

Cheers
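
The tip above can be sketched as a small shell routine. For safety the example works on a throwaway directory it creates itself; in practice you would point WORKDIR at the real full mount (e.g. the worker's /tmp tmpfs) and double-check each file before deleting it, since Spark may still hold some of those files open.

```shell
# Sketch: `ls -t` lists entries newest-first, so the tail of the listing
# is the oldest candidates; delete from the tail until space is freed.
# WORKDIR is a demo directory; substitute the real tmpfs mount point.
WORKDIR=$(mktemp -d)
touch -t 201501010000 "$WORKDIR/old-shuffle.tmp"   # stale file (old mtime)
touch "$WORKDIR/current.tmp"                       # fresh file
ls -1t "$WORKDIR"                                  # newest first
oldest=$(ls -1t "$WORKDIR" | tail -n 1)            # pick the oldest entry
rm -- "$WORKDIR/$oldest"                           # reclaim its space
ls -1 "$WORKDIR"                                   # only current.tmp remains
rm -rf "$WORKDIR"                                  # clean up the demo dir
```

Repeat the tail/rm step until `df` reports enough free space on the mount.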

On Sun, Jul 26, 2015 at 12:43 AM, Pa Rö <pa...@googlemail.com>
wrote:

> I have seen that the tmpfs is full; how can I clear it?
>
> 2015-07-23 13:41 GMT+02:00 Pa Rö <pa...@googlemail.com>:
>
>> hello spark community,
>>
>> I have built an application with GeoMesa, Accumulo and Spark.
>> It runs in Spark local mode, but not on the Spark cluster. In short it
>> says: No space left on device. Asked to remove non-existent executor XY.
>> I'm confused, because there are many GBs of free space. Do I need to
>> change my configuration, or what else can I do? Thanks in advance.
>>
>> here is the complete exception:
>>
>> og4j:WARN No appenders could be found for logger
>> (org.apache.accumulo.fate.zookeeper.ZooSession).
>> log4j:WARN Please initialize the log4j system properly.
>> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>> more info.
>> Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>> 15/07/23 13:26:39 INFO SparkContext: Running Spark version 1.3.0
>> 15/07/23 13:26:39 WARN NativeCodeLoader: Unable to load native-hadoop
>> library for your platform... using builtin-java classes where applicable
>> 15/07/23 13:26:39 INFO SecurityManager: Changing view acls to: marcel
>> 15/07/23 13:26:39 INFO SecurityManager: Changing modify acls to: marcel
>> 15/07/23 13:26:39 INFO SecurityManager: SecurityManager: authentication
>> disabled; ui acls disabled; users with view permissions: Set(marcel); users
>> with modify permissions: Set(marcel)
>> 15/07/23 13:26:39 INFO Slf4jLogger: Slf4jLogger started
>> 15/07/23 13:26:40 INFO Remoting: Starting remoting
>> 15/07/23 13:26:40 INFO Remoting: Remoting started; listening on addresses
>> :[akka.tcp://sparkDriver@node1-scads02:52478]
>> 15/07/23 13:26:40 INFO Utils: Successfully started service 'sparkDriver'
>> on port 52478.
>> 15/07/23 13:26:40 INFO SparkEnv: Registering MapOutputTracker
>> 15/07/23 13:26:40 INFO SparkEnv: Registering BlockManagerMaster
>> 15/07/23 13:26:40 INFO DiskBlockManager: Created local directory at
>> /tmp/spark-ca9319d4-68a2-4add-a21a-48b13ae9cf81/blockmgr-cbf8af23-e113-4732-8c2c-7413ad237b3b
>> 15/07/23 13:26:40 INFO MemoryStore: MemoryStore started with capacity
>> 1916.2 MB
>> 15/07/23 13:26:40 INFO HttpFileServer: HTTP File server directory is
>> /tmp/spark-9d4a04d5-3535-49e0-a859-d278a0cc7bf8/httpd-1882aafc-45fe-4490-803d-c04fc67510a2
>> 15/07/23 13:26:40 INFO HttpServer: Starting HTTP Server
>> 15/07/23 13:26:40 INFO Server: jetty-8.y.z-SNAPSHOT
>> 15/07/23 13:26:40 INFO AbstractConnector: Started
>> SocketConnector@0.0.0.0:56499
>> 15/07/23 13:26:40 INFO Utils: Successfully started service 'HTTP file
>> server' on port 56499.
>> 15/07/23 13:26:40 INFO SparkEnv: Registering OutputCommitCoordinator
>> 15/07/23 13:26:40 INFO Server: jetty-8.y.z-SNAPSHOT
>> 15/07/23 13:26:40 INFO AbstractConnector: Started
>> SelectChannelConnector@0.0.0.0:4040
>> 15/07/23 13:26:40 INFO Utils: Successfully started service 'SparkUI' on
>> port 4040.
>> 15/07/23 13:26:40 INFO SparkUI: Started SparkUI at
>> http://node1-scads02:4040
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Connecting to master
>> akka.tcp://sparkMaster@node1-scads02:7077/user/Master...
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Connected to Spark
>> cluster with app ID app-20150723132640-0000
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
>> app-20150723132640-0000/0 on worker-20150723132524-node3-scads06-7078
>> (node3-scads06:7078) with 8 cores
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
>> app-20150723132640-0000/0 on hostPort node3-scads06:7078 with 8 cores,
>> 512.0 MB RAM
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
>> app-20150723132640-0000/1 on worker-20150723132513-node2-scads05-7078
>> (node2-scads05:7078) with 8 cores
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
>> app-20150723132640-0000/1 on hostPort node2-scads05:7078 with 8 cores,
>> 512.0 MB RAM
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/0 is now RUNNING
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/1 is now RUNNING
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/0 is now LOADING
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/1 is now LOADING
>> 15/07/23 13:26:40 INFO NettyBlockTransferService: Server created on 45786
>> 15/07/23 13:26:40 INFO BlockManagerMaster: Trying to register BlockManager
>> 15/07/23 13:26:40 INFO BlockManagerMasterActor: Registering block manager
>> node1-scads02:45786 with 1916.2 MB RAM, BlockManagerId(<driver>,
>> node1-scads02, 45786)
>> 15/07/23 13:26:40 INFO BlockManagerMaster: Registered BlockManager
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/0 is now FAILED (java.io.IOException: No space left
>> on device)
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Executor
>> app-20150723132640-0000/0 removed: java.io.IOException: No space left on
>> device
>> 15/07/23 13:26:40 ERROR SparkDeploySchedulerBackend: Asked to remove
>> non-existent executor 0
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
>> app-20150723132640-0000/2 on worker-20150723132524-node3-scads06-7078
>> (node3-scads06:7078) with 8 cores
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
>> app-20150723132640-0000/2 on hostPort node3-scads06:7078 with 8 cores,
>> 512.0 MB RAM
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/1 is now FAILED (java.io.IOException: No space left
>> on device)
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Executor
>> app-20150723132640-0000/1 removed: java.io.IOException: No space left on
>> device
>> 15/07/23 13:26:40 ERROR SparkDeploySchedulerBackend: Asked to remove
>> non-existent executor 1
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor added:
>> app-20150723132640-0000/3 on worker-20150723132513-node2-scads05-7078
>> (node2-scads05:7078) with 8 cores
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: Granted executor ID
>> app-20150723132640-0000/3 on hostPort node2-scads05:7078 with 8 cores,
>> 512.0 MB RAM
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/2 is now LOADING
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/3 is now LOADING
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/2 is now RUNNING
>> 15/07/23 13:26:40 INFO SparkDeploySchedulerBackend: SchedulerBackend is
>> ready for scheduling beginning after reached minRegisteredResourcesRatio:
>> 0.0
>> 15/07/23 13:26:40 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/3 is now RUNNING
>> 15/07/23 13:26:41 INFO AppClient$ClientActor: Executor updated:
>> app-20150723132640-0000/2 is now FAILED (java.io.IOException: No space left
>> on device)
>> 15/07/23 13:26:41 INFO SparkDeploySchedulerBackend: Executor
>> app-20150723132640-0000/2 removed: java.io.IOException: No space left on
>> device
>> 15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
>> non-existent executor 2...
>>
>
>

Re: Asked to remove non-existent executor exception

Posted by Pa Rö <pa...@googlemail.com>.
I have seen that the "tmpfs" is full; how can I clear it?
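
Not knowing your exact setup, here is a rough sketch of what I would check on each worker, assuming /tmp really is a RAM-backed tmpfs (the /data/spark-tmp path below is only a placeholder, not a recommendation):

```shell
# On each worker, check how full /tmp is. Spark writes its scratch and
# shuffle files there by default (spark.local.dir), and on many Linux
# systems /tmp is a RAM-backed tmpfs that is much smaller than the disks:
df -h /tmp

# Clear leftover scratch directories from earlier crashed runs.
# Only do this while no Spark job is running on that node:
rm -rf /tmp/spark-*

# Longer-term fix: point spark.local.dir at a disk-backed directory in
# conf/spark-defaults.conf on every worker, then restart the workers.
# The path below is only an example:
#
#   spark.local.dir   /data/spark-tmp
```

For standalone workers you can also set SPARK_LOCAL_DIRS in conf/spark-env.sh, which takes precedence over spark.local.dir.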

2015-07-23 13:41 GMT+02:00 Pa Rö <pa...@googlemail.com>:

> hello spark community,
>
> i have build an application with geomesa, accumulo and spark.
> if it run on spark local mode, it is working, but on spark
> cluster not. in short it says: No space left on device. Asked to remove
> non-existent executor XY.
> I´m confused, because there were many GB´s of free space. do i need to
> change my configuration or what else can i do? thanks in advance.