Posted to issues@mesos.apache.org by "Guillermo Rodriguez (JIRA)" <ji...@apache.org> on 2016/03/03 06:16:18 UTC

[jira] [Commented] (MESOS-4842) Sudden framework crash may bring down mesos master

    [ https://issues.apache.org/jira/browse/MESOS-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177194#comment-15177194 ] 

Guillermo Rodriguez commented on MESOS-4842:
--------------------------------------------

OK, ignore the last comment in the issue. The master can crash even if there are no tasks running at all; the crash below happened in a cluster with nothing else running.
{code}
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: I0303 04:53:19.519271    12 master.cpp:5202] Framework failover timeout, removing framework e5b05b4b-56f6-4c55-9171-2f3568d9a277-0006 (swarm) at scheduler(1)@172.31.36.62:3375
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: I0303 04:53:19.519290    12 master.cpp:5933] Removing framework e5b05b4b-56f6-4c55-9171-2f3568d9a277-0006 (swarm) at scheduler(1)@172.31.36.62:3375
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: F0303 04:53:19.519568    12 sorter.cpp:251] Check failed: total_.resources.contains(slaveId)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: *** Check failure stack trace: ***
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201a93d  google::LogMessage::Fail()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201c76d  google::LogMessage::SendToLog()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201a52c  google::LogMessage::Flush()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201d069  google::LogMessageFatal::~LogMessageFatal()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889193973b  mesos::internal::master::allocator::DRFSorter::remove()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889192823e  mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeFramework()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8891fc7681  process::ProcessManager::resume()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8891fc7987  _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8890b10a60  (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889032d182  start_thread
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889005a47d  (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: *** Aborted at 1456980799 (unix time) try "date -d @1456980799" if you are using GNU date ***
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: PC: @     0x7f888ff9a227 (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: *** SIGSEGV (@0x0) received by PID 1 (TID 0x7f8887281700) from PID 0; stack trace: ***
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8890335340 (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f888ff9a227 (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8892022be9 google::DumpStackTraceAndExit()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201a93d google::LogMessage::Fail()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201c76d google::LogMessage::SendToLog()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201a52c google::LogMessage::Flush()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889201d069 google::LogMessageFatal::~LogMessageFatal()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889193973b mesos::internal::master::allocator::DRFSorter::remove()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889192823e mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeFramework()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8891fc7681 process::ProcessManager::resume()
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8891fc7987 _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f8890b10a60 (unknown)
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889032d182 start_thread
Mar 03 15:53:19 ip-172-31-45-158.aws.cmcrc.com docker[12703]: @     0x7f889005a47d (unknown)
Mar 03 15:53:30 ip-172-31-45-158.aws.cmcrc.com docker[13660]: mesos-master
{code}
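
For what it's worth, the "Check failed: total_.resources.contains(slaveId)" line is a glog CHECK, and a failed CHECK is fatal by design: it logs at FATAL severity, dumps the stack trace and aborts the whole process, which is why the master goes down instead of just logging an error. Here is a minimal, self-contained sketch of that behaviour (not Mesos source; the map and the slave id below are made up for illustration):
{code}
// Minimal sketch of glog CHECK semantics -- NOT Mesos source code.
// The map and the slave id "S7" below are made up for illustration.
#include <glog/logging.h>

#include <map>
#include <string>

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  // Stand-in for the sorter's per-slave resource totals.
  std::map<std::string, double> totalResources;

  const std::string slaveId = "S7";

  // Same shape as `CHECK(total_.resources.contains(slaveId))` in
  // sorter.cpp: if the condition is false, glog logs a FATAL message,
  // prints the stack trace and aborts the process -- there is no
  // recovery path, so the whole master dies with it.
  CHECK(totalResources.count(slaveId) > 0)
      << "total resources do not contain slave " << slaveId;

  return 0;
}
{code}
So any code path that reaches DRFSorter::remove() with a slave id the sorter no longer tracks will take the master down, which matches both traces above.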


> Sudden framework crash may bring down mesos master
> --------------------------------------------------
>
>                 Key: MESOS-4842
>                 URL: https://issues.apache.org/jira/browse/MESOS-4842
>             Project: Mesos
>          Issue Type: Bug
>          Components: framework
>    Affects Versions: 0.27.1
>            Reporter: Guillermo Rodriguez
>
> Using:
> swarm 1.1.3-rc1
> CoreOS 899.9
> Mesos 0.27.1
> Marathon 0.15.3
> When swarm is stopped/restarted it may crash the mesos-master. It doesn't always happen, but it happens frequently enough to be a pain.
> If for some reason the swarm service fails, marathon will try to restart the service. When this happens mesos may crash; I would say it happens around 50% of the time.
> This looks like a swarm/mesos problem, so I will report the same error to both lists.
> These are the final lines of the mesos logs:
> {code}
> I0303 04:32:45.327628 8 master.cpp:5202] Framework failover timeout, removing framework b4149972-942d-46bf-b886-644cd3d0c6f0-0004 (swarm) at scheduler(1)@172.31.39.68:3375
> I0303 04:32:45.327651 8 master.cpp:5933] Removing framework b4149972-942d-46bf-b886-644cd3d0c6f0-0004 (swarm) at scheduler(1)@172.31.39.68:3375
> I0303 04:32:45.327847 8 master.cpp:6445] Updating the state of task trth_download-SES.1fba45934ce4 of framework b4149972-942d-46bf-b886-644cd3d0c6f0-0004 (latest state: TASK_FAILED, status update state: TASK_KILLED)
> I0303 04:32:45.327879 8 master.cpp:6511] Removing task trth_download-SES.1fba45934ce4 with resources cpus(*):0.3; mem(*):450 of framework b4149972-942d-46bf-b886-644cd3d0c6f0-0004 on slave 2ce3e7f3-1712-4a4b-8338-04077c371a67-S7 at slave(1)@172.31.33.80:5051 (172.31.33.80)
> F0303 04:32:45.328032 8 sorter.cpp:251] Check failed: total_.resources.contains(slaveId)
> *** Check failure stack trace: ***
> E0303 04:32:45.328198 13 process.cpp:1966] Failed to shutdown socket with fd 53: Transport endpoint is not connected
>  @ 0x7fc22173893d google::LogMessage::Fail()
>  @ 0x7fc22173a76d google::LogMessage::SendToLog()
>  @ 0x7fc22173852c google::LogMessage::Flush()
>  @ 0x7fc22173b069 google::LogMessageFatal::~LogMessageFatal()
>  @ 0x7fc22105773b mesos::internal::master::allocator::DRFSorter::remove()
>  @ 0x7fc22104623e mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeFramework()
>  @ 0x7fc2216e5681 process::ProcessManager::resume()
>  @ 0x7fc2216e5987 _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv
>  @ 0x7fc22022ea60 (unknown)
>  @ 0x7fc21fa4b182 start_thread
>  @ 0x7fc21f77847d (unknown)
> *** Aborted at 1456979565 (unix time) try "date -d @1456979565" if you are using GNU date ***
> PC: @ 0x7fc21f6b8227 (unknown)
> *** SIGSEGV (@0x0) received by PID 1 (TID 0x7fc2181a2700) from PID 0; stack trace: ***
>  @ 0x7fc21fa53340 (unknown)
>  @ 0x7fc21f6b8227 (unknown)
>  @ 0x7fc221740be9 google::DumpStackTraceAndExit()
>  @ 0x7fc22173893d google::LogMessage::Fail()
>  @ 0x7fc22173a76d google::LogMessage::SendToLog()
>  @ 0x7fc22173852c google::LogMessage::Flush()
>  @ 0x7fc22173b069 google::LogMessageFatal::~LogMessageFatal()
>  @ 0x7fc22105773b mesos::internal::master::allocator::DRFSorter::remove()
>  @ 0x7fc22104623e mesos::internal::master::allocator::internal::HierarchicalAllocatorProcess::removeFramework()
>  @ 0x7fc2216e5681 process::ProcessManager::resume()
>  @ 0x7fc2216e5987 _ZNSt6thread5_ImplISt12_Bind_simpleIFSt5_BindIFZN7process14ProcessManager12init_threadsEvEUlRKSt11atomic_boolE_St17reference_wrapperIS6_EEEvEEE6_M_runEv
>  @ 0x7fc22022ea60 (unknown)
>  @ 0x7fc21fa4b182 start_thread
>  @ 0x7fc21f77847d (unknown)
> {code}
> If you ask me, it looks like swarm is terminated and its connections are lost, but at that exact moment a task that was running on swarm is finishing. Mesos then tries to inform the already-deceased framework that the task is finished and that its resources need to be recovered, but the framework is no longer there... so it crashes.
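> To make that guess concrete, here is a toy model of the suspected sequence (NOT Mesos source code; the class, the method names and the eager removal of the slave entry are all invented for illustration): a finished task's resources are recovered and the slave's entry disappears from the totals, then the framework removal asserts the slave is still there and the CHECK aborts the master.
> {code}
> // Toy model of the sequence described above -- NOT Mesos source code.
> // Sorter, totals, removeTask and removeFramework are invented names.
> #include <glog/logging.h>
>
> #include <map>
> #include <string>
>
> struct Sorter {
>   // Per-slave resource totals, keyed by a hypothetical slave id.
>   std::map<std::string, double> totals;
>
>   void removeTask(const std::string& slaveId, double cpus) {
>     totals[slaveId] -= cpus;
>     // Assume the slave entry is dropped once nothing is left on it.
>     if (totals[slaveId] <= 0.0) {
>       totals.erase(slaveId);
>     }
>   }
>
>   void removeFramework(const std::string& slaveId) {
>     // Analogous to `CHECK(total_.resources.contains(slaveId))` in
>     // sorter.cpp: if the entry is already gone, the process aborts.
>     CHECK(totals.count(slaveId) > 0) << "slave " << slaveId << " missing";
>     totals.erase(slaveId);
>   }
> };
>
> int main(int argc, char** argv) {
>   google::InitGoogleLogging(argv[0]);
>
>   Sorter sorter;
>   sorter.totals["S7"] = 0.3;     // hypothetical slave running one task
>
>   sorter.removeTask("S7", 0.3);  // the task finishes as swarm dies
>   sorter.removeFramework("S7");  // CHECK fails -> FATAL -> abort
>
>   return 0;
> }
> {code}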



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)