Posted to user@mesos.apache.org by co...@gmail.com on 2015/11/08 13:20:49 UTC

Re: Job Constraints from Marathon or Spark

Hi Rodrick,

For Marathon, each constraint list should have length 2 (for unary postfix operators) or 3 (for binary infix operators).  Did you try specifying your two constraints like this?

"constraints": [
  [
    "rack",
    "LIKE",
    "2"
  ],
  [
    "rack",
    "UNIQUE"
  ]
]
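The length rule is easy to check before submitting a job.  A quick sketch (this `valid_constraints` helper is hypothetical, not part of Marathon itself):

```python
# Hypothetical sanity check for Marathon constraint lists: each entry must
# have 2 fields (unary postfix operator, e.g. UNIQUE) or 3 fields (binary
# infix operator, e.g. LIKE with an argument).
def valid_constraints(constraints):
    return all(len(c) in (2, 3) for c in constraints)

# The two separate constraints above pass:
print(valid_constraints([["rack", "LIKE", "2"], ["rack", "UNIQUE"]]))  # True

# A single four-element list mixing both operators would be rejected:
print(valid_constraints([["rack", "LIKE", "2", "UNIQUE"]]))  # False
```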

If this solves your problem, then Marathon could have saved you some trouble by rejecting the invalid constraint configuration in the first place.  Could you file a bug on Marathon's GitHub issue tracker?

--
Connor

> On Nov 7, 2015, at 15:13, Rodrick Brown <ro...@orchard-app.com> wrote:
> 
> I have a few dozen Marathon and Spark jobs I would like to use constraints with across my slaves, but I can never get this to work at all using the latest release, 0.25.1.
> 
> I have the following set on a few of my slaves. 
> 
> slaves[1-5]
> $ cat /etc/mesos-slave/attributes
> 'rack:1;zone:us-west-2c;owner:spark'
> 
> slaves[6-10]
> $ cat /etc/mesos-slave/attributes
> 'rack:2;zone:us-west-2a;owner:microservices'
> 
> I have the following Marathon job definition, but no matter what I do I can never get constraints to work other than the basic ones like hostname:UNIQUE or hostname:CLUSTER:codename.
> What could I be doing wrong? The job tries to deploy but stays in waiting mode forever.
> 
> {
>     "id": "mu-xxxx-service",
>     "cmd": "env && /opt/orchard/xxxx-xxxx-server/bin/run_jar.sh",
>     "cpus": 1.0,
>     "mem": 4096,
>     "disk": 100,
>     "instances": 2,
>     "constraints": [
>       [
>         "rack",
>         "LIKE",
>         "2",
>         "UNIQUE"
>       ]
>     ],
>     "maxLaunchDelaySeconds": 1,
>     "backoffFactor": 1.20,
>     "healthChecks": [
>       {
>         "gracePeriodSeconds": 3,
>         "intervalSeconds": 10,
>         "maxConsecutiveFailures": 3,
>         "portIndex": 0,
>         "protocol": "TCP",
>         "timeoutSeconds": 5
>       }
>     ],
>     "ports": [
>        0,
>        0
>     ],
>     "upgradeStrategy": {
>         "minimumHealthCapacity": 0.5,
>         "maximumOverCapacity": 0.5
>     }
> }
> 
> I’ve also tried running spark jobs like this
> 
> timeout 3600 /opt/spark-1.4.1-bin-hadoop2.4/bin/spark-submit --conf spark.mesos.constraints="rack:1" 
> 
> Jobs still get executed on all slaves. 
> 
> 
> 
> Rodrick Brown / DevOps Engineer
> +1 917 445 6839 / rodrick@orchardplatform.com
> 
> Orchard Platform 
> 101 5th Avenue, 4th Floor, New York, NY 10003 
> http://www.orchardplatform.com
> 
> 
> 
> 