Posted to mapreduce-dev@hadoop.apache.org by "Thomas Graves (JIRA)" <ji...@apache.org> on 2012/06/04 20:10:23 UTC
[jira] [Created] (MAPREDUCE-4311) Capacity scheduler.xml does not accept decimal values for capacity and maximum-capacity settings
Thomas Graves created MAPREDUCE-4311:
----------------------------------------
Summary: Capacity scheduler.xml does not accept decimal values for capacity and maximum-capacity settings
Key: MAPREDUCE-4311
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4311
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/capacity-sched, mrv2
Affects Versions: 0.23.3
Reporter: Thomas Graves
If the capacity scheduler's capacity or maximum-capacity setting is given a decimal value, the ResourceManager fails to start:
Error starting ResourceManager
java.lang.NumberFormatException: For input string: "10.5"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
        at java.lang.Integer.parseInt(Integer.java:458)
        at java.lang.Integer.parseInt(Integer.java:499)
        at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:713)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:147)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:147)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:297)
        at ...
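For reference, a configuration like the following would trigger the failure (queue name "default" is illustrative; any queue with a fractional capacity hits the same code path):

```xml
<!-- capacity-scheduler.xml fragment (illustrative): a fractional
     capacity value that Configuration.getInt cannot parse -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>10.5</value>
</property>
```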
Hadoop 0.20 used to accept decimal values, and this could be an issue on large clusters, which are likely to have queues with small capacity allocations.
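The root cause visible in the stack trace is that the capacity value is read via Configuration.getInt, which delegates to Integer.parseInt and rejects any string containing a decimal point. A minimal standalone sketch (not Hadoop code) showing the failing parse and a float-based parse that would accept fractional capacities:

```java
// Sketch only: reproduces the parse behavior behind the stack trace,
// without depending on Hadoop classes.
public class CapacityParseDemo {

    // Mimics the failing path: Configuration.getInt -> Integer.parseInt.
    // Throws NumberFormatException for decimal strings such as "10.5".
    static int parseCapacityAsInt(String value) {
        return Integer.parseInt(value);
    }

    // A getFloat-style parse accepts fractional capacity values.
    static float parseCapacityAsFloat(String value) {
        return Float.parseFloat(value);
    }

    public static void main(String[] args) {
        boolean intParseFailed = false;
        try {
            parseCapacityAsInt("10.5");
        } catch (NumberFormatException e) {
            intParseFailed = true;
        }
        System.out.println("int parse failed: " + intParseFailed);
        System.out.println("float parse: " + parseCapacityAsFloat("10.5"));
    }
}
```

Prints "int parse failed: true" followed by "float parse: 10.5", i.e. switching the read from an integer parse to a float parse would accept the values that 0.20 allowed.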