Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/05/01 14:17:28 UTC

[GitHub] [spark] tgravescs commented on a change in pull request #24406: [SPARK-27024] Executor interface for cluster managers to support GPU and other resources

tgravescs commented on a change in pull request #24406: [SPARK-27024] Executor interface for cluster managers to support GPU and other resources
URL: https://github.com/apache/spark/pull/24406#discussion_r280084740
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/ResourceInformation.scala
 ##########
 @@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark
+
+import org.apache.spark.annotation.Evolving
+
+/**
+ * Class to hold information about a type of resource. A resource could be a GPU, an FPGA, etc.
+ * The array of addresses is resource specific, and it is up to the user to interpret the
+ * addresses. The units and addresses may be empty if they do not apply to that resource.
+ *
+ * One example is GPUs, where the addresses would be the indices of the GPUs, the count would be
+ * the number of GPUs, and the units would be an empty string.
+ *
+ * @param name the name of the resource
+ * @param units the units of the resource; can be an empty string if units don't apply
+ * @param count the number of resources available
+ * @param addresses an optional array of strings describing the addresses of the resource
+ */
+@Evolving
+case class ResourceInformation(
+    private val name: String,
+    private val units: String,
+    private val count: Long,
+    private val addresses: Array[String] = Array.empty) {
+
+  def getName(): String = name
 
 Review comment:
   Yes, they are vals, so they are immutable, which is what I intended. If you leave the private off, Scala generates an accessor named after the field, such as name(); I didn't want that, because I wanted the getters to be consistently named getName(), get....
   You obviously have to be able to create one, so the constructor shows the parameters, but the user doesn't care about that: they aren't creating it, they are consuming it.
   You are right that it's a Java convention; personally, for certain things I prefer it. Spark supports both Scala and Java, so you pick one convention, and unless you wrap it, it is going to be non-conventional for one of the two languages.
   TaskContext has getters formatted this way, and so do RDD and other places. The newer Dataset APIs don't, though. I'll just remove them, since I don't feel that strongly about it.
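   For reference, here is a minimal, self-contained sketch of the accessor behavior under discussion. It mirrors the class from this diff, but it is illustrative only (the example object and its names are hypothetical, not part of the PR):

   // Marking the constructor params private suppresses the Scala-style
   // accessors (info.name) that a case class would otherwise generate;
   // explicit Java-style getters are provided instead.
   case class ResourceInformation(
       private val name: String,
       private val units: String,
       private val count: Long,
       private val addresses: Array[String] = Array.empty) {

     def getName(): String = name
     def getUnits(): String = units
     def getCount(): Long = count
     def getAddresses(): Array[String] = addresses
   }

   object ResourceInformationExample {
     def main(args: Array[String]): Unit = {
       // GPU example from the Scaladoc: addresses are the GPU indices,
       // count is the number of GPUs, and units are an empty string.
       val gpus = ResourceInformation("gpu", "", 2, Array("0", "1"))
       println(gpus.getName())                    // prints: gpu
       println(gpus.getAddresses().mkString(",")) // prints: 0,1
       // gpus.name would not compile here: the param is private.
     }
   }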

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org