Posted to reviews@spark.apache.org by mengxr <gi...@git.apache.org> on 2014/05/07 01:57:49 UTC

[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

GitHub user mengxr opened a pull request:

    https://github.com/apache/spark/pull/672

    [SPARK-1743][MLLIB] add loadLibSVMFile and saveAsLibSVMFile to pyspark

    Make loading/saving labeled data easier for pyspark users.
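
    For context, a minimal usage sketch of the two methods (paths and the
    feature count are made up; loadLibSVMFile's signature follows the
    util.py diff quoted later in this thread, and saveAsLibSVMFile is
    assumed to take the RDD and an output directory):

        from pyspark import SparkContext
        from pyspark.mllib.util import MLUtils

        sc = SparkContext(appName="libsvm-io-example")

        # Load "label index1:value1 index2:value2 ..." text into an
        # RDD[LabeledPoint]. One-based indices in the file become
        # zero-based features; with multiclass=False, labels > 0.5 map
        # to 1.0 and everything else to 0.0.
        points = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

        # When a dataset is split across files, pin the dimension so
        # all parts agree (numFeatures is inferred when nonpositive).
        part = MLUtils.loadLibSVMFile(sc, "data/part-00000", numFeatures=784)

        # Write an RDD[LabeledPoint] back out in the same text format.
        MLUtils.saveAsLibSVMFile(points, "out/libsvm")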

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mengxr/spark pyspark-mllib-util

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/672.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #672
    
----
commit d61668d18188b6b88a642dd741748ad7f3616045
Author: Xiangrui Meng <me...@databricks.com>
Date:   2014-05-06T23:54:28Z

    add loadLibSVMFile and saveAsLibSVMFile to pyspark

----



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/672



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42380728
  
    Merged build finished. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42392900
  
    Jenkins, retest this please.



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42392939
  
     Merged build triggered. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42383791
  
    
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14750/



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42383547
  
    Merged build started. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42374442
  
    Merged build started. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42394930
  
    Merged build finished. All automated tests passed.



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42374427
  
     Merged build triggered. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42394931
  
    All automated tests passed.
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14760/



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42377059
  
    All automated tests passed.
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14742/



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/672#discussion_r12357908
  
    --- Diff: python/pyspark/mllib/util.py ---
    @@ -0,0 +1,168 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +import numpy as np
    +
    +from pyspark.mllib.linalg import Vectors, SparseVector
    +from pyspark.mllib.regression import LabeledPoint
    +from pyspark.mllib._common import _convert_vector
    +
    +class MLUtils:
    +    """
    +    Helper methods to load, save and pre-process data used in ML Lib.
    +    """
    +
    +    @staticmethod
    +    def _parse_libsvm_line(line, multiclass):
    +        """Parses a line in LIBSVM format into (label, indices, values)."""
    +        items = line.split(None)
    +        label = float(items[0])
    +        if not multiclass:
    +            label = 1.0 if label > 0.5 else 0.0
    +        nnz = len(items) - 1
    +        indices = np.zeros(nnz, dtype=np.int32)
    +        values = np.zeros(nnz)
    +        for i in xrange(nnz):
    +            index, value = items[1 + i].split(":")
    +            indices[i] = int(index) - 1
    +            values[i] = float(value)
    +        return label, indices, values
    +
    +
    +    @staticmethod
    +    def _convert_labeled_point_to_libsvm(p):
    +        """Converts a LabeledPoint to a string in LIBSVM format."""
    +        items = [str(p.label)]
    +        v = _convert_vector(p.features)
    +        if type(v) == np.ndarray:
    +            for i in xrange(len(v)):
    +                items.append(str(i + 1) + ":" + str(v[i]))
    +        elif type(v) == SparseVector:
    +            nnz = len(v.indices)
    +            for i in xrange(nnz):
    +                items.append(str(v.indices[i] + 1) + ":" + str(v.values[i]))
    +        else:
    +            raise TypeError("_convert_labeled_point_to_libsvm needs either ndarray or SparseVector"
    +                            " but got %s" % type(v))
    +        return " ".join(items)
    +
    +
    +    @staticmethod
    +    def loadLibSVMFile(sc, path, multiclass=False, numFeatures=-1, minPartitions=None):
    +        """
    +        Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
    +        The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR.
    +        Each line represents a labeled sparse feature vector using the following format:
    +
    +        label index1:value1 index2:value2 ...
    +
    +        where the indices are one-based and in ascending order.
    +        This method parses each line into a [[org.apache.spark.mllib.regression.LabeledPoint]],
    +        where the feature indices are converted to zero-based.
    +
    +        :param sc: Spark context
    +        :param path: file or directory path in any Hadoop-supported file system URI
    +        :param multiclass: whether the input labels contain more than two classes. If false, any
    +                           label with value greater than 0.5 will be mapped to 1.0, or 0.0
    +                           otherwise. So it works for both +1/-1 and 1/0 cases. If true, the double
    +                           value parsed directly from the label string will be used as the label
    +                           value.
    +        :param numFeatures: number of features, which will be determined from the input data if a
    +                            nonpositive value is given. This is useful when the dataset is already
    +                            split into multiple files and you want to load them separately, because
    +                            some features may not be present in certain files, which leads to
    +                            inconsistent feature dimensions.
    +        :param minPartitions: min number of partitions
    +        :return: labeled data stored as an RDD[LabeledPoint]
    --- End diff --
    
    Epydoc doesn't work on my Mac. I will try to follow the syntax in conf.py.
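
    For reference, a standalone sketch of the line parser shown in the
    quoted diff (the sample line is made up; `range` replaces `xrange`
    so the snippet also runs outside Python 2):

        import numpy as np

        def parse_libsvm_line(line, multiclass=False):
            # Mirrors _parse_libsvm_line above: "label idx:val idx:val ...",
            # one-based indices in the text, zero-based in the result.
            items = line.split(None)
            label = float(items[0])
            if not multiclass:
                label = 1.0 if label > 0.5 else 0.0
            nnz = len(items) - 1
            indices = np.zeros(nnz, dtype=np.int32)
            values = np.zeros(nnz)
            for i in range(nnz):
                index, value = items[1 + i].split(":")
                indices[i] = int(index) - 1
                values[i] = float(value)
            return label, indices, values

        print(parse_libsvm_line("-1 3:0.5 7:2.0"))
        # label 0.0, indices [2 6], values [0.5 2.0]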



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/672#discussion_r12358071
  
    --- Diff: python/pyspark/mllib/util.py ---
    @@ -0,0 +1,168 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +import numpy as np
    +
    +from pyspark.mllib.linalg import Vectors, SparseVector
    +from pyspark.mllib.regression import LabeledPoint
    +from pyspark.mllib._common import _convert_vector
    +
    +class MLUtils:
    +    """
    +    Helper methods to load, save and pre-process data used in ML Lib.
    +    """
    +
    +    @staticmethod
    +    def _parse_libsvm_line(line, multiclass):
    +        """Parses a line in LIBSVM format into (label, indices, values)."""
    +        items = line.split(None)
    +        label = float(items[0])
    +        if not multiclass:
    +            label = 1.0 if label > 0.5 else 0.0
    +        nnz = len(items) - 1
    +        indices = np.zeros(nnz, dtype=np.int32)
    +        values = np.zeros(nnz)
    +        for i in xrange(nnz):
    +            index, value = items[1 + i].split(":")
    +            indices[i] = int(index) - 1
    +            values[i] = float(value)
    +        return label, indices, values
    +
    +
    +    @staticmethod
    +    def _convert_labeled_point_to_libsvm(p):
    +        """Converts a LabeledPoint to a string in LIBSVM format."""
    +        items = [str(p.label)]
    +        v = _convert_vector(p.features)
    +        if type(v) == np.ndarray:
    +            for i in xrange(len(v)):
    +                items.append(str(i + 1) + ":" + str(v[i]))
    +        elif type(v) == SparseVector:
    +            nnz = len(v.indices)
    +            for i in xrange(nnz):
    +                items.append(str(v.indices[i] + 1) + ":" + str(v.values[i]))
    +        else:
    +            raise TypeError("_convert_labeled_point_to_libsvm needs either ndarray or SparseVector"
    +                            " but got %s" % type(v))
    --- End diff --
    
    It is safe to leave this block in case someone updates `_convert_vector` in the future.
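
    For reference, a standalone sketch of the conversion being discussed
    (the stand-in types are made up so the snippet runs without pyspark;
    the real code dispatches on numpy.ndarray and SparseVector):

        from collections import namedtuple

        # Made-up stand-ins for SparseVector and LabeledPoint.
        Sparse = namedtuple("Sparse", ["indices", "values"])
        Point = namedtuple("Point", ["label", "features"])

        def to_libsvm(p):
            # Mirrors _convert_labeled_point_to_libsvm: emit
            # "label index:value ..." with one-based indices.
            items = [str(p.label)]
            v = p.features
            if isinstance(v, Sparse):
                for idx, val in zip(v.indices, v.values):
                    items.append("%d:%s" % (idx + 1, val))
            elif isinstance(v, (list, tuple)):
                for i, val in enumerate(v):
                    items.append("%d:%s" % (i + 1, val))
            else:
                # Defensive fallback, kept for the reason given above.
                raise TypeError("need a dense sequence or Sparse, got %s" % type(v))
            return " ".join(items)

        print(to_libsvm(Point(1.0, Sparse([2, 6], [0.5, 2.0]))))  # 1.0 3:0.5 7:2.0
        print(to_libsvm(Point(0.0, [1.5, 0.0, 3.0])))             # 0.0 1:1.5 2:0.0 3:3.0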



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42383789
  
    Merged build finished. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mateiz <gi...@git.apache.org>.
Github user mateiz commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42383338
  
    Jenkins, retest this please



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42380489
  
     Merged build triggered. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mateiz <gi...@git.apache.org>.
Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/672#discussion_r12357905
  
    --- Diff: python/pyspark/mllib/util.py ---
    @@ -0,0 +1,168 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +import numpy as np
    +
    +from pyspark.mllib.linalg import Vectors, SparseVector
    +from pyspark.mllib.regression import LabeledPoint
    +from pyspark.mllib._common import _convert_vector
    +
    +class MLUtils:
    +    """
    +    Helper methods to load, save and pre-process data used in ML Lib.
    +    """
    +
    +    @staticmethod
    +    def _parse_libsvm_line(line, multiclass):
    +        """Parses a line in LIBSVM format into (label, indices, values)."""
    +        items = line.split(None)
    +        label = float(items[0])
    +        if not multiclass:
    +            label = 1.0 if label > 0.5 else 0.0
    +        nnz = len(items) - 1
    +        indices = np.zeros(nnz, dtype=np.int32)
    +        values = np.zeros(nnz)
    +        for i in xrange(nnz):
    +            index, value = items[1 + i].split(":")
    +            indices[i] = int(index) - 1
    +            values[i] = float(value)
    +        return label, indices, values
    +
    +
    +    @staticmethod
    +    def _convert_labeled_point_to_libsvm(p):
    +        """Converts a LabeledPoint to a string in LIBSVM format."""
    +        items = [str(p.label)]
    +        v = _convert_vector(p.features)
    +        if type(v) == np.ndarray:
    +            for i in xrange(len(v)):
    +                items.append(str(i + 1) + ":" + str(v[i]))
    +        elif type(v) == SparseVector:
    +            nnz = len(v.indices)
    +            for i in xrange(nnz):
    +                items.append(str(v.indices[i] + 1) + ":" + str(v.values[i]))
    +        else:
    +            raise TypeError("_convert_labeled_point_to_libsvm needs either ndarray or SparseVector"
    +                            " but got %s" % type(v))
    +        return " ".join(items)
    +
    +
    +    @staticmethod
    +    def loadLibSVMFile(sc, path, multiclass=False, numFeatures=-1, minPartitions=None):
    +        """
    +        Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
    +        The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR.
    +        Each line represents a labeled sparse feature vector using the following format:
    +
    +        label index1:value1 index2:value2 ...
    +
    +        where the indices are one-based and in ascending order.
    +        This method parses each line into a [[org.apache.spark.mllib.regression.LabeledPoint]],
    +        where the feature indices are converted to zero-based.
    +
    +        :param sc: Spark context
    +        :param path: file or directory path in any Hadoop-supported file system URI
    +        :param multiclass: whether the input labels contain more than two classes. If false, any
    +                           label with value greater than 0.5 will be mapped to 1.0, or 0.0
    +                           otherwise. So it works for both +1/-1 and 1/0 cases. If true, the double
    +                           value parsed directly from the label string will be used as the label
    +                           value.
    +        :param numFeatures: number of features, which will be determined from the input data if a
    +                            nonpositive value is given. This is useful when the dataset is already
    +                            split into multiple files and you want to load them separately, because
    +                            some features may not be present in certain files, which leads to
    +                            inconsistent feature dimensions.
    --- End diff --
    
    Python convention is that doc comments must be at most 72 characters wide, because they need to be displayed, possibly indented, in people's terminals. Please make these lines shorter and change the indent of the lines below to match other files (e.g. conf.py, mllib/linalg.py). You can check the result in the built docs.



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mateiz <gi...@git.apache.org>.
Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/672#discussion_r12357955
  
    --- Diff: python/pyspark/mllib/util.py ---
    @@ -0,0 +1,168 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +import numpy as np
    +
    +from pyspark.mllib.linalg import Vectors, SparseVector
    +from pyspark.mllib.regression import LabeledPoint
    +from pyspark.mllib._common import _convert_vector
    +
    +class MLUtils:
    +    """
    +    Helper methods to load, save and pre-process data used in ML Lib.
    +    """
    +
    +    @staticmethod
    +    def _parse_libsvm_line(line, multiclass):
    +        """Parses a line in LIBSVM format into (label, indices, values)."""
    +        items = line.split(None)
    +        label = float(items[0])
    +        if not multiclass:
    +            label = 1.0 if label > 0.5 else 0.0
    +        nnz = len(items) - 1
    +        indices = np.zeros(nnz, dtype=np.int32)
    +        values = np.zeros(nnz)
    +        for i in xrange(nnz):
    +            index, value = items[1 + i].split(":")
    +            indices[i] = int(index) - 1
    +            values[i] = float(value)
    +        return label, indices, values
    +
    +
    +    @staticmethod
    +    def _convert_labeled_point_to_libsvm(p):
    +        """Converts a LabeledPoint to a string in LIBSVM format."""
    +        items = [str(p.label)]
    +        v = _convert_vector(p.features)
    +        if type(v) == np.ndarray:
    +            for i in xrange(len(v)):
    +                items.append(str(i + 1) + ":" + str(v[i]))
    +        elif type(v) == SparseVector:
    +            nnz = len(v.indices)
    +            for i in xrange(nnz):
    +                items.append(str(v.indices[i] + 1) + ":" + str(v.values[i]))
    +        else:
    +            raise TypeError("_convert_labeled_point_to_libsvm needs either ndarray or SparseVector"
    +                            " but got %s" % type(v))
    --- End diff --
    
    I don't think this can happen here; `_convert_vector` will complain instead.



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42383541
  
     Merged build triggered. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42380730
  
    
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14749/



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42377056
  
    Merged build finished. All automated tests passed.



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42392946
  
    Merged build started. 



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by mateiz <gi...@git.apache.org>.
Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/672#discussion_r12357850
  
    --- Diff: python/pyspark/mllib/util.py ---
    @@ -0,0 +1,168 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +import numpy as np
    +
    +from pyspark.mllib.linalg import Vectors, SparseVector
    +from pyspark.mllib.regression import LabeledPoint
    +from pyspark.mllib._common import _convert_vector
    +
    +class MLUtils:
    +    """
    +    Helper methods to load, save and pre-process data used in ML Lib.
    +    """
    +
    +    @staticmethod
    +    def _parse_libsvm_line(line, multiclass):
    +        """Parses a line in LIBSVM format into (label, indices, values)."""
    +        items = line.split(None)
    +        label = float(items[0])
    +        if not multiclass:
    +            label = 1.0 if label > 0.5 else 0.0
    +        nnz = len(items) - 1
    +        indices = np.zeros(nnz, dtype=np.int32)
    +        values = np.zeros(nnz)
    +        for i in xrange(nnz):
    +            index, value = items[1 + i].split(":")
    +            indices[i] = int(index) - 1
    +            values[i] = float(value)
    +        return label, indices, values
    +
    +
    +    @staticmethod
    +    def _convert_labeled_point_to_libsvm(p):
    +        """Converts a LabeledPoint to a string in LIBSVM format."""
    +        items = [str(p.label)]
    +        v = _convert_vector(p.features)
    +        if type(v) == np.ndarray:
    +            for i in xrange(len(v)):
    +                items.append(str(i + 1) + ":" + str(v[i]))
    +        elif type(v) == SparseVector:
    +            nnz = len(v.indices)
    +            for i in xrange(nnz):
    +                items.append(str(v.indices[i] + 1) + ":" + str(v.values[i]))
    +        else:
    +            raise TypeError("_convert_labeled_point_to_libsvm needs either ndarray or SparseVector"
    +                            " but got %s" % type(v))
    +        return " ".join(items)
    +
    +
    +    @staticmethod
    +    def loadLibSVMFile(sc, path, multiclass=False, numFeatures=-1, minPartitions=None):
    +        """
    +        Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
    +        The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR.
    +        Each line represents a labeled sparse feature vector using the following format:
    +
    +        label index1:value1 index2:value2 ...
    +
    +        where the indices are one-based and in ascending order.
    +        This method parses each line into a [[org.apache.spark.mllib.regression.LabeledPoint]],
    +        where the feature indices are converted to zero-based.
    +
    +        :param sc: Spark context
    +        :param path: file or directory path in any Hadoop-supported file system URI
    +        :param multiclass: whether the input labels contain more than two classes. If false, any
    +                           label with value greater than 0.5 will be mapped to 1.0, or 0.0
    +                           otherwise. So it works for both +1/-1 and 1/0 cases. If true, the double
    +                           value parsed directly from the label string will be used as the label
    +                           value.
    +        :param numFeatures: number of features, which will be determined from the input data if a
    +                            nonpositive value is given. This is useful when the dataset is already
    +                            split into multiple files and you want to load them separately, because
    +                            some features may not be present in certain files, which leads to
    +                            inconsistent feature dimensions.
    +        :param minPartitions: min number of partitions
    +        :return: labeled data stored as an RDD[LabeledPoint]
    --- End diff --
    
    I believe you should use `@param` and `@return` for Epydoc. Check pyspark/conf.py for an example. Or have you tried generating the docs with this and seen it work?
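
    For illustration, a module-level sketch of how the docstring could
    look in the `@param` style, wrapped to 72 columns (the wording and
    markup are only a suggestion, not the final text):

        def loadLibSVMFile(sc, path, multiclass=False, numFeatures=-1,
                           minPartitions=None):
            """
            Loads labeled data in the LIBSVM format into an
            RDD[LabeledPoint]. Each line is "label index1:value1
            index2:value2 ..." with one-based, ascending indices,
            which are converted to zero-based.

            @param sc: Spark context
            @param path: file or directory path in any
                Hadoop-supported file system URI
            @param multiclass: if False, labels greater than 0.5 map
                to 1.0 and the rest to 0.0 (covers both +1/-1 and
                1/0); if True, the parsed value is used as-is
            @param numFeatures: number of features; determined from
                the input when nonpositive, which matters when a
                dataset is split across several files
            @param minPartitions: minimum number of partitions
            @return: labeled data stored as an RDD[LabeledPoint]
            """
            raise NotImplementedError("docstring-format sketch only")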



[GitHub] spark pull request: [SPARK-1743][MLLIB] add loadLibSVMFile and sav...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/672#issuecomment-42380497
  
    Merged build started. 

