Posted to reviews@spark.apache.org by "WeichenXu123 (via GitHub)" <gi...@apache.org> on 2023/04/12 00:02:17 UTC

[GitHub] [spark] WeichenXu123 opened a new pull request, #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

WeichenXu123 opened a new pull request, #40748:
URL: https://github.com/apache/spark/pull/40748

   <!--
   Thanks for sending a pull request!  Here are some tips for you:
     1. If this is your first time, please read our contributor guidelines: https://spark.apache.org/contributing.html
     2. Ensure you have added or run the appropriate tests for your PR: https://spark.apache.org/developer-tools.html
     3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][SPARK-XXXX] Your PR title ...'.
     4. Be sure to keep the PR description updated to reflect all changes.
     5. Please write your PR title to summarize what this PR proposes.
     6. If possible, provide a concise example to reproduce the issue for a faster review.
     7. If you want to add a new configuration, please read the guideline first for naming configurations in
        'core/src/main/scala/org/apache/spark/internal/config/ConfigEntry.scala'.
     8. If you want to add or modify an error type or message, please read the guideline first in
        'core/src/main/resources/error/README.md'.
   -->
   
   ### What changes were proposed in this pull request?
   <!--
   Please clarify what changes you are proposing. The purpose of this section is to outline the changes and how this PR fixes the issue. 
   If possible, please consider writing useful notes for better and faster reviews in your PR. See the examples below.
     1. If you refactor some codes with changing classes, showing the class hierarchy will help reviewers.
     2. If you fix some SQL features, you can provide some references of other DBMSes.
     3. If there is design documentation, please add the link.
     4. If there is a discussion in the mailing list, please add the link.
   -->
   
   
   ### Why are the changes needed?
   <!--
   Please clarify why the changes are needed. For instance,
     1. If you propose a new API, clarify the use case for a new API.
     2. If you fix a bug, you can clarify why it is a bug.
   -->
   
   
   ### Does this PR introduce _any_ user-facing change?
   <!--
   Note that it means *any* user-facing change including all aspects such as the documentation fix.
   If yes, please clarify the previous behavior and the change this PR proposes - provide the console output, description and/or an example to show the behavior difference if possible.
   If possible, please also clarify if this is a user-facing change compared to the released Spark versions or within the unreleased branches such as master.
   If no, write 'No'.
   -->
   
   
   ### How was this patch tested?
   <!--
   If tests were added, say they were added here. Please make sure to add some test cases that check the changes thoroughly including negative and positive cases if possible.
   If it was tested in a way different from regular unit tests, please clarify how you tested step by step, ideally copy and paste-able, so that other reviewers can test and check, and descendants can verify in the future.
   If tests were not added, please describe why they were not added and/or why it was difficult to add.
   If benchmark tests were added, please run the benchmarks in GitHub Actions for the consistent environment, and the instructions could accord to: https://spark.apache.org/developer-tools.html#github-workflow-benchmarks.
   -->
   




[GitHub] [spark] HyukjinKwon commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163506549


##########
python/pyspark/mlv2/base.py:
##########
@@ -0,0 +1,426 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from abc import ABCMeta, abstractmethod
+
+import copy
+import threading
+
+from typing import (
+    Any,
+    Callable,
+    Generic,
+    Iterator,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    TypeVar,
+    Union,
+    cast,
+    overload,
+    TYPE_CHECKING,
+)
+
+from pyspark import since
+from pyspark.ml.param import P
+from pyspark.ml.common import inherit_doc
+from pyspark.ml.param.shared import (
+    HasInputCol,
+    HasOutputCol,
+    HasLabelCol,
+    HasFeaturesCol,
+    HasPredictionCol,
+)
+from pyspark.sql.dataframe import DataFrame

Review Comment:
   Yeah .. I am thinking that we might need a parent class for both too .. 
   BTW, we cannot call `is_remote()` at import time because `SPARK_REMOTE` is only set when the Spark session is created.
   
   If this is just being used for type hinting, I think just using `from pyspark.sql.dataframe import DataFrame` would be fine for now because they have the same API anyway.
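
   A minimal sketch (illustrative, not code from the PR) of the type-hint-only import suggested above, assuming the classic and Connect `DataFrame` classes expose the same public API (the function and parameter names below are placeholders):

   ```
   from typing import TYPE_CHECKING

   if TYPE_CHECKING:
       # Evaluated only by type checkers, so nothing from pyspark.sql is imported
       # at runtime and no Spark session needs to exist yet.
       from pyspark.sql.dataframe import DataFrame


   def select_training_columns(dataset: "DataFrame", features_col: str, label_col: str) -> "DataFrame":
       # Works for both classic and Connect DataFrames, which share the same API.
       return dataset.select(features_col, label_col)
   ```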
   





[GitHub] [spark] WeichenXu123 commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "WeichenXu123 (via GitHub)" <gi...@apache.org>.
WeichenXu123 commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163576026


##########
python/pyspark/mlv2/base.py:
##########
@@ -0,0 +1,426 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from abc import ABCMeta, abstractmethod
+
+import copy
+import threading
+
+from typing import (
+    Any,
+    Callable,
+    Generic,
+    Iterator,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    TypeVar,
+    Union,
+    cast,
+    overload,
+    TYPE_CHECKING,
+)
+
+from pyspark import since
+from pyspark.ml.param import P
+from pyspark.ml.common import inherit_doc
+from pyspark.ml.param.shared import (
+    HasInputCol,
+    HasOutputCol,
+    HasLabelCol,
+    HasFeaturesCol,
+    HasPredictionCol,
+)
+from pyspark.sql.dataframe import DataFrame

Review Comment:
   Not a big issue; the DataFrame class is only used for type hints in the code.





[GitHub] [spark] zhengruifeng commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163524924


##########
python/pyspark/mlv2/base.py:
##########
@@ -0,0 +1,426 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from abc import ABCMeta, abstractmethod
+
+import copy
+import threading
+
+from typing import (
+    Any,
+    Callable,
+    Generic,
+    Iterator,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    TypeVar,
+    Union,
+    cast,
+    overload,
+    TYPE_CHECKING,
+)
+
+from pyspark import since
+from pyspark.ml.param import P
+from pyspark.ml.common import inherit_doc
+from pyspark.ml.param.shared import (
+    HasInputCol,
+    HasOutputCol,
+    HasLabelCol,
+    HasFeaturesCol,
+    HasPredictionCol,
+)
+from pyspark.sql.dataframe import DataFrame

Review Comment:
   > I am thinking that we might need a parent class for both too
   
   +1. FYI, the new ML implementations will only use the pure DataFrame/Column/functions APIs, in order to support both regular Spark and Spark Connect.
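
   As a rough illustration (not code from the PR), a helper written only against the public DataFrame/Column/functions APIs runs unchanged on classic Spark and Spark Connect:

   ```
   from pyspark.sql import functions as F


   def count_distinct_labels(dataset, label_col: str) -> int:
       # Only public DataFrame/Column/functions APIs are used, so the same code
       # works whether `dataset` is a classic or a Spark Connect DataFrame.
       return dataset.select(F.countDistinct(F.col(label_col))).head()[0]
   ```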





[GitHub] [spark] WeichenXu123 commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "WeichenXu123 (via GitHub)" <gi...@apache.org>.
WeichenXu123 commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163576856


##########
python/pyspark/mlv2/classification/logistic_regression.py:
##########
@@ -0,0 +1,190 @@
+import numpy as np
+import math
+from pyspark.mlv2.base import Estimator, Model
+
+from pyspark.sql import DataFrame
+
+from pyspark.ml.param import (
+    Param,
+    Params,
+    TypeConverters,
+)
+from pyspark.ml.torch.distributor import TorchDistributor
+from pyspark.mlv2.classification.base import _ProbabilisticClassifierParams
+from pyspark.ml.param.shared import (
+    HasRegParam,
+    HasElasticNetParam,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+)
+from pyspark.mlv2.common_params import (
+    HasUseGPU,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+)
+from pyspark.sql.functions import lit, count, countDistinct
+
+import torch
+import torch.nn as torch_nn
+import torch.nn.functional as torch_fn
+
+
+class _LogisticRegressionParams(
+    _ProbabilisticClassifierParams,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+):
+    """
+    Params for :py:class:`LogisticRegression` and :py:class:`LogisticRegressionModel`.
+
+    .. versionadded:: 3.0.0
+    """
+
+    def __init__(self, *args: Any):
+        super(_LogisticRegressionParams, self).__init__(*args)
+        self._setDefault(
+            maxIter=100,
+            tol=1e-6,
+            numTrainWorkers=1,
+            numBatchSize=32,
+            learning_rate=0.001,
+            momentum=0.9,
+        )
+
+
+class _Net(torch_nn.Module):
+    def __init__(self, num_features, num_labels, bias) -> None:
+        super(_Net, self).__init__()
+
+        if num_labels > 2:
+            self.is_multinomial = True
+            output_dim = num_labels
+        else:
+            self.is_multinomial = False
+            output_dim = 1
+
+        self.fc = torch_nn.Linear(num_features, output_dim, bias=bias)
+
+    def forward(self, x: Any) -> Any:
+        output = self.fc(x)
+        if not self.is_multinomial:
+            output = torch.sigmoid(output).squeeze()
+        return output
+
+
+def _train_worker_fn(
+    num_samples_per_worker,
+    num_features,
+    batch_size,
+    max_iter,
+    num_labels,
+    learning_rate,
+    momentum,
+    fit_intercept,
+):
+    from pyspark.ml.torch.distributor import get_spark_partition_data_loader
+    from torch.nn.parallel import DistributedDataParallel as DDP
+    import torch.distributed
+    import torch.optim as optim
+
+    torch.distributed.init_process_group("gloo")
+
+    ddp_model = DDP(_Net(
+        num_features=num_features,
+        num_labels=num_labels,
+        bias=fit_intercept
+    ))
+
+    if num_labels > 2:
+        loss_fn = torch_nn.CrossEntropyLoss()
+    else:
+        loss_fn = torch_nn.BCELoss()
+
+    optimizer = optim.SGD(ddp_model.parameters(), lr=learning_rate, momentum=momentum)
+    data_loader = get_spark_partition_data_loader(num_samples_per_worker, batch_size)
+    for i in range(max_iter):
+        ddp_model.train()
+        for x, target in data_loader:
+            optimizer.zero_grad()
+            output = ddp_model(x)
+            loss_fn(output, target).backward()
+            optimizer.step()
+
+        # TODO: early stopping
+        #  When each epoch ends, computes loss on validation dataset and compare
+        #  current epoch validation loss with last epoch validation loss, if
+        #  less than provided `tol`, stop training.
+
+    if torch.distributed.get_rank() == 0:
+        return ddp_model.module.state_dict()
+
+    return None
+
+
+class LogisticRegression(Estimator["LogisticRegressionModel"], _LogisticRegressionParams):
+
+    def _fit(self, dataset: DataFrame) -> "LogisticRegressionModel":
+
+        num_train_workers = self.getNumTrainWorkers()
+        batch_size = self.getBatchSize()
+
+        # Q: Shall we persist the shuffled dataset ?
+        # shuffling results are already cached
+        dataset = (
+            dataset
+            .select(self.getFeaturesCol(), self.getLabelCol())
+            .repartition(num_train_workers)
+            .persist()

Review Comment:
   @zhengruifeng 
   
   My thoughts [P1/P2, not urgent]:
   
   On the Spark SQL side, could we support a new kind of Spark DataFrame reader that loads a DataFrame from saved Parquet/Delta files, with the loaded DataFrame roughly evenly distributed across a specified number of partitions, so that we can avoid the "repartition" (shuffle) step in this case?
   
   The proposed API would look like:
   
   `spark.read.setLoadedNumPartitions(N).load(....)`
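
   For context, a sketch of the pattern such a reader would replace (illustrative only; `num_train_workers` and `path` are placeholders, and the proposed reader does not exist in Spark today):

   ```
   from pyspark.sql import SparkSession

   spark = SparkSession.builder.getOrCreate()
   num_train_workers = 4              # placeholder
   path = "/tmp/training_data"        # placeholder

   # Today the partition count follows the file layout, so an explicit
   # repartition (and hence a shuffle) is needed to match the worker count.
   df = spark.read.parquet(path).repartition(num_train_workers)

   # Hypothetical shape of the proposed reader API:
   # df = spark.read.setLoadedNumPartitions(num_train_workers).load(...)
   ```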
   





[GitHub] [spark] WeichenXu123 commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "WeichenXu123 (via GitHub)" <gi...@apache.org>.
WeichenXu123 commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163575167


##########
python/pyspark/mlv2/classification/logistic_regression.py:
##########
@@ -0,0 +1,190 @@
+import numpy as np
+import math
+from pyspark.mlv2.base import Estimator, Model
+
+from pyspark.sql import DataFrame
+
+from pyspark.ml.param import (
+    Param,
+    Params,
+    TypeConverters,
+)
+from pyspark.ml.torch.distributor import TorchDistributor
+from pyspark.mlv2.classification.base import _ProbabilisticClassifierParams
+from pyspark.ml.param.shared import (
+    HasRegParam,
+    HasElasticNetParam,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+)
+from pyspark.mlv2.common_params import (
+    HasUseGPU,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+)
+from pyspark.sql.functions import lit, count, countDistinct
+
+import torch
+import torch.nn as torch_nn
+import torch.nn.functional as torch_fn
+
+
+class _LogisticRegressionParams(
+    _ProbabilisticClassifierParams,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+):
+    """
+    Params for :py:class:`LogisticRegression` and :py:class:`LogisticRegressionModel`.
+
+    .. versionadded:: 3.0.0
+    """
+
+    def __init__(self, *args: Any):
+        super(_LogisticRegressionParams, self).__init__(*args)
+        self._setDefault(
+            maxIter=100,
+            tol=1e-6,
+            numTrainWorkers=1,
+            numBatchSize=32,
+            learning_rate=0.001,
+            momentum=0.9,
+        )
+
+
+class _Net(torch_nn.Module):
+    def __init__(self, num_features, num_labels, bias) -> None:
+        super(_Net, self).__init__()
+
+        if num_labels > 2:
+            self.is_multinomial = True
+            output_dim = num_labels
+        else:
+            self.is_multinomial = False
+            output_dim = 1
+
+        self.fc = torch_nn.Linear(num_features, output_dim, bias=bias)
+
+    def forward(self, x: Any) -> Any:
+        output = self.fc(x)
+        if not self.is_multinomial:
+            output = torch.sigmoid(output).squeeze()
+        return output
+
+
+def _train_worker_fn(
+    num_samples_per_worker,
+    num_features,
+    batch_size,
+    max_iter,
+    num_labels,
+    learning_rate,
+    momentum,
+    fit_intercept,
+):
+    from pyspark.ml.torch.distributor import get_spark_partition_data_loader
+    from torch.nn.parallel import DistributedDataParallel as DDP
+    import torch.distributed
+    import torch.optim as optim
+
+    torch.distributed.init_process_group("gloo")
+
+    ddp_model = DDP(_Net(
+        num_features=num_features,
+        num_labels=num_labels,
+        bias=fit_intercept
+    ))
+
+    if num_labels > 2:
+        loss_fn = torch_nn.CrossEntropyLoss()
+    else:
+        loss_fn = torch_nn.BCELoss()
+
+    optimizer = optim.SGD(ddp_model.parameters(), lr=learning_rate, momentum=momentum)
+    data_loader = get_spark_partition_data_loader(num_samples_per_worker, batch_size)
+    for i in range(max_iter):
+        ddp_model.train()
+        for x, target in data_loader:
+            optimizer.zero_grad()
+            output = ddp_model(x)
+            loss_fn(output, target).backward()
+            optimizer.step()
+
+        # TODO: early stopping
+        #  When each epoch ends, computes loss on validation dataset and compare
+        #  current epoch validation loss with last epoch validation loss, if
+        #  less than provided `tol`, stop training.

Review Comment:
   We can add the objective history in a follow-up PR.
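
   A rough sketch (illustrative, reusing names from the quoted `_train_worker_fn`; `validation_loader` and `tol` are assumed to be available in that scope) of the early stopping and objective history the TODO describes:

   ```
   prev_val_loss = float("inf")
   objective_history = []

   for epoch in range(max_iter):
       ddp_model.train()
       for x, target in data_loader:
           optimizer.zero_grad()
           loss_fn(ddp_model(x), target).backward()
           optimizer.step()

       # Compute the validation loss at the end of every epoch.
       ddp_model.eval()
       with torch.no_grad():
           val_loss = sum(
               loss_fn(ddp_model(x), target).item() for x, target in validation_loader
           ) / len(validation_loader)

       objective_history.append(val_loss)
       if abs(prev_val_loss - val_loss) < tol:
           break  # converged within the provided tolerance
       prev_val_loss = val_loss
   ```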





[GitHub] [spark] WeichenXu123 closed pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "WeichenXu123 (via GitHub)" <gi...@apache.org>.
WeichenXu123 closed pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor
URL: https://github.com/apache/spark/pull/40748




[GitHub] [spark] WeichenXu123 commented on pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "WeichenXu123 (via GitHub)" <gi...@apache.org>.
WeichenXu123 commented on PR #40748:
URL: https://github.com/apache/spark/pull/40748#issuecomment-1568294541

   New PR https://github.com/apache/spark/pull/41383




[GitHub] [spark] zhengruifeng commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163480629


##########
python/pyspark/mlv2/classification/logistic_regression.py:
##########
@@ -0,0 +1,190 @@
+import numpy as np
+import math
+from pyspark.mlv2.base import Estimator, Model
+
+from pyspark.sql import DataFrame
+
+from pyspark.ml.param import (
+    Param,
+    Params,
+    TypeConverters,
+)
+from pyspark.ml.torch.distributor import TorchDistributor
+from pyspark.mlv2.classification.base import _ProbabilisticClassifierParams
+from pyspark.ml.param.shared import (
+    HasRegParam,
+    HasElasticNetParam,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+)
+from pyspark.mlv2.common_params import (
+    HasUseGPU,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+)
+from pyspark.sql.functions import lit, count, countDistinct
+
+import torch
+import torch.nn as torch_nn
+import torch.nn.functional as torch_fn
+
+
+class _LogisticRegressionParams(
+    _ProbabilisticClassifierParams,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+):
+    """
+    Params for :py:class:`LogisticRegression` and :py:class:`LogisticRegressionModel`.
+
+    .. versionadded:: 3.0.0
+    """
+
+    def __init__(self, *args: Any):
+        super(_LogisticRegressionParams, self).__init__(*args)
+        self._setDefault(
+            maxIter=100,
+            tol=1e-6,
+            numTrainWorkers=1,
+            numBatchSize=32,
+            learning_rate=0.001,
+            momentum=0.9,
+        )
+
+
+class _Net(torch_nn.Module):
+    def __init__(self, num_features, num_labels, bias) -> None:
+        super(_Net, self).__init__()
+
+        if num_labels > 2:
+            self.is_multinomial = True
+            output_dim = num_labels
+        else:
+            self.is_multinomial = False
+            output_dim = 1
+
+        self.fc = torch_nn.Linear(num_features, output_dim, bias=bias)
+
+    def forward(self, x: Any) -> Any:
+        output = self.fc(x)
+        if not self.is_multinomial:
+            output = torch.sigmoid(output).squeeze()
+        return output
+
+
+def _train_worker_fn(
+    num_samples_per_worker,
+    num_features,
+    batch_size,
+    max_iter,
+    num_labels,
+    learning_rate,
+    momentum,
+    fit_intercept,
+):
+    from pyspark.ml.torch.distributor import get_spark_partition_data_loader
+    from torch.nn.parallel import DistributedDataParallel as DDP
+    import torch.distributed
+    import torch.optim as optim
+
+    torch.distributed.init_process_group("gloo")
+
+    ddp_model = DDP(_Net(
+        num_features=num_features,
+        num_labels=num_labels,
+        bias=fit_intercept
+    ))
+
+    if num_labels > 2:
+        loss_fn = torch_nn.CrossEntropyLoss()
+    else:
+        loss_fn = torch_nn.BCELoss()
+
+    optimizer = optim.SGD(ddp_model.parameters(), lr=learning_rate, momentum=momentum)
+    data_loader = get_spark_partition_data_loader(num_samples_per_worker, batch_size)
+    for i in range(max_iter):
+        ddp_model.train()
+        for x, target in data_loader:
+            optimizer.zero_grad()
+            output = ddp_model(x)
+            loss_fn(output, target).backward()
+            optimizer.step()
+
+        # TODO: early stopping
+        #  When each epoch ends, computes loss on validation dataset and compare
+        #  current epoch validation loss with last epoch validation loss, if
+        #  less than provided `tol`, stop training.

Review Comment:
   Do we need to store the objective curve here, or can users see the curve in TensorBoard?
   



##########
python/pyspark/mlv2/classification/logistic_regression.py:
##########
@@ -0,0 +1,190 @@
+import numpy as np
+import math
+from pyspark.mlv2.base import Estimator, Model
+
+from pyspark.sql import DataFrame
+
+from pyspark.ml.param import (
+    Param,
+    Params,
+    TypeConverters,
+)
+from pyspark.ml.torch.distributor import TorchDistributor
+from pyspark.mlv2.classification.base import _ProbabilisticClassifierParams
+from pyspark.ml.param.shared import (
+    HasRegParam,
+    HasElasticNetParam,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+)
+from pyspark.mlv2.common_params import (
+    HasUseGPU,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+)
+from pyspark.sql.functions import lit, count, countDistinct
+
+import torch
+import torch.nn as torch_nn
+import torch.nn.functional as torch_fn
+
+
+class _LogisticRegressionParams(
+    _ProbabilisticClassifierParams,
+    HasMaxIter,
+    HasFitIntercept,
+    HasTol,
+    HasWeightCol,
+    HasNumTrainWorkers,
+    HasBatchSize,
+    HasLearningRate,
+    HasMomentum,
+):
+    """
+    Params for :py:class:`LogisticRegression` and :py:class:`LogisticRegressionModel`.
+
+    .. versionadded:: 3.0.0
+    """
+
+    def __init__(self, *args: Any):
+        super(_LogisticRegressionParams, self).__init__(*args)
+        self._setDefault(
+            maxIter=100,
+            tol=1e-6,
+            numTrainWorkers=1,
+            numBatchSize=32,
+            learning_rate=0.001,
+            momentum=0.9,
+        )
+
+
+class _Net(torch_nn.Module):
+    def __init__(self, num_features, num_labels, bias) -> None:
+        super(_Net, self).__init__()
+
+        if num_labels > 2:
+            self.is_multinomial = True
+            output_dim = num_labels
+        else:
+            self.is_multinomial = False
+            output_dim = 1
+
+        self.fc = torch_nn.Linear(num_features, output_dim, bias=bias)
+
+    def forward(self, x: Any) -> Any:
+        output = self.fc(x)
+        if not self.is_multinomial:
+            output = torch.sigmoid(output).squeeze()
+        return output
+
+
+def _train_worker_fn(
+    num_samples_per_worker,
+    num_features,
+    batch_size,
+    max_iter,
+    num_labels,
+    learning_rate,
+    momentum,
+    fit_intercept,
+):
+    from pyspark.ml.torch.distributor import get_spark_partition_data_loader
+    from torch.nn.parallel import DistributedDataParallel as DDP
+    import torch.distributed
+    import torch.optim as optim
+
+    torch.distributed.init_process_group("gloo")
+
+    ddp_model = DDP(_Net(
+        num_features=num_features,
+        num_labels=num_labels,
+        bias=fit_intercept
+    ))
+
+    if num_labels > 2:
+        loss_fn = torch_nn.CrossEntropyLoss()
+    else:
+        loss_fn = torch_nn.BCELoss()
+
+    optimizer = optim.SGD(ddp_model.parameters(), lr=learning_rate, momentum=momentum)
+    data_loader = get_spark_partition_data_loader(num_samples_per_worker, batch_size)
+    for i in range(max_iter):
+        ddp_model.train()
+        for x, target in data_loader:
+            optimizer.zero_grad()
+            output = ddp_model(x)
+            loss_fn(output, target).backward()
+            optimizer.step()
+
+        # TODO: early stopping
+        #  When each epoch ends, computes loss on validation dataset and compare
+        #  current epoch validation loss with last epoch validation loss, if
+        #  less than provided `tol`, stop training.
+
+    if torch.distributed.get_rank() == 0:
+        return ddp_model.module.state_dict()
+
+    return None
+
+
+class LogisticRegression(Estimator["LogisticRegressionModel"], _LogisticRegressionParams):
+
+    def _fit(self, dataset: DataFrame) -> "LogisticRegressionModel":
+
+        num_train_workers = self.getNumTrainWorkers()
+        batch_size = self.getBatchSize()
+
+        # Q: Shall we persist the shuffled dataset ?
+        # shuffling results are already cached
+        dataset = (
+            dataset
+            .select(self.getFeaturesCol(), self.getLabelCol())
+            .repartition(num_train_workers)
+            .persist()

Review Comment:
   The dataset is used only twice?
   1. to compute `num_rows` and `num_labels`;
   2. to write to local Arrow files.
   
   If so, I don't think we need to cache it.






[GitHub] [spark] jaceklaskowski commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "jaceklaskowski (via GitHub)" <gi...@apache.org>.
jaceklaskowski commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1166887971


##########
python/pyspark/ml/torch/tests/test_data_loader.py:
##########
@@ -0,0 +1,131 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import contextlib
+import os
+import shutil
+from six import StringIO
+import stat
+import subprocess
+import sys
+import time
+import tempfile
+import threading
+import numpy as np
+from typing import Callable, Dict, Any
+import unittest
+from unittest.mock import patch
+
+have_torch = True

Review Comment:
   nit: Should this really be between `import`s?



##########
python/pyspark/ml/torch/tests/test_data_loader.py:
##########
@@ -0,0 +1,131 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import contextlib
+import os
+import shutil
+from six import StringIO
+import stat
+import subprocess
+import sys
+import time
+import tempfile
+import threading
+import numpy as np
+from typing import Callable, Dict, Any
+import unittest
+from unittest.mock import patch
+
+have_torch = True
+try:
+    import torch  # noqa: F401
+except ImportError:
+    have_torch = False
+
+from pyspark import SparkConf, SparkContext
+from pyspark.ml.torch.distributor import TorchDistributor, get_gpus_owned, get_spark_partition_data_loader
+from pyspark.ml.torch.torch_run_process_wrapper import clean_and_terminate, check_parent_alive
+from pyspark.sql import SparkSession
+from pyspark.testing.utils import SPARK_HOME
+from pyspark.ml.linalg import Vectors
+
+
+@unittest.skipIf(not have_torch, "torch is required")
+class TorchDistributorDataLoaderUnitTests(unittest.TestCase):
+    def setUp(self) -> None:
+        self.spark = (
+            SparkSession.builder
+            .master("local[1]")

Review Comment:
   nit: is the indentation right?



##########
python/pyspark/ml/torch/tests/test_data_loader.py:
##########
@@ -0,0 +1,131 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import contextlib
+import os
+import shutil
+from six import StringIO
+import stat
+import subprocess
+import sys
+import time
+import tempfile
+import threading
+import numpy as np
+from typing import Callable, Dict, Any
+import unittest
+from unittest.mock import patch
+
+have_torch = True
+try:
+    import torch  # noqa: F401
+except ImportError:
+    have_torch = False
+
+from pyspark import SparkConf, SparkContext
+from pyspark.ml.torch.distributor import TorchDistributor, get_gpus_owned, get_spark_partition_data_loader
+from pyspark.ml.torch.torch_run_process_wrapper import clean_and_terminate, check_parent_alive
+from pyspark.sql import SparkSession
+from pyspark.testing.utils import SPARK_HOME
+from pyspark.ml.linalg import Vectors
+
+
+@unittest.skipIf(not have_torch, "torch is required")

Review Comment:
   I think we need a (stacked) decorator to do the torch check and skip the test if torch is not available.
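
   A possible shape for such a decorator (illustrative; `require_torch` is a hypothetical helper, not an existing pyspark utility):

   ```
   import unittest

   try:
       import torch  # noqa: F401

       have_torch = True
   except ImportError:
       have_torch = False

   # Reusable decorator that can be stacked on test classes or methods.
   require_torch = unittest.skipIf(not have_torch, "torch is required")


   @require_torch
   class SomeTorchTests(unittest.TestCase):
       def test_torch_available(self) -> None:
           self.assertTrue(have_torch)
   ```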



##########
python/pyspark/ml/torch/tests/test_data_loader.py:
##########
@@ -0,0 +1,131 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import contextlib
+import os
+import shutil
+from six import StringIO
+import stat
+import subprocess
+import sys
+import time
+import tempfile
+import threading
+import numpy as np
+from typing import Callable, Dict, Any
+import unittest
+from unittest.mock import patch
+
+have_torch = True
+try:
+    import torch  # noqa: F401
+except ImportError:
+    have_torch = False
+
+from pyspark import SparkConf, SparkContext
+from pyspark.ml.torch.distributor import TorchDistributor, get_gpus_owned, get_spark_partition_data_loader
+from pyspark.ml.torch.torch_run_process_wrapper import clean_and_terminate, check_parent_alive
+from pyspark.sql import SparkSession
+from pyspark.testing.utils import SPARK_HOME
+from pyspark.ml.linalg import Vectors
+
+
+@unittest.skipIf(not have_torch, "torch is required")
+class TorchDistributorDataLoaderUnitTests(unittest.TestCase):
+    def setUp(self) -> None:
+        self.spark = (
+            SparkSession.builder
+            .master("local[1]")
+            .config("spark.default.parallelism", "1")

Review Comment:
   Is this required, given `master("local[1]")`?



##########
python/pyspark/mlv2/classification/base.py:
##########
@@ -0,0 +1,45 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from pyspark.mlv2.base import _PredictorParams
+
+from pyspark.ml.param.shared import (
+    HasRawPredictionCol,
+    HasProbabilityCol,
+    HasThresholds
+)
+
+
+class _ClassifierParams(HasRawPredictionCol, _PredictorParams):
+    """
+    Classifier Params for classification tasks.
+
+    .. versionadded:: 3.0.0

Review Comment:
   nit: 3.5.0 or even higher?



##########
python/pyspark/mlv2/classification/logistic_regression.py:
##########
@@ -0,0 +1,190 @@
+import numpy as np

Review Comment:
   nit: The ASF copyright header is missing.





[GitHub] [spark] zhengruifeng commented on a diff in pull request #40748: [WIP][SPARK-43097] New pyspark ML logistic regression estimator implemented on top of distributor

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng commented on code in PR #40748:
URL: https://github.com/apache/spark/pull/40748#discussion_r1163502035


##########
python/pyspark/mlv2/base.py:
##########
@@ -0,0 +1,426 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from abc import ABCMeta, abstractmethod
+
+import copy
+import threading
+
+from typing import (
+    Any,
+    Callable,
+    Generic,
+    Iterator,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    TypeVar,
+    Union,
+    cast,
+    overload,
+    TYPE_CHECKING,
+)
+
+from pyspark import since
+from pyspark.ml.param import P
+from pyspark.ml.common import inherit_doc
+from pyspark.ml.param.shared import (
+    HasInputCol,
+    HasOutputCol,
+    HasLabelCol,
+    HasFeaturesCol,
+    HasPredictionCol,
+)
+from pyspark.sql.dataframe import DataFrame

Review Comment:
   Will we have a `GenericDataFrame`? @HyukjinKwon 
   
   I feel this should be something like:
   ```
   if not is_remote():
       from pyspark.sql.dataframe import DataFrame
   else:
       from pyspark.sql.connect.dataframe import DataFrame
   ```


