Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2018/11/07 15:37:00 UTC

[jira] [Updated] (SPARK-25908) Remove old deprecated items in Spark 3

     [ https://issues.apache.org/jira/browse/SPARK-25908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-25908:
------------------------------
    Description: 
There are many deprecated methods and classes in Spark. They _can_ be removed in Spark 3, and for those that have been deprecated for a long time (i.e. since Spark <= 2.0), we should probably do so. This addresses most of those cases: the ones that are old and straightforward to remove (see the sketch after this list):
 - Remove some AccumulableInfo.apply() methods
 - Remove non-label-specific multiclass precision/recall/fScore in favor of accuracy
 - Remove toDegrees/toRadians in favor of degrees/radians (SparkR: only deprecated)
 - Remove approxCountDistinct in favor of approx_count_distinct (SparkR: only deprecated)
 - Remove unused Python StorageLevel constants
 - Remove Dataset.unionAll in favor of union
 - Remove unused multiclass option in libsvm parsing
 - Remove references to deprecated spark configs like spark.yarn.am.port
 - Remove TaskContext.isRunningLocally
 - Remove ShuffleMetrics.shuffle* methods
 - Remove BaseReadWrite.context in favor of session
 - Remove Column.!== in favor of =!=
 - Remove Dataset.explode
 - Remove Dataset.registerTempTable
 - Remove SQLContext.getOrCreate, setActive, clearActive, constructors
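
For illustration (not part of the issue itself), here is a minimal migration sketch in Scala exercising several of the replacements named above. The object name, example data, and view name are hypothetical, and createOrReplaceTempView is assumed as the replacement for registerTempTable:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{approx_count_distinct, col, radians}

object MigrationSketch {
  def main(args: Array[String]): Unit = {
    // SQLContext.getOrCreate / setActive / clearActive are gone; use SparkSession instead.
    val spark = SparkSession.builder()
      .appName("migration-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val a = Seq((1, 90.0), (2, 180.0)).toDF("id", "deg")
    val b = Seq((3, 270.0)).toDF("id", "deg")

    // Dataset.unionAll is removed; union has the same semantics (no de-duplication).
    val both = a.union(b)

    // Column.!== is removed; =!= is the "not equal" operator.
    val filtered = both.filter(col("id") =!= 1)

    // toRadians/toDegrees are removed; use the radians/degrees functions.
    filtered.select(col("id"), radians(col("deg")).as("rad")).show()

    // approxCountDistinct is removed; use approx_count_distinct.
    filtered.agg(approx_count_distinct(col("id"))).show()

    // registerTempTable is removed; createOrReplaceTempView replaces it.
    filtered.createOrReplaceTempView("angles")
    spark.sql("SELECT approx_count_distinct(id), avg(radians(deg)) FROM angles").show()

    spark.stop()
  }
}
{code}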

Not touched yet:
 - everything else in MLlib
 - HiveContext
 - Anything deprecated more recently than 2.0.0, generally

  was:
There are many deprecated methods and classes in Spark. They _can_ be removed in Spark 3, and for those that have been deprecated for a long time (i.e. since Spark <= 2.0), we should probably do so. This addresses most of those cases: the ones that are old and straightforward to remove:

- Remove some AccumulableInfo.apply() methods
- Remove non-label-specific multiclass precision/recall/fScore in favor of accuracy
- Remove toDegrees/toRadians in favor of degrees/radians
- Remove approxCountDistinct in favor of approx_count_distinct
- Remove unused Python StorageLevel constants
- Remove Dataset.unionAll in favor of union
- Remove unused multiclass option in libsvm parsing
- Remove references to deprecated spark configs like spark.yarn.am.port
- Remove TaskContext.isRunningLocally
- Remove ShuffleMetrics.shuffle* methods
- Remove BaseReadWrite.context in favor of session
- Remove Column.!== in favor of =!=
- Remove Dataset.explode
- Remove Dataset.registerTempTable
- Remove SQLContext.getOrCreate, setActive, clearActive, constructors

Not touched yet:

- everything else in MLlib
- HiveContext
- Anything deprecated more recently than 2.0.0, generally


> Remove old deprecated items in Spark 3
> --------------------------------------
>
>                 Key: SPARK-25908
>                 URL: https://issues.apache.org/jira/browse/SPARK-25908
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core, SQL
>    Affects Versions: 3.0.0
>            Reporter: Sean Owen
>            Assignee: Sean Owen
>            Priority: Major



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org