Posted to issues@spark.apache.org by "M. Le Bihan (JIRA)" <ji...@apache.org> on 2018/12/06 13:41:00 UTC

[jira] [Updated] (SPARK-26296) Spark build over Java and not over Scala, offering Scala as an option over Spark

     [ https://issues.apache.org/jira/browse/SPARK-26296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

M. Le Bihan updated SPARK-26296:
--------------------------------
    Description: 
I do not use _Scala_ when I program _Spark_, only plain _Java_. I believe those using _PySpark_ do not use it either. 
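
For illustration, this is the kind of job I mean, written entirely with the Java API (a minimal sketch; the class, application and file names are only examples):

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PlainJavaJob {
    public static void main(String[] args) {
        // The whole pipeline is written in plain Java, without a single line of Scala.
        SparkSession spark = SparkSession.builder()
                .appName("plain-java-job")
                .master("local[*]")
                .getOrCreate();

        Dataset<Row> people = spark.read().json("people.json");
        people.filter("age > 18").groupBy("city").count().show();

        spark.stop();
        // Yet scala-library and the Scala-compiled Spark jars are still required on the classpath.
    }
}
{code}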

 

But _Spark_ has been built over _Scala_ instead of plain _Java_, and this causes trouble, especially when it is time to upgrade the JDK. We are waiting to use JDK 11, and _Scala_ is still holding _Spark_ back on the previous version of the JDK.

_Big Data_ programming should not force developers to get by with _Scala_ when it is not the language they have chosen.

 

Having a _Spark_ without _Scala_, just as it is possible to have a _Spark_ without _Hadoop_, would comfort me: a source of issues would disappear.
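
Today _Scala_ leaks even into pure Java projects: the Java RDD API, for example, hands back scala.Tuple2, so Java-only code still compiles against scala-library (a sketch against Spark 2.4; the names are only examples):

{code:java}
import java.util.Arrays;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class ScalaLeak {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "scala-leak");

        JavaPairRDD<String, Integer> counts = sc
                .parallelize(Arrays.asList("a", "b", "a"))
                .mapToPair(word -> new Tuple2<>(word, 1)) // scala.Tuple2 appears in plain Java code
                .reduceByKey(Integer::sum);

        for (Tuple2<String, Integer> entry : counts.collect()) {
            System.out.println(entry._1() + " -> " + entry._2());
        }

        sc.stop();
    }
}
{code}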

Providing an optional _spark-scala_ artifact would be fine, as those who do not need it would not download it. In the same move, you could return to generating standard Javadocs for the documentation of the Java classes.

 

  was:
I do not use _Scala_ when I program _Spark_, only plain _Java_. I believe those using _PySpark_ do not use it either. 

 

But Spark has been built over _Scala_ instead of plain _Java_, and this causes trouble, especially when it is time to upgrade the JDK. We are waiting to use JDK 11, and _Scala_ is still holding _Spark_ back on the previous version of the JDK.



Having a _Spark_ without _Scala_, just as it is possible to have a _Spark_ without _Hadoop_, would comfort me: a source of issues would disappear.


> Spark build over Java and not over Scala, offering Scala as an option over Spark
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-26296
>                 URL: https://issues.apache.org/jira/browse/SPARK-26296
>             Project: Spark
>          Issue Type: Wish
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: M. Le Bihan
>            Priority: Minor
>
> I do not use _Scala_ when I program _Spark_, only plain _Java_. I believe those using _PySpark_ do not use it either. 
>  
> But _Spark_ has been built over _Scala_ instead of plain _Java_, and this causes trouble, especially when it is time to upgrade the JDK. We are waiting to use JDK 11, and _Scala_ is still holding _Spark_ back on the previous version of the JDK.
> _Big Data_ programming should not force developers to get by with _Scala_ when it is not the language they have chosen.
>  
> Having a _Spark_ without _Scala_, just as it is possible to have a _Spark_ without _Hadoop_, would comfort me: a source of issues would disappear.
> Providing an optional _spark-scala_ artifact would be fine, as those who do not need it would not download it. In the same move, you could return to generating standard Javadocs for the documentation of the Java classes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org