Posted to issues@spark.apache.org by "Marco Gaido (JIRA)" <ji...@apache.org> on 2018/08/17 08:34:00 UTC

[jira] [Resolved] (SPARK-25138) Spark Shell should show the Scala prompt after initialization is complete

     [ https://issues.apache.org/jira/browse/SPARK-25138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marco Gaido resolved SPARK-25138.
---------------------------------
    Resolution: Duplicate

> Spark Shell should show the Scala prompt after initialization is complete
> -------------------------------------------------------------------------
>
>                 Key: SPARK-25138
>                 URL: https://issues.apache.org/jira/browse/SPARK-25138
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.4.0
>            Reporter: Kris Mok
>            Priority: Minor
>
> In previous Spark versions, the Spark Shell only showed the Scala prompt *after* Spark had initialized, i.e. by the time the user could enter code, the Spark context, Spark session, etc. had all completed initialization, so {{sc}} and {{spark}} were ready to use.
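> For concreteness, here is a minimal sketch of what "ready to use" means at the prompt (these are standard SparkContext/SparkSession calls, nothing specific to this ticket):
> {code:scala}
> // Both lines assume initialization has already finished, since the
> // shell binds 'sc' and 'spark' during startup.
> sc.setLogLevel("WARN")    // SparkContext, bound as 'sc'
> spark.range(1).show()     // SparkSession, bound as 'spark'
> {code}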
> In the current Spark master branch (to become Spark 2.4.0), the Scala prompt shows up immediately, while Spark is still initializing in the background. It's very easy for the user to assume the shell is ready and start typing, only to find that Spark isn't ready yet and that Spark's initialization logs get in the way of the input. This new behavior is rather annoying from a usability perspective. A sketch of the desired ordering follows below.
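> A rough sketch (not the actual spark-shell code) of the ordering this ticket asks for: finish Spark initialization eagerly, and only then accept user input. The app name below is chosen for this example.
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> // Build the session eagerly, before handing control to the user.
> val spark = SparkSession.builder()
>   .master("local[*]")
>   .appName("eager-init-demo")
>   .getOrCreate()
> val sc = spark.sparkContext
> println(s"Spark context available as 'sc' (app id = ${sc.applicationId}).")
> // ... only now would the REPL start reading user input ...
> {code}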
> A typical startup of the Spark Shell in current master:
> {code:none}
> $ bin/spark-shell
> 18/08/16 23:18:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.4.0-SNAPSHOT
>       /_/
>          
> Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> spark.range(1)Spark context Web UI available at http://localhost:4040
> Spark context available as 'sc' (master = local[*], app id = local-1534486692744).
> Spark session available as 'spark'.
> .show
> +---+
> | id|
> +---+
> |  0|
> +---+
> scala> 
> {code}
> Can you tell that it was running {{spark.range(1).show}}? The initialization messages were printed right in the middle of the user's input.
> In contrast, previous versions of the Spark Shell would wait for Spark to fully initialize before showing the prompt:
> {code:none}
> $ bin/spark-shell
> 18/08/16 23:20:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://10.0.0.76:4040
> Spark context available as 'sc' (master = local[*], app id = local-1534486813159).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.3-SNAPSHOT
>       /_/
>          
> Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> spark.range(1).show
> +---+
> | id|
> +---+
> |  0|
> +---+
> scala> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org