Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/01 22:09:47 UTC

[GitHub] srowen commented on issue #23715: [SPARK-26803][PYTHON] Add sbin subdirectory to pyspark

URL: https://github.com/apache/spark/pull/23715#issuecomment-459885159
 
 
   Yep, I'm pretty sure now that the pip package includes all those jars so you can run locally; that's why 'pyspark' does anything at all without a cluster. Local mode is just for testing and experiments; Spark doesn't have much point on one machine for anything 'real'. As a result, I don't think running a history server with local execution is a reasonable use case.
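   
   As a rough illustration of what "run locally" means here, assuming nothing more than a plain pip install of pyspark (the app name below is arbitrary, just for the sketch):
   
       # Minimal sketch: pip-installed pyspark in local mode, which only
       # works because the wheel bundles the Spark jars.
       from pyspark.sql import SparkSession
   
       spark = (SparkSession.builder
                .master("local[*]")        # no cluster, just local threads
                .appName("pip-local-check")
                .getOrCreate())
   
       print(spark.range(5).count())       # prints 5
       spark.stop()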
   
   Maybe more to the point: those scripts don't work without a Spark distro; they expect SPARK_HOME to be set or to be run from within an unpacked distribution. Maybe you'll shock me again by saying it really does happen to work with the pip package layout too, but I've never understood that to be supported, intended, or actively maintained.
   
   Does the script even work when packaged this way?
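   
   For what it's worth, here is a rough way to check, assuming the pip layout keeps everything under the pyspark package directory and that start-history-server.sh is the script in question (both are assumptions on my part, not verified here):
   
       # Hedged sketch: does the pip-installed layout even contain the sbin
       # script, and is SPARK_HOME set the way the scripts expect?
       import os
       import pyspark
   
       pkg_dir = os.path.dirname(pyspark.__file__)   # e.g. .../site-packages/pyspark
       script = os.path.join(pkg_dir, "sbin", "start-history-server.sh")
   
       print("SPARK_HOME:", os.environ.get("SPARK_HOME", "<not set>"))
       print("sbin script present:", os.path.exists(script))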

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org