Posted to issues@spark.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2014/08/28 04:49:58 UTC
[jira] [Resolved] (SPARK-3265) Allow using custom ipython executable with pyspark
[ https://issues.apache.org/jira/browse/SPARK-3265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Matei Zaharia resolved SPARK-3265.
----------------------------------
Resolution: Fixed
Fix Version/s: 1.2.0 (was: 1.0.2)
Assignee: Rob O'Dwyer
Target Version/s: 1.2.0 (was: 1.0.2)
> Allow using custom ipython executable with pyspark
> --------------------------------------------------
>
> Key: SPARK-3265
> URL: https://issues.apache.org/jira/browse/SPARK-3265
> Project: Spark
> Issue Type: Improvement
> Components: PySpark
> Affects Versions: 1.0.2, 1.1.0
> Reporter: Rob O'Dwyer
> Assignee: Rob O'Dwyer
> Priority: Minor
> Fix For: 1.2.0
>
>
> Although you can make pyspark use ipython with IPYTHON=1, and also change the python executable with PYSPARK_PYTHON=..., you can't use both at the same time because it hardcodes the default ipython script.
> This change makes pyspark use the PYSPARK_PYTHON variable if present, falling back to the default ipython, similar to how the default python executable is handled.
> So you can use a custom ipython like so:
> {{PYSPARK_PYTHON=./anaconda/bin/ipython IPYTHON_OPTS="notebook" pyspark}}
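The fallback behavior described above can be sketched as a small shell function. This is a hypothetical illustration of the interpreter-selection logic, not the actual bin/pyspark script; the function name pick_python is invented for the example.

```shell
#!/usr/bin/env bash
# Sketch of the env-var fallback the issue describes: when IPYTHON=1,
# honor PYSPARK_PYTHON if set instead of hardcoding "ipython".
pick_python() {
  if [[ "$IPYTHON" = "1" ]]; then
    # custom interpreter wins; otherwise fall back to the default ipython
    echo "${PYSPARK_PYTHON:-ipython}"
  else
    # non-IPython path: same fallback pattern, defaulting to python
    echo "${PYSPARK_PYTHON:-python}"
  fi
}

IPYTHON=1 PYSPARK_PYTHON=./anaconda/bin/ipython pick_python
# prints ./anaconda/bin/ipython
```

With this pattern, the two variables compose: PYSPARK_PYTHON picks the executable and IPYTHON merely switches which default applies when it is unset.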
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org