Posted to issues@spark.apache.org by "Sam Steingold (JIRA)" <ji...@apache.org> on 2015/05/28 15:59:25 UTC
[jira] [Issue Comment Deleted] (SPARK-7898) pyspark merges stderr into stdout
[ https://issues.apache.org/jira/browse/SPARK-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sam Steingold updated SPARK-7898:
---------------------------------
Comment: was deleted
(was: PySpark's stdio is neither here nor there.
The problem is that {{hadoop}}, when running under Spark, redirects its usual {{stderr}} into its own {{stdout}}.
When I run {{hadoop}} under {{time}}:
{code}
from subprocess import Popen
with open("out","w") as out:
    with open("err","w") as err:
        p = Popen(['/usr/bin/time','hadoop','fs','-text',"/foo/bar/baz.bz2"],
                  stdin=None,stdout=out,stderr=err)
        print p.wait()
{code}
the {{time}} output goes to {{err}} while {{hadoop}} logs, which usually go to {{stderr}}, end up in the {{out}} file together with the contents of my {{baz}} file.)
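The routing described above can be checked without {{hadoop}} at all. A minimal Python 3 sketch (the throwaway Python child standing in for {{hadoop}}, and the "DATA"/"LOG" strings, are illustrative assumptions) showing that {{Popen}} sends a child's stdout and stderr to whichever file objects it is handed:

```python
import os
import subprocess
import sys
import tempfile

# Child process that writes one line to each stream, standing in for hadoop:
# data on stdout, a log line on stderr.
child = 'import sys; sys.stdout.write("DATA\\n"); sys.stderr.write("LOG\\n")'

with tempfile.TemporaryDirectory() as d:
    out_path = os.path.join(d, "out")
    err_path = os.path.join(d, "err")
    with open(out_path, "w") as out, open(err_path, "w") as err:
        # Same call shape as in the comment above: explicit stdout/stderr files.
        rc = subprocess.call([sys.executable, "-c", child],
                             stdin=None, stdout=out, stderr=err)
    with open(out_path) as f:
        out_text = f.read()
    with open(err_path) as f:
        err_text = f.read()

print(rc, out_text.strip(), err_text.strip())
```

With a well-behaved child, the data line lands in {{out}} and the log line in {{err}}, exactly the split the shell redirection in the issue below produces.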
> pyspark merges stderr into stdout
> ---------------------------------
>
> Key: SPARK-7898
> URL: https://issues.apache.org/jira/browse/SPARK-7898
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.3.0
> Reporter: Sam Steingold
>
> When I type
> {code}
> hadoop fs -text /foo/bar/baz.bz2 2>err 1>out
> {code}
> I get two non-empty files: {{err}} with
> {code}
> 2015-05-26 15:33:49,786 INFO [main] bzip2.Bzip2Factory (Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & initialized native-bzip2 library system-native
> 2015-05-26 15:33:49,789 INFO [main] compress.CodecPool (CodecPool.java:getDecompressor(179)) - Got brand-new decompressor [.bz2]
> {code}
> and {{out}} with the content of the file (as expected).
> When I call the same command from Python (2.6):
> {code}
> from subprocess import Popen
> with open("out","w") as out:
>     with open("err","w") as err:
>         p = Popen(['hadoop','fs','-text',"/foo/bar/baz.bz2"],
>                   stdin=None,stdout=out,stderr=err)
>         print p.wait()
> {code}
> I get the exact same (correct) behavior.
> *However*, when I run the same code under *PySpark* (or using {{spark-submit}}), I get an *empty* {{err}} file and the {{out}} file starts with the log messages above (and then it contains the actual data).
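One way to pin down whether this merge is happening is a diagnostic sketch along these lines (the log-line regex, modeled on the lines quoted above, and the helper name are assumptions, not part of the report): capture both streams separately and check whether log4j-style timestamped lines leak onto stdout.

```python
import re
import subprocess

# Matches log4j-style lines like the ones quoted in the report, e.g.
# "2015-05-26 15:33:49,786 INFO [main] ..." (pattern is an assumption).
LOG_LINE = re.compile(rb"^\d{4}-\d{2}-\d{2} .* (INFO|WARN|ERROR) ", re.M)

def stdout_has_log_lines(cmd):
    """Run cmd with separate pipes; report whether log lines appear on stdout."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _err = p.communicate()
    return bool(LOG_LINE.search(out))
```

Run against {{['hadoop','fs','-text',...]}} from a plain shell versus from inside a PySpark job, this would be expected to return False in the first case and True in the second if the behavior reported here is reproduced.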
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org