Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/05/25 07:22:00 UTC
[jira] [Commented] (SPARK-31763) DataFrame.inputFiles() not Available
[ https://issues.apache.org/jira/browse/SPARK-31763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115783#comment-17115783 ]
Hyukjin Kwon commented on SPARK-31763:
--------------------------------------
Are you interested in opening a PR for this?
> DataFrame.inputFiles() not Available
> ------------------------------------
>
> Key: SPARK-31763
> URL: https://issues.apache.org/jira/browse/SPARK-31763
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.4.5
> Reporter: Felix Kizhakkel Jose
> Priority: Major
>
> I have been trying to list the input files that compose my DataSet using *PySpark*:
> spark_session.read
> .format(sourceFileFormat)
> .load(S3A_FILESYSTEM_PREFIX + bucket + File.separator + sourceFolderPrefix)
> *.inputFiles();*
> but I get an exception saying the inputFiles attribute is not present. I was, however, able to get this functionality with the Spark Java API.
> *So is this something missing in PySpark?*
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org