Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/05/28 00:53:00 UTC
[jira] [Resolved] (SPARK-31763) DataFrame.inputFiles() not Available
[ https://issues.apache.org/jira/browse/SPARK-31763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-31763.
----------------------------------
Fix Version/s: 3.1.0
Resolution: Fixed
Issue resolved by pull request 28652
[https://github.com/apache/spark/pull/28652]
> DataFrame.inputFiles() not Available
> ------------------------------------
>
> Key: SPARK-31763
> URL: https://issues.apache.org/jira/browse/SPARK-31763
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.4.5
> Reporter: Felix Kizhakkel Jose
> Assignee: Rakesh Raushan
> Priority: Major
> Fix For: 3.1.0
>
>
> I have been trying to list the input files that compose my DataSet using *PySpark*:
> spark_session.read
>     .format(sourceFileFormat)
>     .load(S3A_FILESYSTEM_PREFIX + bucket + File.separator + sourceFolderPrefix)
>     *.inputFiles();*
> but I get an exception saying the inputFiles attribute is not present. I was able to use this functionality with Spark's Java API.
> *So is this something missing in PySpark?*
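The resolution adds DataFrame.inputFiles() to PySpark by delegating to the underlying JVM Dataset through py4j, the same pattern PySpark uses for most DataFrame methods. Below is a minimal sketch of that delegation pattern; FakeJVMDataFrame is an illustrative stand-in for the real py4j proxy object, not part of Spark's API:

```python
class FakeJVMDataFrame:
    """Illustrative stand-in for the py4j proxy of the JVM Dataset."""

    def inputFiles(self):
        # On the JVM side this returns a String[]; py4j surfaces it
        # to Python as an array-like sequence.
        return ["s3a://bucket/prefix/part-00000.parquet",
                "s3a://bucket/prefix/part-00001.parquet"]


class DataFrame:
    """Minimal wrapper mirroring the delegation pattern of the fix."""

    def __init__(self, jdf):
        self._jdf = jdf  # handle to the JVM-side Dataset

    def inputFiles(self):
        # Convert the JVM array-like result into a plain Python list,
        # matching the Scala/Java Dataset.inputFiles() contract.
        return list(self._jdf.inputFiles())


df = DataFrame(FakeJVMDataFrame())
print(df.inputFiles())
```

From Spark 3.1.0 onward, calling `df.inputFiles()` on a real PySpark DataFrame returns a best-effort list of the files backing it, matching the long-standing Java/Scala behavior.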
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org