Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/08/25 08:47:22 UTC

[jira] [Resolved] (SPARK-17193) HadoopRDD NPE at DEBUG log level when getLocationInfo == null

     [ https://issues.apache.org/jira/browse/SPARK-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17193.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 2.1.0
                   2.0.1

Issue resolved by pull request 14760
[https://github.com/apache/spark/pull/14760]

> HadoopRDD NPE at DEBUG log level when getLocationInfo == null
> -------------------------------------------------------------
>
>                 Key: SPARK-17193
>                 URL: https://issues.apache.org/jira/browse/SPARK-17193
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.0.0
>            Reporter: Sean Owen
>            Assignee: Sean Owen
>            Priority: Trivial
>             Fix For: 2.0.1, 2.1.0
>
>
> When I set the log level to "DEBUG" in one of my apps that reads from Parquet, I notice several NullPointerExceptions logged from HadoopRDD.getPreferredLocations. 
> It doesn't affect execution, since it just results in "no preferred locations". It happens when InputSplitWithLocationInfo.getLocationInfo returns null, which it may; the code dereferences the result without checking.
> It's cleaner to check for null directly (and maybe tighten up the code slightly) and avoid polluting the log, even though the messages only appear at DEBUG level. No big deal, but it was enough of an annoyance while debugging something else that it's probably worth zapping.
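>
> (For context, the kind of guard the description calls for could look like the Scala sketch below. This is not the actual patch from pull request 14760; the helper name is hypothetical, and only the Hadoop InputSplitWithLocationInfo/SplitLocationInfo types and methods are real.)
>
>     import org.apache.hadoop.mapred.{InputSplit, InputSplitWithLocationInfo}
>
>     // Hypothetical helper: derive preferred locations for a split without
>     // dereferencing a possibly-null getLocationInfo() result.
>     def splitLocations(split: InputSplit): Seq[String] = split match {
>       case s: InputSplitWithLocationInfo =>
>         // getLocationInfo may return null; Option(...) maps that to None.
>         Option(s.getLocationInfo) match {
>           case Some(infos) => infos.map(_.getLocation).toSeq
>           case None        => Seq.empty   // i.e. "no preferred locations"
>         }
>       case _ => Seq.empty
>     }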



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
