Posted to issues@spark.apache.org by "Michael Nazario (JIRA)" <ji...@apache.org> on 2015/05/27 17:35:17 UTC

[jira] [Created] (SPARK-7899) PySpark sql/tests breaks pylint validation

Michael Nazario created SPARK-7899:
--------------------------------------

             Summary: PySpark sql/tests breaks pylint validation
                 Key: SPARK-7899
                 URL: https://issues.apache.org/jira/browse/SPARK-7899
             Project: Spark
          Issue Type: Bug
          Components: PySpark, Tests
    Affects Versions: 1.4.0
            Reporter: Michael Nazario


The pyspark.sql.types module is dynamically renamed to "types" from "_types", which breaks pylint validation.

From [~justin.uang] below:

In commit 04e44b37 (the migration to Python 3), pyspark/sql/types.py was renamed to pyspark/sql/_types.py, and some magic in pyspark/sql/__init__.py then dynamically renames the module back to types. I imagine this was done because of some naming conflict with Python 3, but what was the error that showed up?

The reason I'm asking about this is that it's breaking pylint, since pylint can no longer statically find the module. I also tried importing the package in an init-hook so that __init__ would be run, but that isn't what the discovery mechanism uses; I imagine it just crawls the directory structure.

One way to work around this would be something akin to this (http://stackoverflow.com/questions/9602811/how-to-tell-pylint-to-ignore-certain-imports), where I would have to create a fake module, but users of that module would probably lose a ton of pylint features, and it's pretty hacky.
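For reference, the pylint-side knobs a workaround of this shape usually touches look like the following (an illustrative sketch against pylint's documented options, not a configuration from this thread): init-hook runs arbitrary Python before analysis, which is what the unsuccessful init-hook attempt above relied on, while ignored-modules suppresses member checks for a module pylint cannot resolve statically.

```ini
; .pylintrc sketch (illustrative only)
[MASTER]
; Runs before checking starts; does not affect file-based module discovery:
init-hook='import pyspark.sql'

[TYPECHECK]
; Suppress no-member/import warnings for the module pylint cannot find:
ignored-modules=pyspark.sql.types
```

The ignored-modules route silences the errors rather than fixing discovery, which is why it shares the drawbacks of the fake-module approach: pylint simply stops checking anything about that module.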



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org