Posted to issues@spark.apache.org by "Manoj Kumar (JIRA)" <ji...@apache.org> on 2015/07/02 09:41:04 UTC

[jira] [Commented] (SPARK-8706) Implement Pylint / Prospector checks for PySpark

    [ https://issues.apache.org/jira/browse/SPARK-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611611#comment-14611611 ] 

Manoj Kumar commented on SPARK-8706:
------------------------------------

Sorry if this sounds dumb, but the present code downloads pep8 as a single script. Pylint, however, is a full repository, which in turn has two dependencies of its own. What is the preferred way to handle this in Spark?
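For reference, one possible approach (not necessarily what Spark settled on; the paths and the pinned version here are illustrative) is to pip-install pylint and its dependencies into a project-local directory, so the download-a-single-script pattern used for pep8 is not needed:

```shell
# Hypothetical sketch: install pylint plus its dependencies (astroid,
# logilab-common) into a sandbox directory inside the repo, so nothing
# touches the system site-packages. Version pin is illustrative.
pip install "pylint==1.4.4" --target="$SPARK_ROOT/python/.lint-tools"

# Run the sandboxed copy by putting it on PYTHONPATH:
PYTHONPATH="$SPARK_ROOT/python/.lint-tools" \
  python -m pylint --rcfile="$SPARK_ROOT/python/pylintrc" pyspark
```

The advantage over fetching individual scripts is that pip resolves the transitive dependencies automatically, and the sandbox directory can simply be added to .gitignore.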

> Implement Pylint / Prospector checks for PySpark
> ------------------------------------------------
>
>                 Key: SPARK-8706
>                 URL: https://issues.apache.org/jira/browse/SPARK-8706
>             Project: Spark
>          Issue Type: New Feature
>          Components: Project Infra, PySpark
>            Reporter: Josh Rosen
>
> It would be nice to implement Pylint / Prospector (https://github.com/landscapeio/prospector) checks for PySpark. As with the style checker rules, I imagine that we'll want to roll out new rules gradually in order to avoid a mass refactoring commit.
> For starters, we should create a pull request that introduces the harness for running the linters, add a configuration file which enables only the lint checks that currently pass, and install the required dependencies on Jenkins. Once we've done this, we can open a series of smaller followup PRs to gradually enable more linting checks and to fix existing violations.
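The gradual rollout described above could, for example, start from a configuration file that disables all checks and then whitelists only those that currently pass; later follow-up PRs would extend the enabled list. A minimal sketch of such a pylintrc (the enabled message IDs here are illustrative, not the actual passing set):

```ini
[MESSAGES CONTROL]
# Disable everything by default, then re-enable only the checks that
# already pass across the codebase. Follow-up PRs grow this list as
# existing violations are fixed.
disable=all
enable=E0602,E1101,W0611
```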



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org