Posted to dev@datafu.apache.org by "Russell Jurney (JIRA)" <ji...@apache.org> on 2019/03/12 01:49:00 UTC

[jira] [Comment Edited] (DATAFU-148) Setup Spark sub-project

    [ https://issues.apache.org/jira/browse/DATAFU-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790113#comment-16790113 ] 

Russell Jurney edited comment on DATAFU-148 at 3/12/19 1:48 AM:
----------------------------------------------------------------

I appreciate your work on this, and I'm trying to make it work in Python 3. There was a print statement that crashed previously. I'm using findspark to check whether the files will run locally in Python from the shell. The pyspark_utils files all do; I'm still trying to get the tests to run as well.
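
For context, this is roughly the kind of local check I mean (a minimal sketch, not the actual test harness; the app name and printed output are just placeholders):

    import findspark
    findspark.init()  # locate SPARK_HOME and put pyspark on sys.path

    from pyspark.sql import SparkSession

    # Start a local session, the same way the pyspark_utils files can be
    # exercised from a plain Python shell.
    spark = SparkSession.builder \
        .master("local[*]") \
        .appName("datafu-local-check") \
        .getOrCreate()

    print(spark.version)
    spark.stop()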

There was a Python 2.7-style print statement in init_spark_context.py.
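
Illustratively, this is the shape of the fix (the message string is a placeholder, not the exact line from init_spark_context.py):

    # The Python 2-only form below is a SyntaxError under Python 3:
    #   print "created spark context"
    # The function-call form works on both Python 3 and Python 2.7:
    print("created spark context")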

There is no note about running the tests for datafu-spark in the master README.md.

 


was (Author: russell.jurney):
I appreciate your work on this, and I'm trying to make it work in Python 3. There was a print statement that crashed previously. I'm using findspark to check whether the files will run locally in Python from the shell. The pyspark_utils files all do; I'm still trying to get the tests to run as well. Will update.

> Setup Spark sub-project
> -----------------------
>
>                 Key: DATAFU-148
>                 URL: https://issues.apache.org/jira/browse/DATAFU-148
>             Project: DataFu
>          Issue Type: New Feature
>            Reporter: Eyal Allweil
>            Assignee: Eyal Allweil
>            Priority: Major
>         Attachments: patch.diff, patch.diff
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Create a skeleton Spark sub-project for Spark code to be contributed to DataFu.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)