Posted to issues@spark.apache.org by "John-Michael Reed (JIRA)" <ji...@apache.org> on 2016/07/07 20:35:10 UTC

[jira] [Created] (SPARK-16428) Spark file system watcher not working on Windows

John-Michael Reed created SPARK-16428:
-----------------------------------------

             Summary: Spark file system watcher not working on Windows
                 Key: SPARK-16428
                 URL: https://issues.apache.org/jira/browse/SPARK-16428
             Project: Spark
          Issue Type: Bug
          Components: Examples, Input/Output, Spark Core, Windows
    Affects Versions: 1.6.2
         Environment: Ubuntu 15.10 64 bit,  Windows 7 Enterprise 64 bit
            Reporter: John-Michael Reed
            Priority: Blocker


Two people tested Apache Spark on their computers, one running Ubuntu 15.10 and the other Windows 7...

[Screenshot of the Spark download page - http://i.stack.imgur.com/z1oqu.png]

We downloaded the version of Spark prebuilt for Hadoop 2.6, went to the folder /spark-1.6.2-bin-hadoop2.6/, created a "tmp" directory there, and ran:

$ bin/run-example org.apache.spark.examples.streaming.HdfsWordCount tmp

I then added two arbitrarily named files, content1 and content2dssdgdg, to that "tmp" directory.

-------------------------------------------
Time: 1467921704000 ms
-------------------------------------------
(content1,1)
(content2dssdgdg,1)

-------------------------------------------
Time: 1467921706000 ms

Spark detected those files and produced the terminal output above on my Ubuntu 15.10 laptop, but no such output appeared on my colleague's Windows 7 laptop.
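For context on what "detection" means here (a hedged description, not a quote of Spark's source): the file stream behind HdfsWordCount does not use an OS file-watch API such as Java's WatchService; it polls the monitored directory each batch interval and selects files by modification time, which is one reason Windows timestamp behavior can matter. A minimal sketch of that polling idea in plain Scala, with all names hypothetical:

```scala
import java.io.{File, PrintWriter}

// Hypothetical sketch of poll-based new-file detection, similar in spirit to
// what Spark's file stream does each batch interval. Not Spark code.
object PollingWatcher {
  // Return files in `dir` modified at or after `since` (milliseconds since the epoch).
  def newFiles(dir: File, since: Long): Seq[File] =
    Option(dir.listFiles()).getOrElse(Array.empty[File])
      .filter(f => f.isFile && f.lastModified() >= since)
      .toSeq

  def main(args: Array[String]): Unit = {
    val dir = new File(System.getProperty("java.io.tmpdir"), s"watch-demo-${System.nanoTime()}")
    dir.mkdirs()
    val f = new File(dir, "content1")
    val out = new PrintWriter(f)
    out.write("hello"); out.close()
    // since = 0 accepts any modification time, so the file we just wrote is found
    println(newFiles(dir, 0L).map(_.getName))
    f.delete(); dir.delete()
  }
}
```

If polling by modification time is indeed the mechanism, then coarse or inconsistent file timestamps on the Windows machine could make freshly added files fall outside the batch window and never be reported.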

This is preventing us from getting work done with Spark.

Link: http://stackoverflow.com/questions/38254405/spark-file-system-watcher-not-working-on-windows
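One thing worth ruling out (a hedged suggestion based on the documented semantics of Spark's file streams, not a confirmed fix for this bug): files are expected to appear in the monitored directory atomically, typically by being written elsewhere and then moved in, since a file created and written in place can be picked up partially or missed. A plain-Scala sketch of that staging pattern, with directory names hypothetical:

```scala
import java.nio.file.{Files, Path, StandardCopyOption}

// Hedged workaround sketch: stage the file outside the watched directory,
// then move it in atomically so it appears fully written in one step.
object AtomicDrop {
  def drop(staging: Path, watched: Path, name: String, contents: String): Path = {
    val tmp = Files.createTempFile(staging, "part-", ".tmp")
    Files.write(tmp, contents.getBytes("UTF-8"))
    // ATOMIC_MOVE requires `staging` and `watched` to be on the same filesystem
    Files.move(tmp, watched.resolve(name), StandardCopyOption.ATOMIC_MOVE)
  }

  def main(args: Array[String]): Unit = {
    val staging = Files.createTempDirectory("staging-")
    val watched = Files.createTempDirectory("watched-") // stands in for the "tmp" directory
    val placed  = drop(staging, watched, "content1", "content1")
    println(Files.readAllLines(placed))
  }
}
```

If files dropped this way are still not detected on Windows while the same steps work on Ubuntu, that would point at the watcher itself rather than at how the test files were created.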



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org