Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:16:11 UTC
[jira] [Resolved] (SPARK-16428) Spark file system watcher not working on Windows
[ https://issues.apache.org/jira/browse/SPARK-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-16428.
----------------------------------
Resolution: Incomplete
> Spark file system watcher not working on Windows
> ------------------------------------------------
>
> Key: SPARK-16428
> URL: https://issues.apache.org/jira/browse/SPARK-16428
> Project: Spark
> Issue Type: Bug
> Components: Examples, Input/Output, Spark Core, Windows
> Affects Versions: 1.6.2
> Environment: Ubuntu 15.10 64 bit, Windows 7 Enterprise 64 bit
> Reporter: John-Michael Reed
> Priority: Major
> Labels: bulk-closed
>
> Two people tested Apache Spark on their computers...
> [Spark Download - http://i.stack.imgur.com/z1oqu.png]
> We downloaded the version of Spark prebuilt for Hadoop 2.6, went to the folder /spark-1.6.2-bin-hadoop2.6/, created a "tmp" directory, and ran:
> $ bin/run-example org.apache.spark.examples.streaming.HdfsWordCount tmp
> I added arbitrary files content1 and content2dssdgdg to that "tmp" directory.
> -------------------------------------------
> Time: 1467921704000 ms
> -------------------------------------------
> (content1,1)
> (content2dssdgdg,1)
> -------------------------------------------
> Time: 1467921706000 ms
> Spark detected those files with the above terminal output on my Ubuntu 15.10 laptop, but not on my colleague's Windows 7 Enterprise laptop.
> This is preventing us from getting work done with Spark.
> Link: http://stackoverflow.com/questions/38254405/spark-file-system-watcher-not-working-on-windows
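For context, and despite the issue title, Spark's textFileStream does not register an OS-level file-system watcher: in the 1.x line it polls the monitored directory once per batch interval and picks up new files by comparing modification times. The sketch below illustrates that discovery step plus the word count that HdfsWordCount prints; the helper names are illustrative only, not Spark's actual API or implementation.

```python
import os
import tempfile
import time
from collections import Counter

def find_new_files(directory, last_mod_time):
    """Return files in `directory` modified after `last_mod_time`.

    Loosely mimics how Spark 1.x discovers new input files for a file
    stream (polling plus modification-time comparison). Illustrative
    sketch only -- not Spark's actual implementation.
    """
    new_files = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) > last_mod_time:
            new_files.append(path)
    return sorted(new_files)

def word_counts(paths):
    """Count whitespace-separated words across the given files."""
    counts = Counter()
    for path in paths:
        with open(path) as f:
            for line in f:
                counts.update(line.split())
    return counts

# Demo: drop a file into a "watched" directory, as in the bug report.
watched = tempfile.mkdtemp()
with open(os.path.join(watched, "content1"), "w") as f:
    f.write("content1")

new_files = find_new_files(watched, last_mod_time=0)
counts = word_counts(new_files)
print(counts)  # Counter({'content1': 1})
```

Because file discovery hinges on directory listings and modification timestamps rather than native change notifications, platform differences in how those timestamps are reported are a plausible place to look when the same run works on Linux but not on Windows.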
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org