Posted to dev@spark.apache.org by Joe Wass <jw...@crossref.org> on 2015/02/18 16:39:45 UTC

Issue SPARK-5008 (persistent-hdfs broken)

I've recently run into problems caused by ticket SPARK-5008

https://issues.apache.org/jira/browse/SPARK-5008

This looks like quite a serious regression in 1.2.0: it effectively makes
persistent-hdfs unusable. The persistent-hdfs config points to the wrong part
of the filesystem, so HDFS comes up on the wrong volume (and therefore with
the wrong capacity). I'm working around it with symlinks, but it's not ideal.
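
For anyone hitting the same thing, the symlink workaround amounts to roughly
the sketch below. The paths are illustrative only (the /vol vs /vol0 mismatch
is just an example of the kind of thing the ticket describes) -- substitute
whatever directory the generated config expects and wherever your volume with
the capacity is actually mounted:

    import os

    # Illustrative paths only: where the persistent-hdfs config points vs.
    # where the data should actually live (the volume with the capacity).
    expected_dir = "/vol/persistent-hdfs"    # hypothetical config path
    actual_dir = "/vol0/persistent-hdfs"     # hypothetical real mount point

    if not os.path.isdir(actual_dir):
        os.makedirs(actual_dir)

    # Point the expected path at the real location so HDFS comes up on the
    # right volume without editing the generated config. If the expected
    # path already exists as a real directory, move it aside first.
    if not os.path.exists(expected_dir):
        os.symlink(actual_dir, expected_dir)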

It doesn't look like it's scheduled to be fixed in any particular release.
Is there any indication of whether this is on anyone's todo list?

If no-one's looking into it then I could try having a look myself, but I'm
not (yet) familiar with the internals. From the discussion on the ticket it
doesn't look like a huge fix.

Cheers

Joe