Posted to user@spark.apache.org by Pietro Pugni <pi...@gmail.com> on 2016/10/13 14:32:00 UTC

pyspark doesn't recognize MMM dateFormat pattern in spark.read.load() for dates like 1989Dec31 and 31Dec1989

Hi there,
I posted a question on Stack Overflow at this link: http://stackoverflow.com/questions/40007972/pyspark-doesnt-recognize-mmm-dateformat-pattern-in-spark-read-load-for-dates?noredirect=1#comment67297930_40007972

I didn’t get any useful answer there, so I’m writing here hoping that someone can help me.

In short, I’m trying to read a CSV containing date columns stored using the pattern “yyyyMMMdd”. What doesn’t work for me is the “MMM” part. I’ve done some testing and discovered that it’s a localization issue. As you can read in the Stack Overflow question, I ran a simple Java program to parse the date “1989Dec31”, and it works only if I specify Locale.US in the SimpleDateFormat constructor.
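For reference, the same check can be driven from pyspark through the py4j gateway. This is just a sketch: it assumes a running SparkSession named “spark”, and spark._jvm is an internal handle, not public API:

    # Reproduce the Java locale test from pyspark via the py4j gateway.
    # Assumes a running SparkSession named `spark`; spark._jvm is internal API.
    jvm = spark._jvm

    # Default-locale formatter: on a non-English system locale this
    # raises a ParseException (surfaced as a Py4JJavaError) on "1989Dec31".
    fmt_default = jvm.java.text.SimpleDateFormat("yyyyMMMdd")

    # With an explicit Locale.US the same string parses fine.
    fmt_us = jvm.java.text.SimpleDateFormat("yyyyMMMdd", jvm.java.util.Locale.US)
    print(fmt_us.parse("1989Dec31"))  # a java.util.Date for Dec 31, 1989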

I would like this to work from pyspark. I tried setting a different locale from the console (LANG=“en_US”) before launching pyspark, but it didn’t work. I also tried setting it with Python’s locale module.
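Concretely, this is the kind of thing I tried from Python. My understanding is that it only changes the locale of the Python process, while SimpleDateFormat runs inside the JVM, which keeps its own default Locale:

    # Setting the locale from Python affects only the Python process,
    # not the JVM where spark.read.load() actually parses the dates.
    import locale
    locale.setlocale(locale.LC_ALL, "en_US.UTF-8")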

So, is there a way to set the locale in Spark when using pyspark? The issue is Java-related, not Python-related: the function that parses the dates is invoked by spark.read.load(dateFormat=“yyyyMMMdd”, …). I’d rather not use other solutions to convert the data because, from what I’ve seen so far, they are slower.
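For completeness, this is the shape of the call I’m using (the file path and column names below are made up, just to show where dateFormat goes; it applies to the columns declared as DateType in the schema):

    from pyspark.sql.types import StructType, StructField, StringType, DateType

    # Hypothetical file and columns, for illustration only.
    schema = StructType([
        StructField("id", StringType()),
        StructField("birth_date", DateType()),  # stored as e.g. "1989Dec31"
    ])

    df = spark.read.load(
        "people.csv",
        format="csv",
        header="true",
        schema=schema,
        dateFormat="yyyyMMMdd",
    )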

Thank you
 Pietro
---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org