Posted to dev@hive.apache.org by "David Mollitor (Jira)" <ji...@apache.org> on 2021/01/27 18:27:00 UTC
[jira] [Created] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow
David Mollitor created HIVE-24693:
-------------------------------------
Summary: Parquet Timestamp Values Read/Write Very Slow
Key: HIVE-24693
URL: https://issues.apache.org/jira/browse/HIVE-24693
Project: Hive
Issue Type: Improvement
Reporter: David Mollitor
Assignee: David Mollitor
Parquet {{DataWritableWriter}} relies on {{NanoTimeUtils}} to convert a timestamp object into a binary value. The way it does this: it calls {{toString()}} on the timestamp object and then parses the resulting String. These timestamps do not carry a time zone, so the string looks like:
{{2021-21-03 12:32:23.0000...}}
The parse code tries to parse the string assuming there is a time zone and, if there is not, falls back and applies the provided "default time zone". As was noted in [HIVE-24353], if a parse fails, it is very expensive to try to parse again. So, for each timestamp in the Parquet file, it:
* Builds a string from the timestamp
* Parses it (the first parse throws an exception, then it parses again)
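The parse-then-fallback shape described above can be sketched with plain {{java.time}} (a standalone illustration of the pattern, not Hive's actual parser; the class name and formatter patterns below are assumptions for the example). Every zone-less input pays for a thrown-and-caught exception before the fallback succeeds:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class FallbackParseDemo {
    // Formatter that expects an explicit zone, e.g. "2021-01-27 12:32:23 UTC"
    private static final DateTimeFormatter WITH_ZONE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss zzz");
    // Fallback formatter for zone-less values
    private static final DateTimeFormatter NO_ZONE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Mirrors the fallback pattern: the zoned parse throws for every
    // zone-less value, so the exception path is taken on each timestamp.
    static ZonedDateTime parse(String s, ZoneId defaultZone) {
        try {
            return ZonedDateTime.parse(s, WITH_ZONE);
        } catch (DateTimeParseException e) {
            return LocalDateTime.parse(s, NO_ZONE).atZone(defaultZone);
        }
    }

    public static void main(String[] args) {
        ZonedDateTime z = parse("2021-01-27 12:32:23", ZoneId.of("UTC"));
        System.out.println(z.toEpochSecond());
    }
}
```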
There is no need for this kind of string manipulation/parsing; the conversion should just use the epoch seconds/nanos stored internally in the Timestamp object.
{code:java}
// Converts Timestamp to TimestampTZ.
public static TimestampTZ convert(Timestamp ts, ZoneId defaultTimeZone) {
  return parse(ts.toString(), defaultTimeZone);
}
{code}
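A conversion that skips strings entirely might look like the sketch below. {{Ts}} here is a hypothetical stand-in for Hive's Timestamp (assumed to carry epoch seconds plus nanos), not the real class; the point is that {{Instant.ofEpochSecond}} plus {{atZone}} gets to a zoned value with no formatting, no parsing, and no thrown exceptions:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DirectConvertDemo {
    // Hypothetical stand-in for Hive's Timestamp: epoch seconds plus
    // nanos, no time zone (an assumption for this sketch; the real class
    // lives in org.apache.hadoop.hive.common.type).
    record Ts(long epochSecond, int nanos) {}

    // Direct conversion: no toString(), no parsing, no exception fallback.
    static ZonedDateTime convert(Ts ts, ZoneId defaultTimeZone) {
        return Instant.ofEpochSecond(ts.epochSecond(), ts.nanos())
                      .atZone(defaultTimeZone);
    }

    public static void main(String[] args) {
        ZonedDateTime z = convert(new Ts(1611750743L, 0), ZoneId.of("UTC"));
        System.out.println(z.toLocalDateTime());
    }
}
```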
--
This message was sent by Atlassian Jira
(v8.3.4#803005)