Posted to hdfs-dev@hadoop.apache.org by "Íñigo Goiri (Jira)" <ji...@apache.org> on 2020/03/24 21:51:00 UTC
[jira] [Resolved] (HDFS-15215) The Timestamp for longest write/read lock held log is wrong
[ https://issues.apache.org/jira/browse/HDFS-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Íñigo Goiri resolved HDFS-15215.
--------------------------------
Fix Version/s: 3.3.0
Hadoop Flags: Reviewed
Resolution: Fixed
> The Timestamp for longest write/read lock held log is wrong
> -----------------------------------------------------------
>
> Key: HDFS-15215
> URL: https://issues.apache.org/jira/browse/HDFS-15215
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Toshihiro Suzuki
> Assignee: Toshihiro Suzuki
> Priority: Major
> Fix For: 3.3.0
>
>
> I found that the timestamp in the longest write/read lock held log is wrong in trunk:
> {code}
> 2020-03-10 16:01:26,585 [main] INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(281)) - Number of suppressed write-lock reports: 0
> Longest write-lock held at 1970-01-03 07:07:40,841+0900 for 3ms via java.lang.Thread.getStackTrace(Thread.java:1559)
> ...
> {code}
> Looking at the code, the timestamp comes from System.nanoTime(), which returns the current value of the running Java Virtual Machine's high-resolution time source. That value can only be used to measure elapsed time; it is not related to any notion of wall-clock time, so formatting it as a date produces a bogus timestamp (e.g. one near the 1970 epoch, as above):
> https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime--
> We should derive the timestamp from System.currentTimeMillis() instead.
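A minimal sketch of the distinction the issue describes (the class and log format below are illustrative, not the actual FSNamesystemLock code): System.currentTimeMillis() yields epoch milliseconds suitable for formatting as a date, while System.nanoTime() has an arbitrary origin and is only valid for computing elapsed durations.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class LockTimestampSketch {
    public static void main(String[] args) throws InterruptedException {
        // Wall-clock time for the log timestamp: milliseconds since the Unix epoch.
        long startTimeMs = System.currentTimeMillis();
        // Monotonic clock for measuring elapsed time; its origin is arbitrary
        // (often JVM start), so it must never be formatted as a date.
        long startNanos = System.nanoTime();

        Thread.sleep(3); // simulate holding the lock for a few milliseconds

        long heldMs = (System.nanoTime() - startNanos) / 1_000_000;
        // Format only the wall-clock value as a date, in the log's style.
        String when = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSSZ")
                .format(new Date(startTimeMs));
        System.out.println("Longest write-lock held at " + when
                + " for " + heldMs + "ms");
    }
}
```

Formatting `new Date(System.nanoTime() / 1_000_000)` instead would produce a date a few days after 1970-01-01, which matches the bogus "1970-01-03" timestamp shown in the log above.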
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org