Posted to hdfs-dev@hadoop.apache.org by "Jinglun (JIRA)" <ji...@apache.org> on 2019/05/27 09:58:00 UTC

[jira] [Created] (HDFS-14515) The proto type of quota should change to int64.

Jinglun created HDFS-14515:
------------------------------

             Summary: The proto type of quota should change to int64.
                 Key: HDFS-14515
                 URL: https://issues.apache.org/jira/browse/HDFS-14515
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Jinglun
            Assignee: Jinglun
         Attachments: INode.proto, Main.java, NINode.proto

In fsimage.proto, the proto type of quota should be int64 rather than uint64. In protobuf, uint64 represents a 64-bit unsigned integer. Since a quota in the image can be -1, using uint64 is inappropriate (see https://developers.google.com/protocol-buffers/docs/proto#scalar).
HDFS currently uses uint64 for quota and works fine because the Java type corresponding to uint64 is long, the same as for int64. But in C++ and Go, uint64 and int64 map to different types, so it would be a problem when loading an image with C++ and fsimage.proto.
The good news is we can simply change uint64 to int64 without breaking any existing clusters. The two types, int64 and uint64, are serialized to and deserialized from a Java long in the same way (both use varint encoding on the wire), which means a long serialized as uint64 can be treated as int64 and deserialized back to the same long value:
1) long a -> uint64 serialized -> byte[] b -> int64 deserialized -> long c;
2) a == c;
I did a test to show 1) and 2). INode.proto uses uint64 and NINode.proto uses int64. Main.java serializes a long as uint64 to a byte array and deserializes the array as int64 back to a long. The proto files were compiled with protobuf 2.5.
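For reference, below is a minimal sketch of the same round trip (it is not the attached Main.java; the class name and buffer size are illustrative). It uses protobuf's CodedOutputStream/CodedInputStream directly, so no classes generated from INode.proto or NINode.proto are needed:

    import com.google.protobuf.CodedInputStream;
    import com.google.protobuf.CodedOutputStream;

    public class QuotaRoundTrip {
        public static void main(String[] args) throws Exception {
            long a = -1L;  // e.g. an unset quota stored in the image

            // 1) Serialize the long as a uint64 varint.
            byte[] b = new byte[10];  // a 64-bit varint needs at most 10 bytes
            CodedOutputStream out = CodedOutputStream.newInstance(b);
            out.writeUInt64NoTag(a);
            out.flush();

            // ...then deserialize the same bytes as an int64 varint.
            CodedInputStream in = CodedInputStream.newInstance(b);
            long c = in.readInt64();

            // 2) The value survives the round trip unchanged.
            System.out.println(a == c);  // prints: true
        }
    }

Since both field types share the varint wire format, the bytes written for the uint64 read back as the identical long when interpreted as int64.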



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org