Posted to user@hadoop.apache.org by 尉雁磊 <tr...@163.com> on 2023/02/27 03:27:31 UTC

HDFS 3.3.4 downgrade to HDFS 2.7.2, NameNode error: ArrayIndexOutOfBoundsException: 536870913

While testing a rolling upgrade from HDFS 2.7.2 to HDFS 3.3.4, I got an error when I downgraded the NameNode:
java.lang.ArrayIndexOutOfBoundsException: 536870913
     at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadStringTableSection(FSImageFormatProtobuf.java:318)
     .....

The method that throws is:

private void loadStringTableSection(InputStream in) throws IOException {
  StringTableSection s = StringTableSection.parseDelimitedFrom(in);
  ctx.stringTable = new String[s.getNumEntry() + 1];
  for (int i = 0; i < s.getNumEntry(); ++i) {
    StringTableSection.Entry e = StringTableSection.Entry.parseDelimitedFrom(in);
    ctx.stringTable[e.getId()] = e.getStr();
  }
}
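For context, here is a minimal standalone reproduction of the failure mode (my own illustration, not Hadoop code): the table is sized from getNumEntry(), but it is indexed by getId(), so any id larger than numEntry overflows the array.

```java
// Minimal sketch of the failure: an array sized numEntry + 1,
// indexed with an id taken from the fsimage entry.
public class Repro {
    public static void main(String[] args) {
        int numEntry = 12;                              // value from the log below
        String[] stringTable = new String[numEntry + 1]; // only 13 slots
        int id = 536870913;                              // id from the first entry
        try {
            stringTable[id] = "work";                    // id >> table length
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ArrayIndexOutOfBoundsException: " + e.getMessage());
        }
    }
}
```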




Later I added some logging to the method:

private void loadStringTableSection(InputStream in) throws IOException {
  StringTableSection s = StringTableSection.parseDelimitedFrom(in);
  LOG.info("s.getNumEntry:" + s.getNumEntry());
  ctx.stringTable = new String[s.getNumEntry() + 1];
  for (int i = 0; i < s.getNumEntry(); ++i) {
    StringTableSection.Entry e = StringTableSection.Entry.parseDelimitedFrom(in);
    LOG.info("e.getId():" + e.getId() + ",e.getStr():" + e.getStr());
    ctx.stringTable[e.getId()] = e.getStr();
  }
}




The log prints the following:

INFO namenode.FSImageFormatProtobuf: s.getNumEntry:12
INFO namenode.FSImageFormatProtobuf: e.getId():536870913,e.getStr():work
INFO namenode.FSImageFormatProtobuf: e.getId():1073741825,e.getStr():supergroup
INFO namenode.FSImageFormatProtobuf: e.getId():1610612737,e.getStr():hsm.block.storage.policy.id
INFO namenode.FSImageFormatProtobuf: e.getId():536870914,e.getStr():yyl
INFO namenode.FSImageFormatProtobuf: e.getId():1073741826,e.getStr():yyl
INFO namenode.FSImageFormatProtobuf: e.getId():536870915,e.getStr():yarn
INFO namenode.FSImageFormatProtobuf: e.getId():1073741827,e.getStr():
INFO namenode.FSImageFormatProtobuf: e.getId():536870916,e.getStr():hive
INFO namenode.FSImageFormatProtobuf: e.getId():536870917,e.getStr():flume
INFO namenode.FSImageFormatProtobuf: e.getId():536870918,e.getStr():hbase
INFO namenode.FSImageFormatProtobuf: e.getId():536870919,e.getStr():anonymous
INFO namenode.FSImageFormatProtobuf: e.getId():536870920,e.getStr():



I wonder why e.getId() is so large when s.getNumEntry() is only 12. Is this a bug? I saw a similar problem described in a comment on HDFS-13596, which merged a commit (8a41edb089fbdedc5e7d9a2aeec63d126afea49f) that changed the stringTable's type to a map.
Is the code in 2.7.2 broken?
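Looking at the logged ids, they do not appear to be random: they look like small sequential indices with a tag in the high bits, which would be consistent with the per-category id layout that HDFS-13596 introduced for the string table. A quick sketch of that observation (my own illustration, not Hadoop code; the 29-bit split is an assumption inferred from the logged values):

```java
// Decode the ids from the log above: high bits look like an entry
// category tag, low bits look like a small sequential index.
public class IdDecode {
    public static void main(String[] args) {
        long[] ids = {536870913L, 1073741825L, 1610612737L, 536870920L};
        for (long id : ids) {
            long tag = id >>> 29;                 // assumed category bits
            long index = id & ((1L << 29) - 1);   // assumed low-bit index
            System.out.println(id + " -> tag=" + tag + ", index=" + index);
        }
    }
}
```

For example, 536870913 is (1 << 29) + 1 and 1073741825 is (2 << 29) + 1, so the old 2.7.2 loader, which uses the id directly as an array index, cannot cope with an fsimage written by a version that tags ids this way.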