Posted to common-user@hadoop.apache.org by Sriram Rao <sr...@gmail.com> on 2008/02/26 02:44:26 UTC
exception in hdfs_write (libhdfs)
Hi,
I am trying to run some of the test binaries that come with libhdfs, in
particular hdfs_write. My setup:
- I am using hadoop-0.15.3
- I started namenode/datanode on the localhost
- I compiled libhdfs for an x86_64 target
I invoke hdfs_write as:
hdfs_write foo.1 128000000 65536
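For context, the arguments are <path> <total bytes> <buffer size>. As I understand it, hdfs_write boils down to roughly the following libhdfs calls (a simplified sketch under that assumption, not the actual source; error handling trimmed):

```c
#include "hdfs.h"    /* libhdfs header from the Hadoop tree */
#include <fcntl.h>   /* O_WRONLY */
#include <stdio.h>
#include <stdlib.h>

/* Sketch of: hdfs_write <path> <total-bytes> <buffer-size> */
int main(int argc, char **argv)
{
    const char *path = argv[1];
    long long total = atoll(argv[2]);   /* 128000000 in my run */
    int bufSize = atoi(argv[3]);        /* 65536 in my run */

    /* "default", 0 connects using the configured fs.default.name */
    hdfsFS fs = hdfsConnect("default", 0);
    hdfsFile file = hdfsOpenFile(fs, path, O_WRONLY, bufSize, 0, 0);

    char *buffer = malloc(bufSize);
    long long written = 0;
    while (written < total) {
        int toWrite = bufSize;
        if (total - written < bufSize)
            toWrite = (int)(total - written);
        /* the writeChunk() mismatch exception fires from inside this call */
        if (hdfsWrite(fs, file, buffer, toWrite) < 0) {
            fprintf(stderr, "write failed\n");
            break;
        }
        written += toWrite;
    }

    hdfsCloseFile(fs, file);
    hdfsDisconnect(fs);
    free(buffer);
    return 0;
}
```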
When I run the hdfs_write test program, it fails with the following
exception:
java.io.IOException: Mismatch in writeChunk() args
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1588)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:100)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:39)
at java.io.DataOutputStream.write(libgcj.so.8rh)
at java.io.FilterOutputStream.write(libgcj.so.8rh)
Call to org.apache.hadoop.fs.FSDataOutputStream::write failed!
When I do "bin/hadoop dfs -ls", it shows the file foo.1 as having been created.
What gives?
Sriram