Posted to common-user@hadoop.apache.org by elton sky <el...@gmail.com> on 2010/06/26 06:27:58 UTC

Problem with calling FSDataOutputStream.sync() ~

Hello,

I am trying a simple code snippet that creates a new file. After creating
and writing to the file, I want to use "sync()" to synchronize all replicas.
However, I get a "LeaseExpiredException" from FSNamesystem.checkLease():
my code:
.
.
InputStream in = null;
OutputStream out = null;
try {
    in = new BufferedInputStream(new FileInputStream(src));

    FileSystem fs = FileSystem.get(URI.create(dest), conf);

    System.out.println(fs.getClass().getName());

    out = fs.create(new Path(dest), true);
    assert(fs.exists(new Path(dest)) == true);

    IOUtils.copyBytes(in, out, conf, true);

    ((FSDataOutputStream) out).flush();

    ((FSDataOutputStream) out).sync(); // Got Exception here

    System.out.println(dest + " is created and synced successfully.");

    printFileInfo(new Path(dest));

} catch (IOException e) {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    throw e;
} finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
}
.
.

Exception in thread "main" org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
/user/elton/test/file2 File is not open for writing. Holder
DFSClient_-925213311 does not have any open files.

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1367)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1334)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:1857)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.fsync(NameNode.java:679)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.fsync(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.fsync(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3141)
        at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
.
.

I figure the reason is that checkLease() got an INodeFile object rather
than an INodeFileUnderConstruction:
.
.
// make sure that we still have the lease on this file.
  private INodeFileUnderConstruction checkLease(String src, String holder)
                                                      throws IOException {

    INodeFile file = dir.getFileINode(src);

    checkLease(src, holder, file);

    return (INodeFileUnderConstruction)file;
  }
.
.
But how can this happen? Any idea?

Elton
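
A side note on the snippet above, since it matters for the trace: in 0.20,
IOUtils.copyBytes(in, out, conf, true) closes both streams when its last
boolean is true, so the flush()/sync() that follow operate on an
already-closed file; that matches the "does not have any open files"
message. Below is a minimal sketch of the same copy that keeps the stream
open across the sync. It assumes the fs, conf, src and dest variables from
the snippet, and is illustrative rather than a verified fix:

// Sketch only: pass close=false so copyBytes leaves both streams open,
// sync while the client still holds the lease, then close in finally.
InputStream in = new BufferedInputStream(new FileInputStream(src));
FSDataOutputStream out = fs.create(new Path(dest), true);
try {
    IOUtils.copyBytes(in, out, conf, false); // false = do not close streams
    out.sync();                              // file still open; lease held
} finally {
    IOUtils.closeStream(out);                // close() finalizes the file
    IOUtils.closeStream(in);
}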

Re: Problem with calling FSDataOutputStream.sync() ~

Posted by elton sky <el...@gmail.com>.
Hey Ted,

> The line numbers don't match those from Hadoop 0.20.2.
> What version are you using?

I am using 0.20.2. I added some extra LOG/print lines, which makes the line
numbers differ from the stock source~ But my modifications are just print
statements for debugging this problem.

> I don't see how conf is constructed, please show us what this line showed
> you:
> System.out.println(fs.getClass().getName());

I got "DistributedFileSystem". I printed this out to ensure I got hdfs
rather than local. And my conf is simply:
Configuration conf = new Configuration();

I can see the problem comes from:

 private INodeFileUnderConstruction checkLease(String src, String holder)
                                                      throws IOException {
    INodeFile file = dir.getFileINode(src); // *This line*
    checkLease(src, holder, file);
    return (INodeFileUnderConstruction)file;
  }

dir.getFileINode(src) should return an INodeFileUnderConstruction rather
than an INodeFile; otherwise I'll get an exception in checkLease(src,
holder, file).

But how can this happen?
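
One plausible mechanism, sketched below: the namenode keeps an
INodeFileUnderConstruction only while the file is open for writing; once
the stream is closed, the file is finalized and the inode becomes a plain
INodeFile, so a later fsync from the same client fails exactly like this.
A hypothetical minimal repro (the path is made up; fs is a
DistributedFileSystem handle as above):

FSDataOutputStream out = fs.create(new Path("/user/elton/test/repro"), true);
out.write(new byte[]{1, 2, 3});
out.close(); // file finalized: INodeFileUnderConstruction becomes INodeFile
out.sync();  // LeaseExpiredException: holder has no open files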

Re: Problem with calling FSDataOutputStream.sync() ~

Posted by Ted Yu <yu...@gmail.com>.
I don't see how conf is constructed; please show us what this line showed you:
System.out.println(fs.getClass().getName());



On Sat, Jun 26, 2010 at 8:59 AM, Ted Yu <yu...@gmail.com> wrote:

> [full quote of the original message and the earlier reply snipped]

Re: Problem with calling FSDataOutputStream.sync() ~

Posted by Ted Yu <yu...@gmail.com>.
The line numbers don't match those from Hadoop 0.20.2.
What version are you using?

This is from Syncable interface:
  /**
   * Synchronize all buffer with the underlying devices.
   * @throws IOException
   */

If you look at src/core/org/apache/hadoop/fs/RawLocalFileSystem.java where
LocalFSFileOutputStream implements Syncable:
    public void sync() throws IOException {
      fos.getFD().sync();
    }
you would see that sync() is a file-level operation.
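
On HDFS the analogous call only has meaning while the stream is open. A
minimal sketch of the intended call order, assuming the 0.20
FSDataOutputStream API and an fs handle to HDFS (data is an arbitrary
byte array):

byte[] data = "some bytes".getBytes();
FSDataOutputStream out = fs.create(new Path("/user/elton/test/file2"), true);
out.write(data);
out.sync();  // flush buffered bytes to the datanode pipeline; lease still held
out.close(); // finalize the file; further sync() calls would fail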

On Fri, Jun 25, 2010 at 9:27 PM, elton sky <el...@gmail.com> wrote:

> [full quote of the original message snipped]