Posted to server-dev@james.apache.org by Merve Temizer <me...@gmail.com> on 2012/11/09 14:59:17 UTC

Building trunk

Hello,

I had problems with HBase when building the 3.0 beta tag,

then I checked out trunk from

http://svn.apache.org/repos/asf/james/server/trunk
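
That is just a plain svn checkout, roughly like this (the local directory
name is only an example):

svn checkout http://svn.apache.org/repos/asf/james/server/trunk james-server-trunk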

I am using Maven 3 and running "mvn clean compile install".

Below is the console output:

Running org.apache.james.domainlist.hbase.HBaseDomainListTest
2012-11-09 15:47:07,044 [main] WARN
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
already exists!
2012-11-09 15:47:07,201 [main] WARN
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
already exists!
Starting DataNode 0 with dfs.data.dir:
/home/merve/dev/source/james2/trunk/data-hbase/target/test-data/b84d7dd5-91ad-4082-b02f-411efb69948b/dfscluster_d7a93b05-d4b6-4248-80c6-7f509a9a3446/dfs/data/data1,/home/merve/dev/source/james2/trunk/data-hbase/target/test-data/b84d7dd5-91ad-4082-b02f-411efb69948b/dfscluster_d7a93b05-d4b6-4248-80c6-7f509a9a3446/dfs/data/data2
2012-11-09 15:47:07,877 [main] WARN
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl - NameNode metrics
system already initialized!
2012-11-09 15:47:07,877 [main] WARN
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
already exists!
2012-11-09 15:47:08,233 [main] WARN
 org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name jvm
already exists!
2012-11-09 15:47:09,454 [IPC Server handler 2 on 28077] WARN
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem - Not able to place
enough replicas, still in need of 1
2012-11-09 15:47:09,455 [IPC Server handler 2 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root cause:java.io.IOException: File
/user/root/hbase.version could only be replicated to 0 nodes, instead of 1
2012-11-09 15:47:09,456 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
- DataStreamer Exception: org.apache.hadoop.ipc.RemoteException:
java.io.IOException: File /user/root/hbase.version could only be replicated
to 0 nodes, instead of 1
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

at org.apache.hadoop.ipc.Client.call(Client.java:1066)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy8.addBlock(Unknown Source)
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

2012-11-09 15:47:09,460 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
- Error Recovery for block null bad datanode[0] nodes == null
2012-11-09 15:47:09,460 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
- Could not get block locations. Source file "/user/root/hbase.version" -
Aborting...
2012-11-09 15:47:09,462 [IPC Server handler 3 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:47:09,463 [IPC Server handler 3 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:48:09,469 [IPC Server handler 6 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:48:09,469 [IPC Server handler 6 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:49:09,477 [IPC Server handler 8 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:49:09,477 [IPC Server handler 8 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:50:09,486 [IPC Server handler 1 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:50:09,487 [IPC Server handler 1 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:51:09,495 [IPC Server handler 4 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:51:09,496 [IPC Server handler 4 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:52:09,515 [IPC Server handler 7 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:52:09,515 [IPC Server handler 7 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:52:09,521 [IPC Server handler 8 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:52:09,522 [IPC Server handler 8 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:53:09,534 [IPC Server handler 1 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:53:09,534 [IPC Server handler 1 on 28077] ERROR
org.apache.hadoop.security.UserGroupInformation -
PriviledgedActionException as:root
cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
to create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.
2012-11-09 15:54:09,547 [IPC Server handler 4 on 28077] WARN
 org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
create file /user/root/hbase.version for DFSClient_1286943058 on client
127.0.0.1 because current leaseholder is trying to recreate file.

Re: Building trunk

Posted by Merve Temizer <me...@gmail.com>.
Thanks, I built without the tests.

2012/11/11 Eric Charles <er...@apache.org>

> Well, this is probably not the cause, but as the test launches a
> minicluster, if it is not shut down correctly the next step may fail. In
> the log you pasted, nothing indicates that the cause of the failure is
> such a remaining process, but did you paste the complete log?
>
> If you want to build without the tests, add -DskipTests
>
> Thx, Eric

Re: Building trunk

Posted by Eric Charles <er...@apache.org>.
Well, this is probably not the cause, but as the test launches a
minicluster, if it is not shut down correctly the next step may fail. In
the log you pasted, nothing indicates that the cause of the failure is
such a remaining process, but did you paste the complete log?

If you want to build without the tests, add -DskipTests.
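
For example (this keeps the full build, it only tells Surefire to skip
running the tests):

mvn clean install -DskipTests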

Thx, Eric


On 11/11/2012 17:41, Merve Temizer wrote:
> How can I be sure the previous failures are shut down and have no remaining effects?
>
> What other application might have started HBase?


Re: Building trunk

Posted by Merve Temizer <me...@gmail.com>.
How can I be sure the previous failures are shut down and have no remaining effects?

What other application might have started HBase?


2012/11/9 Eric Charles <er...@apache.org>

> It builds/tests fine on the Apache CI [1] and on my laptop (at least last
> week; right now I have an unstable local trunk, so I cannot confirm).
>
> Can you double-check that you don't have any hadoop/hbase/zookeeper process
> running (whether because you launched them yourself or because of a previous
> test failure) and try again (numerous times if needed)?
>
> Thx, Eric
>
> [1] https://builds.apache.org/view/G-L/view/James/job/mailbox/1109/org.apache.james$apache-james-mailbox-hbase/

Re: Building trunk

Posted by Eric Charles <er...@apache.org>.
It builds/tests fine on the Apache CI [1] and on my laptop (at least
last week; right now I have an unstable local trunk, so I cannot confirm).

Can you double-check that you don't have any hadoop/hbase/zookeeper
process running (whether because you launched them yourself or because of
a previous test failure) and try again (numerous times if needed)?
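
A quick way to check is jps, which ships with the JDK and lists the running
Java processes; the class names below are just the usual suspects to look
for, not an exhaustive list:

jps -l

Any leftover NameNode, DataNode, HMaster or QuorumPeerMain from a previous
run should be killed before retrying.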

Thx, Eric

[1] 
https://builds.apache.org/view/G-L/view/James/job/mailbox/1109/org.apache.james$apache-james-mailbox-hbase/



On 09/11/2012 16:10, Merve Temizer wrote:
> OS: Ubuntu 12.04 LTS
>
> I use a JDK by specifying it on the command line:
>
> sudo JAVA_HOME /home/merve/dev/jdk/jdk1.7.0_03 mvn clean compile install
>
> "mvn --v" outputs
>
> Apache Maven 3.0.4 (r1232337; 2012-01-17 10:44:56+0200)
> Maven home: /usr/local/apache-maven-3.0.4
> Java version: 1.7.0_03, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/jdk1.7.0_03/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.0.0-16-generic", arch: "i386", family: "unix"
>
> thanks very much for your time.
>
> 2012/11/9 Eric Charles <er...@apache.org>
>
>> Hi Merve,
>> Can you send env details: OS and JDK version?
>>
>> Thx, Eric
>>
>>
>> On 09/11/2012 13:59, Merve Temizer wrote:
>>
>>> [original report and console output snipped; it is quoted in full at
>>> the top of this thread]
>>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
>> For additional commands, e-mail: server-dev-help@james.apache.org
>>
>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org


Re: Building trunk

Posted by Merve Temizer <me...@gmail.com>.
OS: Ubuntu 12.04 LTS

I point the build at a specific JDK on the command line:

sudo JAVA_HOME /home/merve/dev/jdk/jdk1.7.0_03 mvn clean compile install
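
(As an aside, the conventional forms for selecting a JDK for a Maven
build look like the sketch below; note the '=' in the VAR=value form,
that plain sudo does not pass the caller's environment through by
default, and that 'compile' is already implied by 'install'.)

# Export JAVA_HOME for the current shell, then build:
export JAVA_HOME=/home/merve/dev/jdk/jdk1.7.0_03
mvn clean install

# Or set it inline for a single invocation (note the '='):
JAVA_HOME=/home/merve/dev/jdk/jdk1.7.0_03 mvn clean install

# If root really is required, pass the variable through explicitly:
sudo env JAVA_HOME=/home/merve/dev/jdk/jdk1.7.0_03 mvn clean install

# "mvn -v" prints the Java home Maven actually resolved, which confirms
# whether the override took effect.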

"mvn --v" outputs

Apache Maven 3.0.4 (r1232337; 2012-01-17 10:44:56+0200)
Maven home: /usr/local/apache-maven-3.0.4
Java version: 1.7.0_03, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_03/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.0.0-16-generic", arch: "i386", family: "unix"

Thanks very much for your time.

2012/11/9 Eric Charles <er...@apache.org>

> Hi Merve,
> Can you send env details: os and jdk version?
>
> Thx, Eric
>
>
> On 09/11/2012 13:59, Merve Temizer wrote:
>
>> [original report and console output snipped; it is quoted in full at
>> the top of this thread]
>>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
> For additional commands, e-mail: server-dev-help@james.apache.org
>
>

Re: Building trunk

Posted by Eric Charles <er...@apache.org>.
Hi Merve,
Can you send env details: os and jdk version?
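
The usual commands for gathering those details (a quick sketch using
standard tools):

# Distribution and kernel:
lsb_release -d
uname -mr

# JDK on the PATH:
java -version

# Maven version plus the JDK Maven resolves:
mvn -v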

Thx, Eric

On 09/11/2012 13:59, Merve Temizer wrote:
> Hello,
>
> I had problems on hbase by building tag 3.0 beta,
>
> then i checked out trunk from
>
> http://svn.apache.org/repos/asf/james/server/trunk
>
> I am using maven 3 to "mvn clean compile install"
>
> Below is the console output:
>
> Running org.apache.james.domainlist.hbase.HBaseDomainListTest
> 2012-11-09 15:47:07,044 [main] WARN
>   org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
> already exists!
> 2012-11-09 15:47:07,201 [main] WARN
>   org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
> already exists!
> Starting DataNode 0 with dfs.data.dir:
> /home/merve/dev/source/james2/trunk/data-hbase/target/test-data/b84d7dd5-91ad-4082-b02f-411efb69948b/dfscluster_d7a93b05-d4b6-4248-80c6-7f509a9a3446/dfs/data/data1,/home/merve/dev/source/james2/trunk/data-hbase/target/test-data/b84d7dd5-91ad-4082-b02f-411efb69948b/dfscluster_d7a93b05-d4b6-4248-80c6-7f509a9a3446/dfs/data/data2
> 2012-11-09 15:47:07,877 [main] WARN
>   org.apache.hadoop.metrics2.impl.MetricsSystemImpl - NameNode metrics
> system already initialized!
> 2012-11-09 15:47:07,877 [main] WARN
>   org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name ugi
> already exists!
> 2012-11-09 15:47:08,233 [main] WARN
>   org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Source name jvm
> already exists!
> 2012-11-09 15:47:09,454 [IPC Server handler 2 on 28077] WARN
>   org.apache.hadoop.hdfs.server.namenode.FSNamesystem - Not able to place
> enough replicas, still in need of 1
> 2012-11-09 15:47:09,455 [IPC Server handler 2 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root cause:java.io.IOException: File
> /user/root/hbase.version could only be replicated to 0 nodes, instead of 1
> 2012-11-09 15:47:09,456 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
> - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File /user/root/hbase.version could only be replicated
> to 0 nodes, instead of 1
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1066)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy8.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at $Proxy8.addBlock(Unknown Source)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
>
> 2012-11-09 15:47:09,460 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
> - Error Recovery for block null bad datanode[0] nodes == null
> 2012-11-09 15:47:09,460 [Thread-44] WARN  org.apache.hadoop.hdfs.DFSClient
> - Could not get block locations. Source file "/user/root/hbase.version" -
> Aborting...
> 2012-11-09 15:47:09,462 [IPC Server handler 3 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:47:09,463 [IPC Server handler 3 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:48:09,469 [IPC Server handler 6 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:48:09,469 [IPC Server handler 6 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:49:09,477 [IPC Server handler 8 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:49:09,477 [IPC Server handler 8 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:50:09,486 [IPC Server handler 1 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:50:09,487 [IPC Server handler 1 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:51:09,495 [IPC Server handler 4 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:51:09,496 [IPC Server handler 4 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:52:09,515 [IPC Server handler 7 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:52:09,515 [IPC Server handler 7 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:52:09,521 [IPC Server handler 8 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:52:09,522 [IPC Server handler 8 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:53:09,534 [IPC Server handler 1 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:53:09,534 [IPC Server handler 1 on 28077] ERROR
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root
> cause:org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed
> to create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
> 2012-11-09 15:54:09,547 [IPC Server handler 4 on 28077] WARN
>   org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.startFile: failed to
> create file /user/root/hbase.version for DFSClient_1286943058 on client
> 127.0.0.1 because current leaseholder is trying to recreate file.
>

---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org