Posted to user@hbase.apache.org by Noah Watkins <ja...@cs.ucsc.edu> on 2012/04/04 21:40:02 UTC

Failing to diagnose errors using HBaseTestingUtility

I am working on a project that uses HBase, and I want my unit tests to use HBaseTestingUtility so I can test without a separate cluster, but I am running into a lot of problems with it. Everything compiles fine, but any unit test that tries to spin up a mini-cluster blows up:

This is the basic test I am trying to run:

import static org.junit.Assert.assertTrue;

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class PersiaAdminTest {
  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Spin up an in-process mini-cluster with one region server.
    TEST_UTIL.startMiniCluster(1);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void testPersiaAdmin() throws IOException {
    assertTrue(true);
  }
}

My pom.xml has the following dependencies:

  hadoop-core-1.0.0
  hadoop-test-1.0.0
  hbase-0.92.1
  hbase-0.92.1 (test-jar)
  junit-4.10
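
For reference, this is roughly how I have those dependencies declared in the pom (a sketch from memory, so the scopes and the exact layout may differ from my actual file; the test-jar entry uses Maven's `test-jar` type):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-test</artifactId>
    <version>1.0.0</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.92.1</version>
  </dependency>
  <dependency>
    <!-- Pulls in HBaseTestingUtility and the other test helpers. -->
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.92.1</version>
    <type>test-jar</type>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.10</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```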

And this is the error message I am getting (note that when I download the 0.92.1 release tarball, the unit tests that use HBaseTestingUtility don't have this issue):

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.154 sec <<< FAILURE!
edu.ucsc.srl.persia.PersiaAdminTest  Time elapsed: 6.155 sec  <<< ERROR!
java.io.IOException: Shutting down
	at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:203)
	at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:523)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:503)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:451)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:438)
	at edu.ucsc.srl.persia.PersiaAdminTest.setUpBeforeClass(PersiaAdminTest.java:22)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:157)
	at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:198)
	at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:148)
	at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:185)
	... 28 more
Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.type.JavaType.isFullyTyped()Z
	at org.codehaus.jackson.map.type.ArrayType.<init>(ArrayType.java:36)
	at org.codehaus.jackson.map.type.ArrayType.construct(ArrayType.java:48)
	at org.codehaus.jackson.map.type.ArrayType.addCommonTypes(ArrayType.java:78)
	at org.codehaus.jackson.map.type.TypeFactory.<init>(TypeFactory.java:43)
	at org.codehaus.jackson.map.type.TypeFactory.<clinit>(TypeFactory.java:15)
	at org.codehaus.jackson.map.ObjectMapper.<clinit>(ObjectMapper.java:42)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.<clinit>(WritableRpcEngine.java:258)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:245)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:55)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:401)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:390)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:244)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:154)
	... 31 more

=====
Below is the full log. It looks like the problem is that the cluster isn't starting up, but I still have no idea what's wrong.
=====

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running edu.ucsc.srl.persia.PersiaAdminTest
12/04/04 12:22:09 INFO hbase.HBaseTestingUtility: Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
12/04/04 12:22:09 INFO util.GSet: VM type       = 64-bit
12/04/04 12:22:09 INFO util.GSet: 2% max memory = 53.51125 MB
12/04/04 12:22:09 INFO util.GSet: capacity      = 2^23 = 8388608 entries
12/04/04 12:22:09 INFO util.GSet: recommended=8388608, actual=8388608
12/04/04 12:22:09 INFO namenode.FSNamesystem: fsOwner=nwatkins
12/04/04 12:22:09 INFO namenode.FSNamesystem: supergroup=supergroup
12/04/04 12:22:09 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/04/04 12:22:09 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/04/04 12:22:09 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/04/04 12:22:09 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/04/04 12:22:09 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:10 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name1 has been successfully formatted.
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:10 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name2 has been successfully formatted.
12/04/04 12:22:10 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-namenode.properties, hadoop-metrics2.properties
12/04/04 12:22:10 INFO util.GSet: VM type       = 64-bit
12/04/04 12:22:10 INFO util.GSet: 2% max memory = 53.51125 MB
12/04/04 12:22:10 INFO util.GSet: capacity      = 2^23 = 8388608 entries
12/04/04 12:22:10 INFO util.GSet: recommended=8388608, actual=8388608
12/04/04 12:22:10 INFO namenode.FSNamesystem: fsOwner=nwatkins
12/04/04 12:22:10 INFO namenode.FSNamesystem: supergroup=supergroup
12/04/04 12:22:10 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/04/04 12:22:10 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/04/04 12:22:10 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/04/04 12:22:10 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
12/04/04 12:22:10 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/04/04 12:22:10 INFO common.Storage: Number of files = 1
12/04/04 12:22:10 INFO common.Storage: Number of files under construction = 0
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 loaded in 0 seconds.
12/04/04 12:22:10 INFO common.Storage: Edits file /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name1/current/edits of size 4 edits # 0 loaded in 0 seconds.
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0 seconds.
12/04/04 12:22:12 INFO namenode.NameCache: initialized with 0 entries 0 lookups
12/04/04 12:22:12 INFO namenode.FSNamesystem: Finished loading FSImage in 1564 msecs
12/04/04 12:22:12 INFO namenode.FSNamesystem: Total number of blocks = 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of invalid blocks = 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of  over-replicated blocks = 0
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 16 msec
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Leaving safe mode after 1 secs.
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
12/04/04 12:22:12 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/04/04 12:22:12 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/04/04 12:22:12 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/04/04 12:22:12 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/04/04 12:22:12 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/04/04 12:22:12 INFO ipc.Server: Starting SocketReader
12/04/04 12:22:12 INFO namenode.NameNode: Namenode up at: localhost/127.0.0.1:40677
12/04/04 12:22:12 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/04/04 12:22:12 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/04/04 12:22:12 INFO http.HttpServer: dfs.webhdfs.enabled = false
12/04/04 12:22:12 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/04/04 12:22:12 INFO http.HttpServer: listener.getLocalPort() returned 38803 webServer.getConnectors()[0].getLocalPort() returned 38803
12/04/04 12:22:12 INFO http.HttpServer: Jetty bound to port 38803
12/04/04 12:22:12 INFO mortbay.log: jetty-6.1.26
12/04/04 12:22:12 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/hdfs to /tmp/Jetty_localhost_38803_hdfs____n6pj6o/webapp
12/04/04 12:22:12 INFO mortbay.log: Started SelectChannelConnector@localhost:38803
12/04/04 12:22:12 INFO namenode.NameNode: Web-server up at: localhost:38803
12/04/04 12:22:12 INFO ipc.Server: IPC Server Responder: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server listener on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 0 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 1 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 2 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 3 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 4 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 5 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 6 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 8 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 9 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 7 on 40677: starting
Starting DataNode 0 with dfs.data.dir: /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2
12/04/04 12:22:12 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
12/04/04 12:22:12 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1 is not formatted.
12/04/04 12:22:12 INFO common.Storage: Formatting ...
12/04/04 12:22:13 INFO common.Storage: Storage directory /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2 is not formatted.
12/04/04 12:22:13 INFO common.Storage: Formatting ...
12/04/04 12:22:13 INFO datanode.DataNode: Registered FSDatasetStatusMBean
12/04/04 12:22:13 INFO datanode.DataNode: Opened info server at 46482
12/04/04 12:22:13 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
12/04/04 12:22:13 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/04/04 12:22:13 INFO datanode.DataNode: dfs.webhdfs.enabled = false
12/04/04 12:22:13 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/04/04 12:22:13 INFO http.HttpServer: listener.getLocalPort() returned 56944 webServer.getConnectors()[0].getLocalPort() returned 56944
12/04/04 12:22:13 INFO http.HttpServer: Jetty bound to port 56944
12/04/04 12:22:13 INFO mortbay.log: jetty-6.1.26
12/04/04 12:22:13 INFO mortbay.log: Extract jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/datanode to /tmp/Jetty_localhost_56944_datanode____ec2djz/webapp
12/04/04 12:22:13 INFO mortbay.log: Started SelectChannelConnector@localhost:56944
12/04/04 12:22:13 INFO ipc.Server: Starting SocketReader
12/04/04 12:22:13 INFO datanode.DataNode: dnRegistration = DatanodeRegistration(127.0.0.1:46482, storageID=, infoPort=56944, ipcPort=57074)
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:46482 storage DS-878374159-127.0.0.1-46482-1333567333542
12/04/04 12:22:13 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:46482
12/04/04 12:22:13 INFO datanode.DataNode: New storage id DS-878374159-127.0.0.1-46482-1333567333542 is assigned to data-node 127.0.0.1:46482
12/04/04 12:22:13 INFO datanode.DataNode: Finished generating blocks being written report for 2 volumes in 0 seconds
12/04/04 12:22:13 INFO hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 127.0.0.1:46482 0 blocks
12/04/04 12:22:13 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:46482, storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944, ipcPort=57074)In DataNode.run, data = FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2/current'}
12/04/04 12:22:13 INFO ipc.Server: IPC Server Responder: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server listener on 57074: starting
12/04/04 12:22:13 INFO datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 2 on 57074: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 0 on 57074: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 1 on 57074: starting
12/04/04 12:22:13 INFO hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:46482, blocks: 0, processing time: 2 msecs
12/04/04 12:22:13 INFO datanode.DataNode: BlockReport of 0 blocks took 4 msec to generate and 14 msecs for RPC and NN processing
12/04/04 12:22:13 INFO datanode.DataNode: Starting Periodic block scanner.
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:host.name=localhost
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.version=1.6.0_23
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.vendor=Sun Microsystems Inc.
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.class.path=/home/nwatkins/projects/ssdb-hadoop/persia/target/test-classes:/home/nwatkins/projects/ssdb-hadoop/persia/target/classes:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar:/home/nwatkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/nwatkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/nwatkins/.m2/repository/commons-httpclient/commons-httpclient/3.0.1/commons-httpclient-3.0.1.jar:/home/nwatkins/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/nwatkins/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/nwatkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/nwatkins/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/nwatkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/nwatkins/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/nwatkins/.m2/repository/tomcat/jasper-runtime/5.5.12/jasper-runtime-5.5.12.jar:/home/nwatkins/.m2/repository/tomcat/jasper-compiler/5.5.12/jasper-compiler-5.5.12.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/ant/ant/1.6.5/ant-1.6.5.jar:/home/nwatkins/.m2/repo
sitory/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/nwatkins/.m2/repository/net/java/dev/jets3t/jets3t/0.7.1/jets3t-0.7.1.jar:/home/nwatkins/.m2/repository/net/sf/kosmosfs/kfs/0.3/kfs-0.3.jar:/home/nwatkins/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/nwatkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/nwatkins/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.0.1/jackson-mapper-asl-1.0.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar:/home/nwatkins/.m2/repository/com/google/guava/guava/r09/guava-r09.jar:/home/nwatkins/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/nwatkins/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/nwatkins/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/nwatkins/.m2/repository/log4j/log4j/1.2.16/log4j-1.2.16.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro/1.5.3/avro-1.5.3.jar:/home/nwatkins/.m2/repository/org/xerial/snappy/snappy-java/1.0.3.2/snappy-java-1.0.3.2.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/nwatkins/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/nwatkins/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/nwatkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.3/zookeeper-3.4.3.jar:/home/nwatkins/.m2/repository/org/apache/thrift/libthrift/0.7.0/libthrift-0.7.0.jar:/home/nwatkins/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar:/home/nwatkins/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api-2.5
/6.1.14/servlet-api-2.5-6.1.14.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.5.5/jackson-core-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.5.5/jackson-jaxrs-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.5.5/jackson-xc-1.5.5.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-api/1.5.8/slf4j-api-1.5.8.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-log4j12/1.5.8/slf4j-log4j12-1.5.8.jar:/home/nwatkins/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/nwatkins/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-core/1.4/jersey-core-1.4.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-json/1.4/jersey-json-1.4.jar:/home/nwatkins/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/nwatkins/.m2/repository/com/sun/xml/bind/jaxb-impl/2.1.12/jaxb-impl-2.1.12.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-server/1.4/jersey-server-1.4.jar:/home/nwatkins/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/nwatkins/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/nwatkins/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/nwatkins/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/nwatkins/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/nwatkins/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1-tests.jar:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-test/1.0.0/hadoop-test-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpser
ver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/home/nwatkins/projects/openmpi_java/install/lib::/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.compiler=<NA>
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:os.name=Linux
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:os.arch=amd64
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:os.version=3.0.0-12-server
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:user.name=nwatkins
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:user.home=/home/nwatkins
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:user.dir=/home/nwatkins/projects/ssdb-hadoop/persia
12/04/04 12:22:13 INFO server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2 snapdir /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:21818
12/04/04 12:22:13 INFO zookeeper.MiniZooKeeperCluster: Failed binding ZK Server to client port: 21818
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:21819
12/04/04 12:22:13 INFO persistence.FileTxnSnapLog: Snapshotting: 0x0 to /home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2/snapshot.0
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40465
12/04/04 12:22:13 INFO server.NIOServerCnxn: Processing stat command from /127.0.0.1:40465
12/04/04 12:22:13 INFO server.NIOServerCnxn: Stat command output
12/04/04 12:22:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40465 (no session established for client)
12/04/04 12:22:13 INFO zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 21819
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/hbase.version. blk_-6717307937846793406_1001
12/04/04 12:22:13 INFO datanode.DataNode: Receiving block blk_-6717307937846793406_1001 src: /127.0.0.1:54917 dest: /127.0.0.1:46482
12/04/04 12:22:13 INFO DataNode.clienttrace: src: /127.0.0.1:54917, dest: /127.0.0.1:46482, bytes: 3, op: HDFS_WRITE, cliID: DFSClient_-1337989537, offset: 0, srvID: DS-878374159-127.0.0.1-46482-1333567333542, blockid: blk_-6717307937846793406_1001, duration: 1127639
12/04/04 12:22:13 INFO datanode.DataNode: PacketResponder 0 for block blk_-6717307937846793406_1001 terminating
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:46482 is added to blk_-6717307937846793406_1001 size 3
12/04/04 12:22:14 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/hbase.version from client DFSClient_-1337989537
12/04/04 12:22:14 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/hbase.version is closed by DFSClient_-1337989537
12/04/04 12:22:14 ERROR hbase.MiniHBaseCluster: Error starting cluster
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:157)
	at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:198)
	at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:148)
	at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:185)
	at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:523)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:503)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:451)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:438)
	at edu.ucsc.srl.persia.PersiaAdminTest.setUpBeforeClass(PersiaAdminTest.java:22)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.type.JavaType.isFullyTyped()Z
	at org.codehaus.jackson.map.type.ArrayType.<init>(ArrayType.java:36)
	at org.codehaus.jackson.map.type.ArrayType.construct(ArrayType.java:48)
	at org.codehaus.jackson.map.type.ArrayType.addCommonTypes(ArrayType.java:78)
	at org.codehaus.jackson.map.type.TypeFactory.<init>(TypeFactory.java:43)
	at org.codehaus.jackson.map.type.TypeFactory.<clinit>(TypeFactory.java:15)
	at org.codehaus.jackson.map.ObjectMapper.<clinit>(ObjectMapper.java:42)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.<clinit>(WritableRpcEngine.java:258)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:245)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:55)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:401)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:390)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:244)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
	at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:154)
	... 31 more
12/04/04 12:22:14 INFO hbase.HBaseTestingUtility: Shutting down minicluster
12/04/04 12:22:14 INFO server.NIOServerCnxnFactory: NIOServerCnxn factory exited run method
12/04/04 12:22:14 INFO server.ZooKeeperServer: shutting down
12/04/04 12:22:14 INFO server.SessionTrackerImpl: Shutting down
12/04/04 12:22:14 INFO server.PrepRequestProcessor: Shutting down
12/04/04 12:22:14 INFO server.SyncRequestProcessor: Shutting down
12/04/04 12:22:14 INFO server.PrepRequestProcessor: PrepRequestProcessor exited loop!
12/04/04 12:22:14 INFO server.SyncRequestProcessor: SyncRequestProcessor exited!
12/04/04 12:22:14 INFO server.FinalRequestProcessor: shutdown of request processor complete
12/04/04 12:22:14 INFO zookeeper.MiniZooKeeperCluster: Shutdown MiniZK cluster with all ZK servers
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
12/04/04 12:22:14 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 12:22:14 INFO ipc.Server: Stopping server on 57074
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 0 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 2 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: Stopping IPC Server listener on 57074
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 1 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 12:22:14 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:14 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
12/04/04 12:22:14 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:46482, storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944, ipcPort=57074):DataXceiveServer:java.nio.channels.AsynchronousCloseException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:170)
	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:102)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
	at java.lang.Thread.run(Thread.java:679)

12/04/04 12:22:14 INFO datanode.DataNode: Exiting DataXceiveServer
12/04/04 12:22:14 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
12/04/04 12:22:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 12:22:15 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:46482, storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944, ipcPort=57074):Finishing DataNode in: FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2/current'}
12/04/04 12:22:15 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
	at java.lang.Thread.run(Thread.java:679)
12/04/04 12:22:15 INFO ipc.Server: Stopping server on 57074
12/04/04 12:22:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 12:22:15 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
12/04/04 12:22:15 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
12/04/04 12:22:15 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1046794048
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1046794048
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:350)
	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:576)
	at edu.ucsc.srl.persia.PersiaAdminTest.tearDownAfterClass(PersiaAdminTest.java:27)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
12/04/04 12:22:15 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
12/04/04 12:22:15 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 12:22:15 INFO namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
	at java.lang.Thread.run(Thread.java:679)
12/04/04 12:22:15 INFO namenode.FSNamesystem: Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 4 SyncTimes(ms): 96 44 
12/04/04 12:22:15 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
12/04/04 12:22:15 INFO ipc.Server: Stopping server on 40677
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 0 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 3 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 1 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 4 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 2 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 8 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 9 on 40677: exiting
12/04/04 12:22:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:15 INFO ipc.Server: Stopping IPC Server listener on 40677
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 6 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 5 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 7 on 40677: exiting
12/04/04 12:22:15 INFO hbase.HBaseTestingUtility: Minicluster is down


Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Stack <st...@duboce.net>.
On Wed, Apr 4, 2012 at 1:53 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
>
> On Apr 4, 2012, at 1:51 PM, Stack wrote:
>
>> On Wed, Apr 4, 2012 at 1:49 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
>>> Could this be a sore point? Perhaps Oracle might behave better?
>
> I guess not. I'm using the same JVM to run the 0.92.1 release tar ball and don't get any of these issues when running the tests. Sigh.
>


If you don't see complaints like the below:

Caused by: java.lang.ClassNotFoundException: [Lbyte;
       at java.lang.Class.forName0(Native Method)
       at java.lang.Class.forName(Class.java:186)
       at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:344)

when you run the hbase unit tests, then something is odd about your environment.

Google your exception?  You might get a clue (funny how someone else
has almost always had your problem first).
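For what it's worth, the `ClassNotFoundException: [Lbyte;` appears to come from the JMX open-type converter building a class name of the form `[L<component>;` for an array type. That pattern is only valid for object components; primitive arrays have JVM binary names like `[B` (byte[]) and `[[B` (byte[][]), so a lookup of `[Lbyte;` can never succeed. A standalone snippet (illustrative only, not from HBase) showing the difference:

```java
// Primitive array classes use JVM binary names like "[B" (byte[]) and
// "[[B" (byte[][]). The name "[Lbyte;" is malformed -- "[L...;" is the
// form for arrays of *object* types -- so Class.forName rejects it,
// which matches the stack trace quoted above.
public class ArrayClassNames {
    public static void main(String[] args) throws Exception {
        // Valid binary names resolve fine:
        System.out.println(Class.forName("[B"));          // class [B
        System.out.println(byte[][].class.getName());     // [[B

        // The malformed name always throws:
        try {
            Class.forName("[Lbyte;");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException: " + e.getMessage());
        }
    }
}
```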

St.Ack

Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Noah Watkins <ja...@cs.ucsc.edu>.
On Apr 4, 2012, at 1:51 PM, Stack wrote:

> On Wed, Apr 4, 2012 at 1:49 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
>> Could this be a sore point? Perhaps Oracle might behave better?

I guess not. I'm using the same JVM to run the 0.92.1 release tar ball and don't get any of these issues when running the tests. Sigh.


Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Stack <st...@duboce.net>.
On Wed, Apr 4, 2012 at 1:49 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
> Could this be a sore point? Perhaps Oracle might behave better?

Try it.
St.Ack

Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Noah Watkins <ja...@cs.ucsc.edu>.
On Apr 4, 2012, at 1:43 PM, Stack wrote:

> On Wed, Apr 4, 2012 at 1:22 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
>> Doh! Not sure how I missed that. Thank you. My tests are now reported as running successfully, but there are still many exceptions that look concerning. I'd really like to resolve these to keep the logs clean, but at the very least, knowing that they are harmless would be acceptable :) -- Any thoughts? I saw something similar in an old JIRA about setting umask to 0022, which looked related, but I've never dealt with MXBean errors before.
>> 
> 
> Is this trunk?  Looks like the exceptions don't kill your test but

I don't think so? These tests run without a live HBase installation, so all the dependencies come from the default Maven repository. Does that answer your question?

> please file an issue to clean them up.  What JVM are you on?

nwatkins@kyoto:~/projects$ java -version
java version "1.6.0_23"
OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.2)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

Could this be a sore point? Perhaps Oracle might behave better?

Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Stack <st...@duboce.net>.
On Wed, Apr 4, 2012 at 1:22 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
> Doh! Not sure how I missed that. Thank you. My tests are now reported as running successfully, but there are still many exceptions that look concerning. I'd really like to resolve these to keep the logs clean, but at the very least, knowing that they are harmless would be acceptable :) -- Any thoughts? I saw something similar in an old JIRA about setting umask to 0022, which looked related, but I've never dealt with MXBean errors before.
>

Is this trunk?  Looks like the exceptions don't kill your test but
please file an issue to clean them up.  What JVM are you on?

St.Ack

Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Noah Watkins <ja...@cs.ucsc.edu>.
On Apr 4, 2012, at 12:51 PM, Stack wrote:

> Caused by: java.lang.NoSuchMethodError:
> org.codehaus.jackson.type.JavaType.isFullyTyped()

Doh! Not sure how I missed that. Thank you. My tests are now reported as running successfully, but there are still many exceptions that look concerning. I'd really like to resolve these to keep the logs clean, but at the very least, knowing that they are harmless would be acceptable :) -- Any thoughts? I saw something similar in an old JIRA about setting umask to 0022, which looked related, but I've never dealt with MXBean errors before.
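
For anyone hitting the same `NoSuchMethodError` on `org.codehaus.jackson`: it usually means two incompatible Jackson jars ended up on the test classpath, pulled in transitively by hadoop-core and hbase. One possible fix is a pom.xml exclusion; this is only a sketch, and the artifactIds to exclude should be verified against what `mvn dependency:tree` actually shows in your build:

```xml
<!-- Sketch: exclude the Jackson jars that hadoop-core drags in, so the
     copy hbase depends on is the only one on the classpath. Verify the
     exact artifactIds with: mvn dependency:tree -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-asl</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```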

Thanks,
Noah

12/04/04 13:05:10 INFO master.AssignmentManager: Clean cluster startup. Assigning userregions
javax.management.NotCompliantMBeanException: org.apache.hadoop.hbase.master.MXBean: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
	at com.sun.jmx.mbeanserver.Introspector.throwException(Introspector.java:419)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:118)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.analyzer(MBeanAnalyzer.java:104)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.getAnalyzer(MXBeanIntrospector.java:71)
	at com.sun.jmx.mbeanserver.MBeanIntrospector.getPerInterface(MBeanIntrospector.java:181)
	at com.sun.jmx.mbeanserver.MBeanSupport.<init>(MBeanSupport.java:136)
	at com.sun.jmx.mbeanserver.MXBeanSupport.<init>(MXBeanSupport.java:66)
	at com.sun.jmx.mbeanserver.Introspector.makeDynamicMBean(Introspector.java:184)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:936)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:516)
	at org.apache.hadoop.metrics.util.MBeanUtil.registerMBean(MBeanUtil.java:58)
	at org.apache.hadoop.hbase.master.HMaster.registerMBean(HMaster.java:1646)
	at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:527)
	at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:336)
	at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.IllegalArgumentException: Method org.apache.hadoop.hbase.master.MXBean.getRegionServers has parameter or return type that cannot be translated into an open type
	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:45)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:81)
	at com.sun.jmx.mbeanserver.MXBeanIntrospector.mFrom(MXBeanIntrospector.java:51)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.initMaps(MBeanAnalyzer.java:135)
	at com.sun.jmx.mbeanserver.MBeanAnalyzer.<init>(MBeanAnalyzer.java:116)
	... 14 more
Caused by: javax.management.openmbean.OpenDataException: Cannot obtain array class
	at com.sun.jmx.mbeanserver.OpenConverter.openDataException(OpenConverter.java:1389)
	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:346)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:295)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:376)
	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeCompositeConverter(OpenConverter.java:482)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:309)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.OpenConverter.makeTabularConverter(OpenConverter.java:377)
	at com.sun.jmx.mbeanserver.OpenConverter.makeParameterizedConverter(OpenConverter.java:417)
	at com.sun.jmx.mbeanserver.OpenConverter.makeConverter(OpenConverter.java:312)
	at com.sun.jmx.mbeanserver.OpenConverter.toConverter(OpenConverter.java:277)
	at com.sun.jmx.mbeanserver.ConvertingMethod.<init>(ConvertingMethod.java:197)
	at com.sun.jmx.mbeanserver.ConvertingMethod.from(ConvertingMethod.java:40)
	... 18 more
Caused by: java.lang.ClassNotFoundException: [Lbyte;
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:186)
	at com.sun.jmx.mbeanserver.OpenConverter.makeArrayOrCollectionConverter(OpenConverter.java:344)
	... 33 more
12/04/04 13:05:10 INFO master.HMaster: Registered HMaster MXBean
12/04/04 13:05:10 INFO master.HMaster: Master has completed initialization
12/04/04 13:05:11 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x1367ef5d1320002
12/04/04 13:05:11 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320002
12/04/04 13:05:11 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320002 closed
12/04/04 13:05:11 INFO hbase.HBaseTestingUtility: Minicluster is up
12/04/04 13:05:11 INFO zookeeper.ClientCnxn: EventThread shut down
12/04/04 13:05:11 INFO hbase.HBaseTestingUtility: Shutting down minicluster
12/04/04 13:05:11 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53724 which had sessionid 0x1367ef5d1320002
12/04/04 13:05:11 INFO master.HMaster: Cluster shutdown requested
12/04/04 13:05:11 INFO regionserver.HRegionServer: STOPPED: Shutdown requested
12/04/04 13:05:11 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
12/04/04 13:05:11 INFO master.HMaster$2: localhost,35192,1333569901811-BalancerChore exiting
12/04/04 13:05:11 INFO master.CatalogJanitor: localhost,35192,1333569901811-CatalogJanitor exiting
12/04/04 13:05:12 INFO master.SplitLogManager$TimeoutMonitor: localhost,35192,1333569901811.splitLogManagerTimeoutMonitor exiting
12/04/04 13:05:12 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping server on 60207
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 1 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 0 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 5 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 2 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 6 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 7 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 9 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 4 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server listener on 60207
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 2 on 60207: exiting
12/04/04 13:05:12 INFO regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 0 on 60207: exiting
12/04/04 13:05:12 INFO regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
12/04/04 13:05:12 INFO regionserver.SplitLogWorker: SplitLogWorker localhost,60207,1333569902087 exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 3 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 8 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 7 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 1 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 9 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 3 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/04/04 13:05:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 5 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: IPC Server handler 4 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 6 on 60207: exiting
12/04/04 13:05:12 INFO ipc.HBaseServer: PRI IPC Server handler 8 on 60207: exiting
12/04/04 13:05:12 INFO regionserver.HRegionServer: Stopping infoServer
12/04/04 13:05:12 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60030
12/04/04 13:05:12 INFO regionserver.HRegionServer$CompactionChecker: RegionServer:0;localhost,60207,1333569902087.compactionChecker exiting
12/04/04 13:05:12 INFO regionserver.LogRoller: LogRoller exiting.
12/04/04 13:05:12 INFO regionserver.MemStoreFlusher: RegionServer:0;localhost,60207,1333569902087.cacheFlusher exiting
12/04/04 13:05:12 WARN regionserver.HRegionServer: Received close for region we are already opening or closing; 70236052
12/04/04 13:05:12 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087
12/04/04 13:05:12 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x1367ef5d1320004
12/04/04 13:05:12 INFO regionserver.HRegion: Closed .META.,,1.1028785192
12/04/04 13:05:12 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320004
12/04/04 13:05:12 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320004 closed
12/04/04 13:05:12 INFO zookeeper.ClientCnxn: EventThread shut down
12/04/04 13:05:12 INFO regionserver.HRegionServer: Waiting on 1 regions to close
12/04/04 13:05:12 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53728 which had sessionid 0x1367ef5d1320004
12/04/04 13:05:12 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c. blk_-2312968814331638487_1010
12/04/04 13:05:12 INFO datanode.DataNode: Receiving block blk_-2312968814331638487_1010 src: /127.0.0.1:46171 dest: /127.0.0.1:56956
12/04/04 13:05:12 INFO DataNode.clienttrace: src: /127.0.0.1:46171, dest: /127.0.0.1:56956, bytes: 721, op: HDFS_WRITE, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 709184
12/04/04 13:05:12 INFO datanode.DataNode: PacketResponder 0 for block blk_-2312968814331638487_1010 terminating
12/04/04 13:05:12 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_-2312968814331638487_1010 size 721
12/04/04 13:05:13 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c from client DFSClient_hb_rs_localhost,60207,1333569902087
12/04/04 13:05:13 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/-ROOT-/70236052/.tmp/bb4ce7aee8ac4d1e8320b984d6ce5b6c is closed by DFSClient_hb_rs_localhost,60207,1333569902087
12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46172, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 485712
12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46173, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 426832
12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46174, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 422864
12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:56956, dest: /127.0.0.1:46175, bytes: 729, op: HDFS_READ, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_-2312968814331638487_1010, duration: 443916
12/04/04 13:05:13 INFO regionserver.Store: Added hdfs://localhost:44291/user/nwatkins/hbase/-ROOT-/70236052/info/bb4ce7aee8ac4d1e8320b984d6ce5b6c, entries=2, sequenceid=5, memsize=464.0, filesize=721.0
12/04/04 13:05:13 INFO regionserver.HRegion: Finished memstore flush of ~464.0/464, currentsize=0.0/0 for region -ROOT-,,0.70236052 in 533ms, sequenceid=5, compaction requested=false
12/04/04 13:05:13 INFO regionserver.HRegion: Closed -ROOT-,,0.70236052
12/04/04 13:05:13 INFO master.ServerManager: Waiting on regionserver(s) to go down localhost,60207,1333569902087
12/04/04 13:05:13 INFO master.AssignmentManager$TimeoutMonitor: localhost,35192,1333569901811.timeoutMonitor exiting
12/04/04 13:05:13 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087; all regions closed.
12/04/04 13:05:13 INFO wal.HLog: RegionServer:0;localhost,60207,1333569902087.logSyncer exiting
12/04/04 13:05:13 INFO DataNode.clienttrace: src: /127.0.0.1:46164, dest: /127.0.0.1:56956, bytes: 408, op: HDFS_WRITE, cliID: DFSClient_hb_rs_localhost,60207,1333569902087, offset: 0, srvID: DS-134885185-127.0.0.1-56956-1333569900707, blockid: blk_7181700300756881384_1009, duration: 6282407448
12/04/04 13:05:13 INFO datanode.DataNode: PacketResponder 0 for block blk_7181700300756881384_1009 terminating
12/04/04 13:05:13 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56956 is added to blk_7181700300756881384_1009 size 408
12/04/04 13:05:13 INFO hdfs.StateChange: Removing lease on  file /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635 from client DFSClient_hb_rs_localhost,60207,1333569902087
12/04/04 13:05:13 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /user/nwatkins/hbase/.logs/localhost,60207,1333569902087/localhost%2C60207%2C1333569902087.1333569906635 is closed by DFSClient_hb_rs_localhost,60207,1333569902087
12/04/04 13:05:13 INFO regionserver.Leases: RegionServer:0;localhost,60207,1333569902087 closing leases
12/04/04 13:05:13 INFO regionserver.Leases: RegionServer:0;localhost,60207,1333569902087 closed leases
12/04/04 13:05:14 INFO zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [localhost,60207,1333569902087]
12/04/04 13:05:14 INFO master.ServerManager: Cluster shutdown set; localhost,60207,1333569902087 expired; onlineServers=0
12/04/04 13:05:14 INFO master.HMaster: Cluster shutdown set; onlineServer=0
12/04/04 13:05:14 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320001
12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53712 which had sessionid 0x1367ef5d1320001
12/04/04 13:05:14 INFO zookeeper.ClientCnxn: EventThread shut down
12/04/04 13:05:14 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320001 closed
12/04/04 13:05:14 INFO regionserver.HRegionServer: stopping server localhost,60207,1333569902087; zookeeper connection closed.
12/04/04 13:05:14 INFO regionserver.HRegionServer: RegionServer:0;localhost,60207,1333569902087 exiting
12/04/04 13:05:14 INFO hbase.MiniHBaseCluster: Hook closing fs=DFS[DFSClient[clientName=DFSClient_hb_rs_localhost,60207,1333569902087, ugi=nwatkins.hfs.0]]
12/04/04 13:05:14 INFO util.JVMClusterUtil: Shutdown of 1 master(s) and 1 regionserver(s) complete
12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping server on 35192
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 6 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server listener on 35192
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 0 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 2 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 1 on 35192: exiting
12/04/04 13:05:14 INFO master.HMaster: Stopping infoServer
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 4 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 8 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 7 on 35192: exiting
12/04/04 13:05:14 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 5 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 3 on 35192: exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/04/04 13:05:14 INFO master.LogCleaner: Master:0;localhost,35192,1333569901811.oldLogCleaner exiting
12/04/04 13:05:14 INFO ipc.HBaseServer: IPC Server handler 9 on 35192: exiting
12/04/04 13:05:14 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1367ef5d1320000
12/04/04 13:05:14 INFO zookeeper.ZooKeeper: Session: 0x1367ef5d1320000 closed
12/04/04 13:05:14 INFO zookeeper.ClientCnxn: EventThread shut down
12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53711 which had sessionid 0x1367ef5d1320000
12/04/04 13:05:14 INFO master.HMaster: HMaster main thread exiting
12/04/04 13:05:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53727 which had sessionid 0x1367ef5d1320003
12/04/04 13:05:14 INFO server.NIOServerCnxnFactory: NIOServerCnxn factory exited run method
12/04/04 13:05:14 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1367ef5d1320003, likely server has closed socket, closing socket connection and attempting reconnect
12/04/04 13:05:14 INFO server.ZooKeeperServer: shutting down
12/04/04 13:05:14 INFO server.SessionTrackerImpl: Shutting down
12/04/04 13:05:14 INFO server.PrepRequestProcessor: Shutting down
12/04/04 13:05:14 INFO server.SyncRequestProcessor: Shutting down
12/04/04 13:05:14 INFO server.PrepRequestProcessor: PrepRequestProcessor exited loop!
12/04/04 13:05:14 INFO server.SyncRequestProcessor: SyncRequestProcessor exited!
12/04/04 13:05:14 INFO server.FinalRequestProcessor: shutdown of request processor complete
12/04/04 13:05:14 INFO zookeeper.MiniZooKeeperCluster: Shutdown MiniZK cluster with all ZK servers
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
12/04/04 13:05:14 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 13:05:14 INFO ipc.Server: Stopping server on 60794
12/04/04 13:05:14 INFO ipc.Server: Stopping IPC Server listener on 60794
12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 2 on 60794: exiting
12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 1 on 60794: exiting
12/04/04 13:05:14 INFO ipc.Server: IPC Server handler 0 on 60794: exiting
12/04/04 13:05:14 INFO metrics.RpcInstrumentation: shut down
12/04/04 13:05:14 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 13:05:14 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
12/04/04 13:05:14 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:56956, storageID=DS-134885185-127.0.0.1-56956-1333569900707, infoPort=53398, ipcPort=60794):DataXceiveServer:java.nio.channels.AsynchronousCloseException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:170)
	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:102)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
	at java.lang.Thread.run(Thread.java:679)

12/04/04 13:05:14 INFO datanode.DataNode: Exiting DataXceiveServer
12/04/04 13:05:14 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
12/04/04 13:05:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 13:05:15 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:56956, storageID=DS-134885185-127.0.0.1-56956-1333569900707, infoPort=53398, ipcPort=60794):Finishing DataNode in: FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/584f7b08-5e54-4e91-9208-83e5c3b02789/dfscluster_d7fab66b-079d-4947-a97f-a6148b633b38/dfs/data/data2/current'}
12/04/04 13:05:15 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
	at java.lang.Thread.run(Thread.java:679)
12/04/04 13:05:15 INFO ipc.Server: Stopping server on 60794
12/04/04 13:05:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 13:05:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 13:05:15 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
12/04/04 13:05:15 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
12/04/04 13:05:15 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2074696888
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2074696888
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
	at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:350)
	at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:576)
	at edu.ucsc.srl.persia.PersiaAdminTest.tearDownAfterClass(PersiaAdminTest.java:27)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
12/04/04 13:05:15 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
12/04/04 13:05:15 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 13:05:15 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:21818
12/04/04 13:05:15 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/04/04 13:05:15 WARN zookeeper.ClientCnxn: Session 0x1367ef5d1320003 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
12/04/04 13:05:15 INFO namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
	at java.lang.Thread.run(Thread.java:679)
12/04/04 13:05:15 INFO namenode.FSNamesystem: Number of transactions: 67 Total time for transactions(ms): 4Number of transactions batched in Syncs: 2 Number of syncs: 50 SyncTimes(ms): 461 451 
12/04/04 13:05:15 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
12/04/04 13:05:15 INFO ipc.Server: Stopping server on 44291
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 0 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 5 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 6 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 7 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 1 on 44291: exiting
12/04/04 13:05:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 2 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 4 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: Stopping IPC Server listener on 44291
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 8 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 9 on 44291: exiting
12/04/04 13:05:15 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 13:05:15 INFO ipc.Server: IPC Server handler 3 on 44291: exiting
12/04/04 13:05:15 INFO hbase.HBaseTestingUtility: Minicluster is down
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.98 sec
12/04/04 13:05:15 INFO regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-0,5,main]
12/04/04 13:05:15 INFO regionserver.HRegionServer: STOPPED: Shutdown hook
12/04/04 13:05:15 INFO regionserver.ShutdownHook: Starting fs shutdown hook thread.
12/04/04 13:05:15 INFO regionserver.ShutdownHook: Shutdown hook finished.


Re: Failing to diagnose errors using HBaseTestingUtility

Posted by Stack <st...@duboce.net>.
On Wed, Apr 4, 2012 at 12:40 PM, Noah Watkins <ja...@cs.ucsc.edu> wrote:
> I am working on a project that uses HBase and want to develop my unit tests to use the HBaseTestingUtility to test without a separate cluster, but I am running into a lot problems using it. Everything compiles fine, but there my unit tests that try to spin up a cluster blow up:
>

Check your classpath.  It looks like jackson (or one of its
supporting jars) is missing, going by this line I saw in your output:

Caused by: java.lang.NoSuchMethodError:
org.codehaus.jackson.type.JavaType.isFullyTyped()Z

Yes, the master is failing to instantiate because of the above exception.
St.Ack
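
In practice, this NoSuchMethodError usually means an older org.codehaus.jackson jar (pulled in transitively, e.g. by hadoop-core) is shadowing the version HBase expects. Running `mvn dependency:tree -Dincludes=org.codehaus.jackson` shows which versions Maven resolves; if an old one wins, declaring a newer version explicitly in the pom makes Maven prefer it. A hedged sketch of such an override follows — the 1.5.5 version is an assumption, so confirm the exact version against the HBase 0.92.1 pom before relying on it:

```xml
<!-- Hypothetical override: force a Jackson version new enough to
     provide JavaType.isFullyTyped(). The 1.5.5 version below is an
     assumed value; verify it against the HBase 0.92.1 pom. -->
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-core-asl</artifactId>
  <version>1.5.5</version>
</dependency>
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.5.5</version>
</dependency>
```

Because Maven's "nearest definition wins" mediation rule favors dependencies declared directly in the project's own pom, these entries take precedence over any Jackson version a dependency drags in transitively.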