Posted to user@hbase.apache.org by Mingtao Zhang <ma...@gmail.com> on 2014/08/02 18:51:16 UTC

HBaseTestingUtility startMiniCluster throws an exception

Hi,

I am really stuck on this. I'm including the stack trace, my Java test file, my
hbase-site file, and my pom file below.

I have zero knowledge of Hadoop and was expecting it to be transparent to my
integration test :(.
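For context, this is the pattern my test follows — a minimal sketch of the usual HBaseTestingUtility mini-cluster lifecycle, not my actual test class (the class name, table name, and column family here are placeholders I made up for illustration):

```java
// Sketch of the mini-cluster lifecycle (HBase 0.94-era API).
// "test_table" and "cf" are placeholder names, not from the real test.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Spins up an in-process HDFS + ZooKeeper + HBase cluster.
        // This is the call that fails for me.
        util.startMiniCluster();
        try {
            // Once the cluster is up, create a table and use it.
            HTable table = util.createTable(Bytes.toBytes("test_table"),
                                            Bytes.toBytes("cf"));
            table.close();
        } finally {
            util.shutdownMiniCluster();
        }
    }
}
```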

Thanks in advance!

Best Regards,
Mingtao

The stack trace:

09:42:33.191 [main] DEBUG org.mortbay.log - TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/x.tld
09:42:33.194 [main] DEBUG org.mortbay.log - TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/c-1_0-rt.tld
09:42:33.194 [main] DEBUG org.mortbay.log - resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN, http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
09:42:33.194 [main] DEBUG org.mortbay.log - Can't exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
09:42:33.195 [main] DEBUG org.mortbay.log - Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd --> jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
09:42:33.200 [main] DEBUG org.mortbay.log - TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/fmt.tld
09:42:33.204 [main] DEBUG org.mortbay.log - Container Server@9f51be6 + org.mortbay.jetty.servlet.HashSessionIdManager@445e0565 as sessionIdManager
09:42:33.204 [main] DEBUG org.mortbay.log - Init SecureRandom.
09:42:33.204 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.HashSessionIdManager@445e0565
09:42:33.205 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.HashSessionManager@738f651f
09:42:33.206 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety, krb5Filter=krb5Filter}
09:42:33.206 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
09:42:33.206 [main] DEBUG org.mortbay.log - servletFilterMap=null
09:42:33.206 [main] DEBUG org.mortbay.log - servletPathMap={*.XSP=jsp, *.jsp=jsp, /getimage=getimage, /cancelDelegationToken=cancelDelegationToken, *.JSPF=jsp, *.jspx=jsp, /listPaths/*=listPaths, /conf=conf, *.xsp=jsp, /=default, /fsck=fsck, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /data/*=data, /contentSummary/*=contentSummary, /renewDelegationToken=renewDelegationToken, /getDelegationToken=getDelegationToken, /fileChecksum/*=checksum, *.JSP=jsp, /jmx=jmx}
09:42:33.206 [main] DEBUG org.mortbay.log - servletNameMap={getDelegationToken=getDelegationToken, jsp=jsp, jmx=jmx, data=data, checksum=checksum, conf=conf, stacks=stacks, fsck=fsck, cancelDelegationToken=cancelDelegationToken, listPaths=listPaths, default=default, logLevel=logLevel, contentSummary=contentSummary, getimage=getimage, renewDelegationToken=renewDelegationToken}
09:42:33.206 [main] DEBUG org.mortbay.log - starting ServletHandler@3fd5e2ae
09:42:33.206 [main] DEBUG org.mortbay.log - started ServletHandler@3fd5e2ae
09:42:33.206 [main] DEBUG org.mortbay.log - starting SecurityHandler@51f35aea
09:42:33.207 [main] DEBUG org.mortbay.log - started SecurityHandler@51f35aea
09:42:33.207 [main] DEBUG org.mortbay.log - starting SessionHandler@73152e3f
09:42:33.207 [main] DEBUG org.mortbay.log - started SessionHandler@73152e3f
09:42:33.207 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.webapp.WebAppContext@7cbc11d{/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
09:42:33.207 [main] DEBUG org.mortbay.log - starting ErrorPageErrorHandler@4b38117e
09:42:33.207 [main] DEBUG org.mortbay.log - started ErrorPageErrorHandler@4b38117e
09:42:33.207 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter from sun.misc.Launcher$AppClassLoader@23137792
09:42:33.207 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter
09:42:33.208 [main] DEBUG org.mortbay.log - started krb5Filter
09:42:33.208 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.http.HttpServer$QuotingInputFilter from sun.misc.Launcher$AppClassLoader@23137792
09:42:33.208 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
09:42:33.210 [main] DEBUG org.mortbay.log - started safety
09:42:33.211 [main] DEBUG org.mortbay.log - started conf
09:42:33.211 [main] DEBUG org.mortbay.log - started cancelDelegationToken
09:42:33.211 [main] DEBUG org.mortbay.log - started contentSummary
09:42:33.211 [main] DEBUG org.mortbay.log - started checksum
09:42:33.211 [main] DEBUG org.mortbay.log - started data
09:42:33.211 [main] DEBUG org.mortbay.log - started fsck
09:42:33.211 [main] DEBUG org.mortbay.log - started getDelegationToken
09:42:33.212 [main] DEBUG org.mortbay.log - started getimage
09:42:33.212 [main] DEBUG org.mortbay.log - started listPaths
09:42:33.212 [main] DEBUG org.mortbay.log - started renewDelegationToken
09:42:33.212 [main] DEBUG org.mortbay.log - started stacks
09:42:33.212 [main] DEBUG org.mortbay.log - started jmx
09:42:33.212 [main] DEBUG org.mortbay.log - started logLevel
09:42:33.212 [main] DEBUG org.mortbay.log - loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@23137792
09:42:33.212 [main] DEBUG org.mortbay.log - Holding class org.apache.jasper.servlet.JspServlet
09:42:33.250 [main] DEBUG o.a.j.compiler.JspRuntimeContext - PWC5965: Parent class loader is: ContextLoader@WepAppsContext([]) / sun.misc.Launcher$AppClassLoader@23137792
09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5964: Scratch dir for the JSP engine is: /tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/jsp
09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5966: IMPORTANT: Do not modify the generated servlets
09:42:33.252 [main] DEBUG org.mortbay.log - started jsp
09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet
09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet from sun.misc.Launcher$AppClassLoader@23137792
09:42:33.252 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@576f8821
09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.ResourceCache@5b525b5f
09:42:33.258 [main] DEBUG org.mortbay.log - resource base = file:/tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/webapp/
09:42:33.258 [main] DEBUG org.mortbay.log - started default
09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.webapp.WebAppContext@7cbc11d{/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
09:42:33.258 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@4e048dc6{/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir} + ErrorHandler@7bece8cf as errorHandler
09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
09:42:33.259 [main] DEBUG org.mortbay.log - servletFilterMap=null
09:42:33.259 [main] DEBUG org.mortbay.log - servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
09:42:33.259 [main] DEBUG org.mortbay.log - servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
09:42:33.259 [main] DEBUG org.mortbay.log - starting ServletHandler@cf7ea2e
09:42:33.259 [main] DEBUG org.mortbay.log - started ServletHandler@cf7ea2e
09:42:33.259 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@4e048dc6{/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
09:42:33.259 [main] DEBUG org.mortbay.log - starting ErrorHandler@7bece8cf
09:42:33.259 [main] DEBUG org.mortbay.log - started ErrorHandler@7bece8cf
09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
09:42:33.259 [main] DEBUG org.mortbay.log - started safety
09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.AdminAuthorizedServlet
09:42:33.259 [main] DEBUG org.mortbay.log - started org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
09:42:33.259 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@4e048dc6{/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
09:42:33.259 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@6e4f7806{/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static} + ErrorHandler@7ea8ad98 as errorHandler
09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
09:42:33.260 [main] DEBUG org.mortbay.log - servletFilterMap=null
09:42:33.260 [main] DEBUG org.mortbay.log - servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
09:42:33.260 [main] DEBUG org.mortbay.log - servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
09:42:33.260 [main] DEBUG org.mortbay.log - starting ServletHandler@23510a7e
09:42:33.260 [main] DEBUG org.mortbay.log - started ServletHandler@23510a7e
09:42:33.260 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@6e4f7806{/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
09:42:33.260 [main] DEBUG org.mortbay.log - starting ErrorHandler@7ea8ad98
09:42:33.260 [main] DEBUG org.mortbay.log - started ErrorHandler@7ea8ad98
09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
09:42:33.260 [main] DEBUG org.mortbay.log - started safety
09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet-1788226358
09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@6e4f7806{/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
09:42:33.260 [main] DEBUG org.mortbay.log - starting ContextHandlerCollection@5a4950dd
09:42:33.260 [main] DEBUG org.mortbay.log - started ContextHandlerCollection@5a4950dd
09:42:33.260 [main] DEBUG org.mortbay.log - starting Server@9f51be6
09:42:33.264 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.nio.SelectChannelConnector$1@501a7f06
09:42:33.272 [main] INFO org.mortbay.log - Started SelectChannelConnector@localhost.localdomain:1543
09:42:33.273 [main] DEBUG org.mortbay.log - started SelectChannelConnector@localhost.localdomain:1543
09:42:33.273 [main] DEBUG org.mortbay.log - started Server@9f51be6
09:42:33.273 [main] INFO o.a.h.hdfs.server.namenode.NameNode - Web-server up at: localhost.localdomain:1543
09:42:33.274 [IPC Server listener on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server listener on 41118: starting
09:42:33.274 [IPC Server Responder] INFO org.apache.hadoop.ipc.Server - IPC Server Responder: starting
09:42:33.275 [IPC Server handler 0 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: starting
09:42:33.276 [IPC Server handler 1 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: starting
09:42:33.277 [IPC Server handler 3 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: starting
09:42:33.277 [IPC Server handler 4 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: starting
09:42:33.277 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: starting
09:42:33.281 [IPC Server handler 5 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 5 on 41118: starting
09:42:33.281 [IPC Server handler 6 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 6 on 41118: starting
09:42:33.281 [IPC Server handler 7 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 7 on 41118: starting
09:42:33.281 [IPC Server handler 8 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 8 on 41118: starting
09:42:33.283 [IPC Server handler 9 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 9 on 41118: starting
09:42:33.287 [main] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://slc05muw.us.**.com:41118
09:42:33.321 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
09:42:33.328 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is60000ms.
09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Use SIMPLE authentication for protocol ClientProtocol
09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to slc05muw.us.**.com/10.241.3.35:41118
09:42:33.337 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #0
09:42:33.337 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: starting, having connections 1
09:42:33.337 [IPC Server listener on 41118] DEBUG org.apache.hadoop.ipc.Server - Server connection from 10.241.3.35:24701; # active connections: 1; # queued calls: 0
09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - Successfully authorized org.apache.hadoop.hdfs.protocol.ClientProtocol-mingtzha
09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #0
09:42:33.338 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: has #0 from 10.241.3.35:24701
09:42:33.339 [IPC Server handler 0 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.339 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 1 procesingTime= 0
09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701
09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701 Wrote 22 bytes.
09:42:33.340 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #0
09:42:33.341 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 17
09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
09:42:33.343 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
09:42:33.343 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #1
09:42:33.344 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #1
09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: has #1 from 10.241.3.35:24701
09:42:33.344 [IPC Server handler 1 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 0 procesingTime= 0
09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701
09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701 Wrote 22 bytes.
09:42:33.344 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #1
09:42:33.344 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 1
09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
09:42:33.345 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #2
09:42:33.345 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #2
09:42:33.345 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: has #2 from 10.241.3.35:24701
09:42:33.345 [IPC Server handler 3 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning fetched groups for 'mingtzha'
09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getDatanodeReport queueTime= 0 procesingTime= 11
09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701
09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
09:42:33.357 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getDatanodeReport 12
Cluster is active
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:host.name=slc05muw.us.**.com
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.version=1.7.0_45
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=** Corporation
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server
environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzha/.
m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtworks/p
aranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpclie
nt/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activatio
n-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/rep
ository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.version=2.6.39-300.20.1.el6uek.x86_64
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.name=mingtzha
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/mingtzha
09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
09:42:33.380 [main] DEBUG o.a.z.s.persistence.FileTxnSnapLog - Opening datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0 snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
09:42:33.394 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2 snapdir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
09:42:33.400 [main] INFO  o.a.z.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:51126
09:42:33.405 [main] INFO  o.a.z.s.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO  o.a.z.server.NIOServerCnxnFactory - Accepted socket connection from /10.241.3.35:44625
09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO  o.a.zookeeper.server.NIOServerCnxn - Processing stat command from /10.241.3.35:44625
09:42:33.442 [Thread-25] INFO  o.a.zookeeper.server.NIOServerCnxn - Stat command output
09:42:33.442 [Thread-25] INFO  o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /10.241.3.35:44625 (no session established for client)
09:42:33.442 [main] INFO  o.a.h.h.z.MiniZooKeeperCluster - Started MiniZK Cluster and connect 1 ZK server on client port: 51126
09:42:33.443 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase: masked=rwxr-xr-x
09:42:33.443 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #3
09:42:33.444 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #3
09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: has #3 from 10.241.3.35:24701
09:42:33.445 [IPC Server handler 4 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.mkdirs: /user/mingtzha/hbase
09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* mkdirs: /user/mingtzha/hbase
09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user
09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha
09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha/hbase
09:42:33.448 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
09:42:33.452 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
09:42:33.455 [IPC Server handler 4 on 41118] INFO  o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null    perm=mingtzha:supergroup:rwxr-xr-x
09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: mkdirs queueTime= 0 procesingTime= 10
09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701
09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
09:42:33.455 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: mkdirs 12
09:42:33.461 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
09:42:33.468 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
09:42:33.469 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #4
09:42:33.469 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #4
09:42:33.470 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: has #4 from 10.241.3.35:24701
09:42:33.470 [IPC Server handler 2 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
09:42:33.479 [IPC Server handler 2 on 41118] WARN  org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
09:42:33.480 [IPC Server handler 2 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
09:42:33.480 [IPC Server handler 2 on 41118] INFO  org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701
09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
09:42:33.482 [main] WARN  org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

09:42:33.483 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118
from mingtzha sending #5
09:42:33.483 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #5
09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 5 on 41118:
has #5 from 10.241.3.35:24701
09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
src=/user/mingtzha/hbase/hbase.version, recursive=false
09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.delete:
/user/mingtzha/hbase/hbase.version
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.unprotectedDelete: failed to remove
/user/mingtzha/hbase/hbase.version because it does not exist
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 0
procesingTime= 1
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #5 from 10.241.3.35:24701
09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #5 from 10.241.3.35:24701 Wrote 18 bytes.
09:42:33.484 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #5
09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
- Call: delete 1
09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
chunkSize=516, chunksPerPacket=127, packetSize=65557
09:42:33.485 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118
from mingtzha sending #6
09:42:33.485 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #6
09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on 41118:
has #6 from 10.241.3.35:24701
09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
/user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1
at 10.241.3.35
09:42:33.486 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
src=/user/mingtzha/hbase/hbase.version,
holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
createParent=true, replication=0, overwrite=true, append=false
09:42:33.486 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
09:42:33.486 [IPC Server handler 6 on 41118] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: failed to
create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
09:42:33.486 [IPC Server handler 6 on 41118] [1;31mERROR [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m -
PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to
create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
09:42:33.486 [IPC Server handler 6 on 41118] [34mINFO [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on 41118,
call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
10.241.3.35:24701: error: java.io.IOException: failed to create file
/user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
java.io.IOException: failed to create file
/user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
09:42:33.487 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #6 from 10.241.3.35:24701
09:42:33.487 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #6 from 10.241.3.35:24701 Wrote 1285 bytes.
09:42:33.487 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #6
09:42:33.487 [main] [31mWARN [0;39m
[1;35morg.apache.hadoop.hbase.util.FSUtils [0;39m - Unable to create
version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase,
retrying: java.io.IOException: failed to create file
/user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

09:42:33.487 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118
from mingtzha sending #7
09:42:33.488 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #7
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on 41118:
has #7 from 10.241.3.35:24701
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
src=/user/mingtzha/hbase/hbase.version, recursive=false
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.delete:
/user/mingtzha/hbase/hbase.version
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.unprotectedDelete: failed to remove
/user/mingtzha/hbase/hbase.version because it does not exist
09:42:33.488 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 0
procesingTime= 0
09:42:33.489 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #7 from 10.241.3.35:24701
09:42:33.489 [IPC Server handler 7 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #7 from 10.241.3.35:24701 Wrote 18 bytes.
09:42:33.489 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #7
09:42:33.489 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
- Call: delete 2
09:42:33.489 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
09:42:33.489 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
chunkSize=516, chunksPerPacket=127, packetSize=65557
09:42:33.489 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118
from mingtzha sending #8
09:42:33.489 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #8
09:42:33.490 [IPC Server handler 8 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 8 on 41118:
has #8 from 10.241.3.35:24701
09:42:33.490 [IPC Server handler 8 on 41118] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.490 [IPC Server handler 8 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
/user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1
at 10.241.3.35
09:42:33.490 [IPC Server handler 8 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
src=/user/mingtzha/hbase/hbase.version,
holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
createParent=true, replication=0, overwrite=true, append=false
09:42:33.490 [IPC Server handler 8 on 41118] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
09:42:33.491 [IPC Server handler 8 on 41118] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: failed to
create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
09:42:33.491 [IPC Server handler 8 on 41118] [1;31mERROR [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m -
PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to
create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
09:42:33.491 [IPC Server handler 8 on 41118] [34mINFO [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 8 on 41118,
call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
10.241.3.35:24701: error: java.io.IOException: failed to create file
/user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
java.io.IOException: failed to create file
/user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
09:42:33.492 [IPC Server handler 8 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #8 from 10.241.3.35:24701
09:42:33.492 [IPC Server handler 8 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #8 from 10.241.3.35:24701 Wrote 1285 bytes.
09:42:33.492 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #8
09:42:33.492 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

09:42:33.492 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #9
09:42:33.493 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #9
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 9 on 41118: has #9 from 10.241.3.35:24701
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: failed to remove /user/mingtzha/hbase/hbase.version because it does not exist
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 0 procesingTime= 0
09:42:33.493 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #9 from 10.241.3.35:24701
09:42:33.494 [IPC Server handler 9 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #9 from 10.241.3.35:24701 Wrote 18 bytes.
09:42:33.494 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #9
09:42:33.494 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 2
09:42:33.494 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
09:42:33.494 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
09:42:33.494 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #10
09:42:33.494 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #10
09:42:33.495 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: has #10 from 10.241.3.35:24701
09:42:33.495 [IPC Server handler 0 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
09:42:33.495 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
09:42:33.495 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
09:42:33.495 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
09:42:33.495 [IPC Server handler 0 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
09:42:33.495 [IPC Server handler 0 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
09:42:33.496 [IPC Server handler 0 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
09:42:33.496 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #10 from 10.241.3.35:24701
09:42:33.496 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #10 from 10.241.3.35:24701 Wrote 1285 bytes.
09:42:33.497 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #10
09:42:33.497 [main] INFO test -  > Finished HBaseTestSample.setup
09:42:33.506 [main] INFO test -  > Started HBaseTestSample.testInsert
09:42:33.506 [main] INFO test -  > Finished HBaseTestSample.testInsert
FAILED CONFIGURATION: @BeforeMethod setup
org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to
create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
Requested replication 0 is less than the required minimum 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.create(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
    at
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
    at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
    at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
    at
org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
    at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
    at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
    at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
    at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
    at
com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
    at
org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
    at
org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
    at
org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
    at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
    at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
    at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
    at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
    at
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
    at org.testng.TestRunner.privateRun(TestRunner.java:767)
    at org.testng.TestRunner.run(TestRunner.java:617)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
    at org.testng.SuiteRunner.run(SuiteRunner.java:240)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
    at org.testng.TestNG.run(TestNG.java:1057)
    at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
    at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
    at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)

SKIPPED CONFIGURATION: @AfterMethod destroy
SKIPPED: testInsert

===============================================
    Default test
    Tests run: 1, Failures: 0, Skips: 1
    Configuration Failures: 1, Skips: 1
===============================================

09:42:33.535 [main] INFO test - Finished Suite [Default suite]

===============================================
Default suite
Total tests run: 1, Failures: 0, Skips: 1
Configuration Failures: 1, Skips: 1
===============================================

[TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
[TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
[TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
[TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429: 4
ms
[TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa: 8 ms
[TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706: 3
ms
09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Starting clear of FileSystem cache with 1 elements.
09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.ipc.Client - Stopping client
09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining connections 0
09:42:33.589 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - IPC Server listener on 41118: disconnecting client 10.241.3.35:24701. Number of active connections: 1
09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Removing filesystem for hdfs://slc05muw.us.**.com:41118
09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Done clearing cache

The java code:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class HBaseTestSample {

    private static HBaseTestingUtility utility;
    byte[] CF = "CF".getBytes();
    byte[] QUALIFIER = "CQ-1".getBytes();

    @BeforeMethod
    public void setup() throws Exception {
        Configuration hbaseConf = HBaseConfiguration.create();

        utility = new HBaseTestingUtility(hbaseConf);

        // Derive the datanode data dir permissions from the current umask so
        // the mini DFS cluster can start under a restrictive umask.
        Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
        BufferedReader br = new BufferedReader(new InputStreamReader(
                process.getInputStream()));
        int rc = process.waitFor();
        if (rc == 0) {
            String umask = br.readLine();

            int umaskBits = Integer.parseInt(umask, 8);
            int permBits = 0777 & ~umaskBits;
            String perms = Integer.toString(permBits, 8);

            utility.getConfiguration().set("dfs.datanode.data.dir.perm", perms);
        }
        br.close();

        // The argument is the number of slaves (region servers / datanodes).
        utility.startMiniCluster(0);

    }

    @Test
    public void testInsert() throws Exception {
        // createTable(byte[] tableName, byte[] family): the table is named
        // "CF" here and has a single column family "CQ-1".
        HTable table = utility.createTable(CF, QUALIFIER);

        System.out.println("created table");

        // Put/Get/Delete must target the family the table was created with.
        table.put(new Put("r".getBytes()).add(QUALIFIER, "c1".getBytes(),
                "v".getBytes()));
        Result result = table.get(new Get("r".getBytes()));

        System.out.println(result.list().size());

        table.delete(new Delete("r".getBytes()));

        System.out.println("clean up");
    }

    @AfterMethod
    public void destroy() throws Exception {
        utility.shutdownMiniCluster();
        utility.cleanupTestDir();
    }
}
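
The umask arithmetic in setup() can be checked on its own, away from the cluster. A minimal standalone sketch (the class name `UmaskPerms` is hypothetical, not part of the test above): parse the octal umask string, clear those bits from 0777, and render the result back in octal.

```java
// Hypothetical standalone version of the umask-to-permissions arithmetic
// used in setup() to populate dfs.datanode.data.dir.perm.
public class UmaskPerms {

    static String permsFromUmask(String umask) {
        int umaskBits = Integer.parseInt(umask, 8); // e.g. "0022" -> octal 022
        int permBits = 0777 & ~umaskBits;           // 0777 & ~022 = 0755
        return Integer.toString(permBits, 8);       // render in octal: "755"
    }

    public static void main(String[] args) {
        System.out.println(permsFromUmask("0022")); // prints 755
        System.out.println(permsFromUmask("0077")); // prints 700
    }
}
```

So a typical umask of 0022 yields "755", which is what the datanode expects by default.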

hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>file:///scratch/mingtzha/hbase/test</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/tmp/hbase</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.ipc.warn.response.time</name>
        <value>1</value>
    </property>

    <!-- http://hbase.apache.org/book/ops.monitoring.html -->
    <!-- -1 => Disable logging by size -->
    <!-- <property> -->
    <!-- <name>hbase.ipc.warn.response.size</name> -->
    <!-- <value>-1</value> -->
    <!-- </property> -->
</configuration>
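
Note that HBaseTestingUtility starts its own HDFS and ZooKeeper and sets hbase.rootdir itself, so values written for a standalone install (a fixed rootdir, hbase.cluster.distributed=true, a hardcoded quorum) can fight the mini cluster. A sketch of a stripped-down, test-only hbase-site.xml, assuming the mini cluster is allowed to pick its own rootdir and ports:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- The mini cluster manages hbase.rootdir and the ZooKeeper quorum
         itself; a test-only site file can usually stay this small. -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
</configuration>
```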

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.**.sites.analytics.tests</groupId>
        <artifactId>integration-test</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>

    <artifactId>repository-itest</artifactId>
    <name>repository-itest</name>

    <dependencies>
        <dependency>
            <groupId>com.**.sites.analytics</groupId>
            <artifactId>test-integ</artifactId>
            <version>${project.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.**.sites.analytics.tests</groupId>
            <artifactId>itest-core</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>com.**.sites.analytics</groupId>
            <artifactId>config-dev</artifactId>
            <version>${project.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.**.sites.analytics</groupId>
            <artifactId>repository-core</artifactId>
            <version>${project.version}</version>
        </dependency>

        <dependency>
            <groupId>com.**.sites.analytics</groupId>
            <artifactId>repository-hbase</artifactId>
            <version>${project.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase</artifactId>
            <version>0.94.21</version>
            <classifier>tests</classifier>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-test</artifactId>
            <version>1.2.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</project>
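
One thing to watch in the dependency list: only the `tests` classifier of the hbase artifact is declared, so the main HBase jar has to arrive transitively (here presumably via repository-hbase). A hypothetical sketch declaring both artifacts explicitly, so the test classpath does not depend on transitive resolution:

```xml
<!-- Sketch: main HBase jar alongside the test jar, same version. -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.94.21</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.94.21</version>
    <classifier>tests</classifier>
    <scope>test</scope>
</dependency>
```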

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Oh, the 'Requested replication 0' error is gone now.

Mingtao


On Sat, Aug 2, 2014 at 10:03 AM, Mingtao Zhang <ma...@gmail.com>
wrote:

> Thank you :)!
>
> Tried it ... now getting the following, similar stack trace:
>
> Is there any way to prevent the 'hbase.version' file creation?
>
> 10:01:07.142 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_636047722725107470_1001 is added to the in-memory file system
> 10:01:07.142 [IPC Server handler 1 on 47220] INFO org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock: /user/mingtzha/hbase/hbase.version. blk_636047722725107470_1001
> 10:01:07.143 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks is persisted
> 10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 1 procesingTime= 2
> 10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #11 from 10.241.3.35:6390
> 10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #11 from 10.241.3.35:6390 Wrote 278 bytes.
> 10:01:07.144 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #11
> 10:01:07.144 [Thread-43] DEBUG org.apache.hadoop.ipc.RPC - Call: addBlock 4
> 10:01:07.145 [Thread-43] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
> 10:01:07.145 [Thread-43] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 10.241.3.35:41943
> 10:01:07.145 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException: Connection refused
> 10:01:07.146 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Abandoning blk_636047722725107470_1001
> 10:01:07.146 [Thread-43] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #12
> 10:01:07.146 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #12
> 10:01:07.146 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 47220: has #12 from 10.241.3.35:6390
> 10:01:07.146 [IPC Server handler 3 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.146 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.abandonBlock: blk_636047722725107470_1001 of /user/mingtzha/hbase/hbase.version
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_636047722725107470_1001of /user/mingtzha/hbase/hbase.version
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_636047722725107470_1001 is added to the
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_636047722725107470_1001 is removed from pendingCreates
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks is persisted
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime= 0 procesingTime= 1
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #12 from 10.241.3.35:6390
> 10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #12 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.147 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #12
> 10:01:07.148 [Thread-43] DEBUG org.apache.hadoop.ipc.RPC - Call: abandonBlock 2
> 10:01:07.148 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Excluding datanode 10.241.3.35:41943
> 10:01:07.148 [Thread-43] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #13
> 10:01:07.148 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #13
> 10:01:07.148 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220: has #13 from 10.241.3.35:6390
> 10:01:07.148 [IPC Server handler 0 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.149 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.149 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.149 [IPC Server handler 0 on 47220] WARN o.a.h.h.server.namenode.FSNamesystem - Not able to place enough replicas, still in need of 1 to reach 1
> Not able to place enough replicas
> 10:01:07.149 [IPC Server handler 0 on 47220] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> 10:01:07.150 [IPC Server handler 0 on 47220] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220, call addBlock(/user/mingtzha/hbase/hbase.version, DFSClient_NONMAPREDUCE_-1468295212_1, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@40819d8c) from 10.241.3.35:6390: error: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
> ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> ~[hadoop-core-1.2.1.jar:na]
> 10:01:07.152 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #13 from 10.241.3.35:6390
> 10:01:07.152 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #13 from 10.241.3.35:6390 Wrote 1070 bytes.
> 10:01:07.152 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #13
> 10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
>
> 10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - Error Recovery for blk_636047722725107470_1001 bad datanode[0] nodes == null
> 10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - Could not get block locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
> 10:01:07.154 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase, retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
> 10:01:07.154 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #14
> 10:01:07.157 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #14
> 10:01:07.157 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 47220: has #14 from 10.241.3.35:6390
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.158 [IPC Server handler 4 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.findLease: prefix=/user/mingtzha/hbase/hbase.version
> 10:01:07.159 [IPC Server handler 4 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.removeLeaseWithPrefixPath: entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder: DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
> 10:01:07.160 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.160 [IPC Server handler 4 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=null
> 10:01:07.161 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 1 procesingTime= 2
> 10:01:07.161 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #14 from 10.241.3.35:6390
> 10:01:07.161 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #14 from 10.241.3.35:6390 Wrote 18 bytes.
> 10:01:07.161 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #14
> 10:01:07.161 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 7
> 10:01:07.161 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 10:01:07.161 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> 10:01:07.161 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #15
> 10:01:07.161 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #15
> 10:01:07.162 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 5 on 47220: has #15 from 10.241.3.35:6390
> 10:01:07.162 [IPC Server handler 5 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.162 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1 at 10.241.3.35
> 10:01:07.163 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35, createParent=true, replication=1, overwrite=true, append=false
> 10:01:07.163 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.164 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* addFile: /user/mingtzha/hbase/hbase.version is added
> 10:01:07.164 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: add /user/mingtzha/hbase/hbase.version to namespace for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.165 [IPC Server handler 5 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=mingtzha:supergroup:rw-r--r--
> 10:01:07.165 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: create queueTime= 0 procesingTime= 3
> 10:01:07.165 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #15 from 10.241.3.35:6390
> 10:01:07.165 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #15 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.165 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #15
> 10:01:07.166 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: create 5
> 10:01:07.166 [main] DEBUG org.apache.hadoop.hbase.util.FSUtils - Created version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
> 10:01:07.166 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient writeChunk allocating new packet seqno=0, src=/user/mingtzha/hbase/hbase.version, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
> 10:01:07.166 [Thread-45] DEBUG org.apache.hadoop.hdfs.DFSClient - Allocating new block
> 10:01:07.166 [Thread-45] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #16
> 10:01:07.167 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #16
> 10:01:07.167 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 9 on 47220: has #16 from 10.241.3.35:6390
> 10:01:07.167 [IPC Server handler 9 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.167 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.167 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.168 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_3421159067449879113_1002 is added to the in-memory file system
> 10:01:07.168 [IPC Server handler 9 on 47220] INFO org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock: /user/mingtzha/hbase/hbase.version. blk_3421159067449879113_1002
> 10:01:07.168 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks is persisted
> 10:01:07.168 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 0 procesingTime= 1
> 10:01:07.169 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #16 from 10.241.3.35:6390
> 10:01:07.169 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #16 from 10.241.3.35:6390 Wrote 278 bytes.
> 10:01:07.169 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #16
> 10:01:07.170 [Thread-45] DEBUG org.apache.hadoop.ipc.RPC - Call: addBlock 4
> 10:01:07.170 [Thread-45] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
> 10:01:07.170 [Thread-45] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 10.241.3.35:41943
> 10:01:07.170 [Thread-45] INFO org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException: Connection refused
> 10:01:07.170 [Thread-45] INFO org.apache.hadoop.hdfs.DFSClient - Abandoning blk_3421159067449879113_1002
> 10:01:07.170 [Thread-45] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #17
> 10:01:07.170 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #17
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 6 on 47220: has #17 from 10.241.3.35:6390
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.abandonBlock: blk_3421159067449879113_1002 of /user/mingtzha/hbase/hbase.version
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_3421159067449879113_1002of /user/mingtzha/hbase/hbase.version
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_3421159067449879113_1002 is added to the
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_3421159067449879113_1002 is removed from pendingCreates
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks is persisted
> 10:01:07.171 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime= 0 procesingTime= 0
> 10:01:07.172 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #17 from 10.241.3.35:6390
> 10:01:07.172 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #17 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.172 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #17
> 10:01:07.172 [Thread-45] DEBUG org.apache.hadoop.ipc.RPC - Call: abandonBlock 2
> 10:01:07.172 [Thread-45] INFO org.apache.hadoop.hdfs.DFSClient - Excluding datanode 10.241.3.35:41943
> 10:01:07.172 [Thread-45] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #18
> 10:01:07.172 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #18
> 10:01:07.173 [IPC Server handler 7 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 7 on 47220: has #18 from 10.241.3.35:6390
> 10:01:07.173 [IPC Server handler 7 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.173 [IPC Server handler 7 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.173 [IPC Server handler 7 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.173 [IPC Server handler 7 on 47220] WARN o.a.h.h.server.namenode.FSNamesystem - Not able to place enough replicas, still in need of 1 to reach 1
> Not able to place enough replicas
> 10:01:07.173 [IPC Server handler 7 on 47220] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> 10:01:07.174 [IPC Server handler 7 on 47220] INFO org.apache.hadoop.ipc.Server - IPC Server handler 7 on 47220, call addBlock(/user/mingtzha/hbase/hbase.version, DFSClient_NONMAPREDUCE_-1468295212_1, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@4b9ac7f5) from 10.241.3.35:6390: error: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be
> replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
> ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> ~[hadoop-core-1.2.1.jar:na]
> 10:01:07.174 [IPC Server handler 7 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #18 from 10.241.3.35:6390
> 10:01:07.174 [IPC Server handler 7 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #18 from 10.241.3.35:6390 Wrote 1070 bytes.
> 10:01:07.174 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #18
> 10:01:07.175 [Thread-45] WARN org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
>
> 10:01:07.175 [Thread-45] WARN org.apache.hadoop.hdfs.DFSClient - Error Recovery for blk_3421159067449879113_1002 bad datanode[0] nodes == null
> 10:01:07.175 [Thread-45] WARN org.apache.hadoop.hdfs.DFSClient - Could not get block locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
> 10:01:07.177 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase, retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
> 10:01:07.177 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #19
> 10:01:07.177 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #19
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 8 on 47220: has #19 from 10.241.3.35:6390
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.178 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.179 [IPC Server handler 8 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.findLease: prefix=/user/mingtzha/hbase/hbase.version
> 10:01:07.179 [IPC Server handler 8 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.removeLeaseWithPrefixPath: entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder: DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
> 10:01:07.179 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.179 [IPC Server handler 8 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=null
> 10:01:07.180 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 0 procesingTime= 2
> 10:01:07.180 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #19 from 10.241.3.35:6390
> 10:01:07.180 [IPC Server handler 8 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #19 from 10.241.3.35:6390 Wrote 18 bytes.
> 10:01:07.180 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #19
> 10:01:07.180 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 3
> 10:01:07.180 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 10:01:07.180 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> 10:01:07.180 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #20
> 10:01:07.181 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #20
> 10:01:07.181 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 2 on 47220: has #20 from 10.241.3.35:6390
> 10:01:07.181 [IPC Server handler 2 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.181 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1 at 10.241.3.35
> 10:01:07.201 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35, createParent=true, replication=1, overwrite=true, append=false
> 10:01:07.201 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.202 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* addFile: /user/mingtzha/hbase/hbase.version is added
> 10:01:07.202 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: add /user/mingtzha/hbase/hbase.version to namespace for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.203 [IPC Server handler 2 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=mingtzha:supergroup:rw-r--r--
> 10:01:07.203 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: create queueTime= 0 procesingTime= 22
> 10:01:07.204 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #20 from 10.241.3.35:6390
> 10:01:07.206 [IPC Server handler 2 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #20 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.206 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #20
> 10:01:07.206 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: create 26
> 10:01:07.207 [main] DEBUG org.apache.hadoop.hbase.util.FSUtils - Created version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
> 10:01:07.207 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient writeChunk allocating new packet seqno=0, src=/user/mingtzha/hbase/hbase.version, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
> 10:01:07.212 [Thread-46] DEBUG org.apache.hadoop.hdfs.DFSClient - Allocating new block
> 10:01:07.213 [Thread-46] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #21
> 10:01:07.213 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #21
> 10:01:07.213 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 47220:
> has #21 from 10.241.3.35:6390
> 10:01:07.213 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.213 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *BLOCK*
> NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
> DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.213 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK*
> getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
> DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.214 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.addFile:
> /user/mingtzha/hbase/hbase.version with blk_-3603230159394873750_1003 is
> added to the in-memory file system
> 10:01:07.214 [IPC Server handler 1 on 47220] [34mINFO [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK* allocateBlock:
> /user/mingtzha/hbase/hbase.version. blk_-3603230159394873750_1003
> 10:01:07.214 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
> FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks
> is persisted
> 10:01:07.214 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: addBlock queueTime= 0
> procesingTime= 1
> 10:01:07.214 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #21 from 10.241.3.35:6390
> 10:01:07.214 [IPC Server handler 1 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #21 from 10.241.3.35:6390 Wrote 278 bytes.
> 10:01:07.214 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #21
> 10:01:07.214 [Thread-46] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
> [0;39m - Call: addBlock 2
> 10:01:07.214 [Thread-46] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
> 10:01:07.215 [Thread-46] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 10.241.3.35:41943
> 10:01:07.215 [Thread-46] INFO org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException: Connection refused
> 10:01:07.215 [Thread-46] INFO org.apache.hadoop.hdfs.DFSClient - Abandoning blk_-3603230159394873750_1003
> 10:01:07.215 [Thread-46] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #22
> 10:01:07.215 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #22
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 47220: has #22 from 10.241.3.35:6390
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.abandonBlock: blk_-3603230159394873750_1003 of /user/mingtzha/hbase/hbase.version
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_-3603230159394873750_1003of /user/mingtzha/hbase/hbase.version
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_-3603230159394873750_1003 is added to the
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_-3603230159394873750_1003 is removed from pendingCreates
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks is persisted
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime= 0 procesingTime= 0
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #22 from 10.241.3.35:6390
> 10:01:07.216 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #22 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.216 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #22
> 10:01:07.217 [Thread-46] DEBUG org.apache.hadoop.ipc.RPC - Call: abandonBlock 2
> 10:01:07.217 [Thread-46] INFO org.apache.hadoop.hdfs.DFSClient - Excluding datanode 10.241.3.35:41943
> 10:01:07.217 [Thread-46] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #23
> 10:01:07.217 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #23
> 10:01:07.217 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220: has #23 from 10.241.3.35:6390
> 10:01:07.217 [IPC Server handler 0 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.217 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.217 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.218 [IPC Server handler 0 on 47220] WARN o.a.h.h.server.namenode.FSNamesystem - Not able to place enough replicas, still in need of 1 to reach 1
> Not able to place enough replicas
> 10:01:07.218 [IPC Server handler 0 on 47220] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> 10:01:07.218 [IPC Server handler 0 on 47220] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220, call addBlock(/user/mingtzha/hbase/hbase.version, DFSClient_NONMAPREDUCE_-1468295212_1, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@18c3539e) from 10.241.3.35:6390: error: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783) ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
> 10:01:07.219 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #23 from 10.241.3.35:6390
> 10:01:07.219 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #23 from 10.241.3.35:6390 Wrote 1070 bytes.
> 10:01:07.219 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #23
> 10:01:07.219 [Thread-46] WARN org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
>
> 10:01:07.219 [Thread-46] WARN org.apache.hadoop.hdfs.DFSClient - Error Recovery for blk_-3603230159394873750_1003 bad datanode[0] nodes == null
> 10:01:07.219 [Thread-46] WARN org.apache.hadoop.hdfs.DFSClient - Could not get block locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
> 10:01:07.219 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase, retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
> 10:01:07.220 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #24
> 10:01:07.220 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #24
> 10:01:07.220 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 47220: has #24 from 10.241.3.35:6390
> 10:01:07.220 [IPC Server handler 4 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.220 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
> 10:01:07.220 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.220 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.findLease: prefix=/user/mingtzha/hbase/hbase.version
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.removeLeaseWithPrefixPath: entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder: DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version is removed
> 10:01:07.221 [IPC Server handler 4 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=null
> 10:01:07.221 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 0 procesingTime= 1
> 10:01:07.222 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #24 from 10.241.3.35:6390
> 10:01:07.222 [IPC Server handler 4 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #24 from 10.241.3.35:6390 Wrote 18 bytes.
> 10:01:07.222 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #24
> 10:01:07.222 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 2
> 10:01:07.222 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 10:01:07.222 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> 10:01:07.222 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #25
> 10:01:07.222 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #25
> 10:01:07.222 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 5 on 47220: has #25 from 10.241.3.35:6390
> 10:01:07.222 [IPC Server handler 5 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.223 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1 at 10.241.3.35
> 10:01:07.223 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35, createParent=true, replication=1, overwrite=true, append=false
> 10:01:07.223 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 10:01:07.223 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* addFile: /user/mingtzha/hbase/hbase.version is added
> 10:01:07.223 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: add /user/mingtzha/hbase/hbase.version to namespace for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.224 [IPC Server handler 5 on 47220] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version    dst=null    perm=mingtzha:supergroup:rw-r--r--
> 10:01:07.224 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: create queueTime= 1 procesingTime= 1
> 10:01:07.224 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #25 from 10.241.3.35:6390
> 10:01:07.224 [IPC Server handler 5 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #25 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.224 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #25
> 10:01:07.224 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: create 2
> 10:01:07.225 [main] DEBUG org.apache.hadoop.hbase.util.FSUtils - Created version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
> 10:01:07.225 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient writeChunk allocating new packet seqno=0, src=/user/mingtzha/hbase/hbase.version, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
> 10:01:07.225 [Thread-47] DEBUG org.apache.hadoop.hdfs.DFSClient - Allocating new block
> 10:01:07.225 [Thread-47] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #26
> 10:01:07.225 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #26
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 9 on 47220: has #26 from 10.241.3.35:6390
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_-686922257401561708_1004 is added to the in-memory file system
> 10:01:07.226 [IPC Server handler 9 on 47220] INFO org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock: /user/mingtzha/hbase/hbase.version. blk_-686922257401561708_1004
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks is persisted
> 10:01:07.226 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 1 procesingTime= 0
> 10:01:07.227 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #26 from 10.241.3.35:6390
> 10:01:07.227 [IPC Server handler 9 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #26 from 10.241.3.35:6390 Wrote 278 bytes.
> 10:01:07.227 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #26
> 10:01:07.227 [Thread-47] DEBUG org.apache.hadoop.ipc.RPC - Call: addBlock 2
> 10:01:07.227 [Thread-47] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
> 10:01:07.227 [Thread-47] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 10.241.3.35:41943
> 10:01:07.227 [Thread-47] INFO org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException: Connection refused
> 10:01:07.227 [Thread-47] INFO org.apache.hadoop.hdfs.DFSClient - Abandoning blk_-686922257401561708_1004
> 10:01:07.228 [Thread-47] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #27
> 10:01:07.228 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #27
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 6 on 47220: has #27 from 10.241.3.35:6390
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.abandonBlock: blk_-686922257401561708_1004 of /user/mingtzha/hbase/hbase.version
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_-686922257401561708_1004of /user/mingtzha/hbase/hbase.version
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_-686922257401561708_1004 is added to the
> 10:01:07.228 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_-686922257401561708_1004 is removed from pendingCreates
> 10:01:07.229 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks is persisted
> 10:01:07.229 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime= 0 procesingTime= 1
> 10:01:07.229 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #27 from 10.241.3.35:6390
> 10:01:07.229 [IPC Server handler 6 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #27 from 10.241.3.35:6390 Wrote 95 bytes.
> 10:01:07.229 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #27
> 10:01:07.229 [Thread-47] DEBUG org.apache.hadoop.ipc.RPC - Call: abandonBlock 1
> 10:01:07.229 [Thread-47] [34mINFO [0;39m
> [1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Excluding datanode
> 10.241.3.35:41943
> 10:01:07.229 [Thread-47] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #28
> 10:01:07.229 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #28
> 10:01:07.231 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on 47220:
> has #28 from 10.241.3.35:6390
> 10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *BLOCK*
> NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
> DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK*
> getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
> DFSClient_NONMAPREDUCE_-1468295212_1
> 10:01:07.232 [IPC Server handler 7 on 47220] [31mWARN [0;39m
> [1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Not able to place
> enough replicas, still in need of 1 to reach 1
> Not able to place enough replicas
> 10:01:07.233 [IPC Server handler 7 on 47220] [1;31mERROR [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m -
> PriviledgedActionException as:mingtzha cause:java.io.IOException: File
> /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
> instead of 1
> 10:01:07.233 [IPC Server handler 7 on 47220] [34mINFO [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on 47220,
> call addBlock(/user/mingtzha/hbase/hbase.version,
> DFSClient_NONMAPREDUCE_-1468295212_1,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@31c7141c) from
> 10.241.3.35:6390: error: java.io.IOException: File
> /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
> instead of 1
> java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be
> replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
> ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> ~[hadoop-core-1.2.1.jar:na]
> 10:01:07.233 [IPC Server handler 7 on 47220] DEBUG
> org.apache.hadoop.ipc.Server - IPC Server Responder:
> responding to #28 from 10.241.3.35:6390
> 10:01:07.233 [IPC Server handler 7 on 47220] DEBUG
> org.apache.hadoop.ipc.Server - IPC Server Responder:
> responding to #28 from 10.241.3.35:6390 Wrote 1070 bytes.
> 10:01:07.233 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:47220 from mingtzha] DEBUG
> org.apache.hadoop.ipc.Client - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #28
> 10:01:07.234 [Thread-47] WARN
> org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
> instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
>
> 10:01:07.234 [Thread-47] WARN
> org.apache.hadoop.hdfs.DFSClient - Error Recovery for
> blk_-686922257401561708_1004 bad datanode[0] nodes == null
> 10:01:07.234 [Thread-47] WARN
> org.apache.hadoop.hdfs.DFSClient - Could not get block
> locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
> 10:01:07.234 [main] INFO test -  > Finished
> HBaseTestSample.setup
> 10:01:07.242 [main] INFO test -  > Started
> HBaseTestSample.testInsert
> 10:01:07.242 [main] INFO test -  > Finished
> HBaseTestSample.testInsert
> FAILED CONFIGURATION: @BeforeMethod setup
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
> instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
>
> SKIPPED CONFIGURATION: @AfterMethod destroy
> SKIPPED: testInsert
>
> ===============================================
>     Default test
>     Tests run: 1, Failures: 0, Skips: 1
>     Configuration Failures: 1, Skips: 1
> ===============================================
>
> 10:01:07.264 [main] INFO test - Finished Suite
> [Default suite]
>
> ===============================================
> Default suite
> Total tests run: 1, Failures: 0, Skips: 1
> Configuration Failures: 1, Skips: 1
> ===============================================
>
> [TestNG] Time taken by org.testng.reporters.XMLReporter@6e6be2c6: 5 ms
> [TestNG] Time taken by org.testng.reporters.EmailableReporter2@49ab9d75:
> 5 ms
> [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 8 ms
> [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@686fe5d3: 7
> ms
> [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@31b4f6aa:
> 5 ms
> [TestNG] Time taken by org.testng.reporters.jq.Main@5c6ed020: 25 ms
> 10:01:07.322 [Thread-0] DEBUG
> org.apache.hadoop.fs.FileSystem - Starting clear of FileSystem
> cache with 2 elements.
> 10:01:07.330 [Thread-0] DEBUG
> org.apache.hadoop.fs.FileSystem - Removing filesystem for
> file:///
> 10:01:07.330 [Thread-0] DEBUG
> org.apache.hadoop.fs.FileSystem - Removing filesystem for
> file:///
> 10:01:07.331 [Thread-0] ERROR
> org.apache.hadoop.hdfs.DFSClient - Failed to close file
> /user/mingtzha/hbase/hbase.version
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
> instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> ~[hadoop-core-1.2.1.jar:na]
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source) ~[na:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> ~[hadoop-core-1.2.1.jar:na]
>     at com.sun.proxy.$Proxy10.addBlock(Unknown Source) ~[na:na]
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
> ~[hadoop-core-1.2.1.jar:na]
> 10:01:07.332 [Thread-0] DEBUG
> org.apache.hadoop.fs.FileSystem - Removing filesystem for
> hdfs://slc05muw.us.**.com:47220
> 10:01:07.332 [Thread-0] DEBUG
> org.apache.hadoop.fs.FileSystem - Done clearing cache
>
>
> Mingtao
>
>
> On Sat, Aug 2, 2014 at 9:57 AM, Matteo Bertozzi <th...@gmail.com>
> wrote:
>
>> The error says: "failed to create file /user/mingtzha/hbase/hbase.version
>> on client 10.241.3.35. Requested replication 0 is less than the required
>> minimum 1"
>>
>> The 0 comes from here: utility.startMiniCluster(0);
>>
>> Matteo
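
[Editor's note on the fix Matteo points at: startMiniCluster's int argument is the number of slave (DataNode/RegionServer) processes, so passing 0 starts HDFS with no DataNode and nothing can hold even one replica of hbase.version. A minimal sketch of the corrected TestNG setup, assuming the poster's HBaseTestSample class and @BeforeMethod/@AfterMethod wiring from the thread:]

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class HBaseTestSample {

    private final HBaseTestingUtility utility = new HBaseTestingUtility();

    @BeforeMethod
    public void setup() throws Exception {
        // startMiniCluster(numSlaves): numSlaves is the number of
        // DataNode/RegionServer pairs to start. With 0 there is no
        // DataNode, so HDFS cannot place a single replica and
        // writing hbase.version fails as in the stack trace above.
        utility.startMiniCluster(1); // at least 1; startMiniCluster() also defaults to 1
    }

    @AfterMethod
    public void destroy() throws Exception {
        utility.shutdownMiniCluster();
    }
}
```
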
>>
>>
>>
>> On Sat, Aug 2, 2014 at 5:51 PM, Mingtao Zhang <ma...@gmail.com>
>> wrote:
>>
>> > HI,
>> >
>> > I am really stuck with this. Putting the stack trace, java file,
>> hbase-site
>> > file and pom file here.
>> >
>> > I have 0 knowledge about hadoop and expecting it's transparent for my
>> > integration test :(.
>> >
>> > Thanks in advance!
>> >
>> > Best Regards,
>> > Mingtao
>> >
>> > [0;39m - Short circuit read is false
>> > 09:42:33.345 [main] [39mDEBUG [0;39m
>> [1;35morg.apache.hadoop.hdfs.DFSClient
>> > [0;39m - Connect to datanode via hostname is false
>> > 09:42:33.345 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
>> > [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
>> > 10.241.3.35:41118
>> > from mingtzha sending #2
>> > 09:42:33.345 [pool-1-thread-1] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #2
>> > 09:42:33.345 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 3 on
>> 41118:
>> > has #2 from 10.241.3.35:24701
>> > 09:42:33.345 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
>> > as:mingtzha
>> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> > 09:42:33.356 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.security.Groups [0;39m - Returning fetched
>> groups
>> > for 'mingtzha'
>> > 09:42:33.356 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: getDatanodeReport
>> > queueTime= 0 procesingTime= 11
>> > 09:42:33.357 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
>> > responding to #2 from 10.241.3.35:24701
>> > 09:42:33.357 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
>> > responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
>> > 09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/
>> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection
>> to
>> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
>> > 09:42:33.357 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
>> [0;39m
>> > - Call: getDatanodeReport 12
>> > Cluster is active
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52
>> GMT
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
>> > host.name=slc05muw.us.**.com
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:java.version=1.7.0_45
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:java.vendor=** Corporation
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
>> > 09:42:33.376 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> >
>> >
>> environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzh
a/.m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
>> >
>> >
>> 1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtwork
s/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpc
lient/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activa
tion-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/
repository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
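[Editor's note: the classpath above mixes hbase-0.94.15.jar with hbase-0.94.21-tests.jar, and HBaseTestingUtility lives in the tests jar. Keeping the main and tests artifacts on the same version avoids mixing mini-cluster code with mismatched runtime classes. An illustrative Maven fragment (the 0.94.21 version is chosen only to match the tests jar already on the classpath):]

```xml
<!-- Illustrative pom.xml fragment: pin the hbase main artifact and the
     tests-classifier artifact (which provides HBaseTestingUtility) to the
     same version, instead of 0.94.15 + 0.94.21-tests as seen above. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.21</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.21</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>
```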
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> >
>> >
>> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:java.io.tmpdir=/tmp
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:java.compiler=<NA>
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
>> > os.name=Linux
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:os.arch=amd64
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:os.version=2.6.39-300.20.1.el6uek.x86_64
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
>> > user.name=mingtzha
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> > environment:user.home=/home/mingtzha
>> > 09:42:33.377 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>> >
>> >
>> environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
>> > 09:42:33.380 [main] [39mDEBUG [0;39m
>> > [1;35mo.a.z.s.persistence.FileTxnSnapLog [0;39m - Opening
>> >
>> >
>> datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
>> >
>> >
>> snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
>> > 09:42:33.394 [main] [34mINFO [0;39m
>> > [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Created server with
>> > tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir
>> >
>> >
>> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
>> > snapdir
>> >
>> >
>> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
>> > 09:42:33.400 [main] [34mINFO [0;39m
>> [1;35mo.a.z.server.NIOServerCnxnFactory
>> > [0;39m - binding to port 0.0.0.0/0.0.0.0:51126
>> > 09:42:33.405 [main] [34mINFO [0;39m
>> > [1;35mo.a.z.s.persistence.FileTxnSnapLog [0;39m - Snapshotting: 0x0 to
>> >
>> >
>> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
>> > 09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.z.server.NIOServerCnxnFactory - Accepted socket connection from /10.241.3.35:44625
>> > 09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.zookeeper.server.NIOServerCnxn - Processing stat command from /10.241.3.35:44625
>> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Stat command output
>> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /10.241.3.35:44625 (no session established for client)
>> > 09:42:33.442 [main] INFO o.a.h.h.z.MiniZooKeeperCluster - Started MiniZK Cluster and connect 1 ZK server on client port: 51126
>> > 09:42:33.443 [main] [39mDEBUG [0;39m
>> [1;35morg.apache.hadoop.hdfs.DFSClient
>> > [0;39m - /user/mingtzha/hbase: masked=rwxr-xr-x
>> > 09:42:33.443 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
>> > [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
>> > 10.241.3.35:41118
>> > from mingtzha sending #3
>> > 09:42:33.444 [pool-1-thread-1] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #3
>> > 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 4 on
>> 41118:
>> > has #3 from 10.241.3.35:24701
>> > 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
>> > as:mingtzha
>> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> > 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.mkdirs:
>> > /user/mingtzha/hbase
>> > 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* mkdirs:
>> > /user/mingtzha/hbase
>> > 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
>> > for 'mingtzha'
>> > 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
>> FSDirectory.mkdirs:
>> > created directory /user
>> > 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
>> FSDirectory.mkdirs:
>> > created directory /user/mingtzha
>> > 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
>> FSDirectory.mkdirs:
>> > created directory /user/mingtzha/hbase
>> > 09:42:33.448 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Preallocated 1048576
>> > bytes at the end of the edit log (offset 4)
>> > 09:42:33.452 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Preallocated 1048576
>> > bytes at the end of the edit log (offset 4)
>> > 09:42:33.455 [IPC Server handler 4 on 41118] [34mINFO [0;39m
>> > [1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
>> > 10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null
>> > perm=mingtzha:supergroup:rwxr-xr-x
>> > 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: mkdirs queueTime= 0
>> > procesingTime= 10
>> > 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
>> > responding to #3 from 10.241.3.35:24701
>> > 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
>> > responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
>> > 09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/
>> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection
>> to
>> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
>> > 09:42:33.455 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
>> [0;39m
>> > - Call: mkdirs 12
>> > 09:42:33.461 [main] [39mDEBUG [0;39m
>> [1;35morg.apache.hadoop.hdfs.DFSClient
>> > [0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
>> > 09:42:33.468 [main] [39mDEBUG [0;39m
>> [1;35morg.apache.hadoop.hdfs.DFSClient
>> > [0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
>> > chunkSize=516, chunksPerPacket=127, packetSize=65557
>> > 09:42:33.469 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
>> > [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
>> > 10.241.3.35:41118
>> > from mingtzha sending #4
>> > 09:42:33.469 [pool-1-thread-1] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #4
>> > 09:42:33.470 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 2 on
>> 41118:
>> > has #4 from 10.241.3.35:24701
>> > 09:42:33.470 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
>> > [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
>> > as:mingtzha
>> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> > 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
>> > /user/mingtzha/hbase/hbase.version for
>> DFSClient_NONMAPREDUCE_-237185081_1
>> > at 10.241.3.35
>> > 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
>> > src=/user/mingtzha/hbase/hbase.version,
>> > holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
>> > createParent=true, replication=0, overwrite=true, append=false
>> > 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
>> > [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
>> > for 'mingtzha'
>> > 09:42:33.479 [IPC Server handler 2 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > 09:42:33.480 [IPC Server handler 2 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > 09:42:33.480 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
>> >     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
>> >     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
>> >     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
>> > 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701
>> > 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
>> > 09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
>> > 09:42:33.482 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:606)
>> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>> >     at java.security.AccessController.doPrivileged(Native Method)
>> >     at javax.security.auth.Subject.doAs(Subject.java:415)
>> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> >
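[Editor's note: the root failure in the trace above is "Requested replication 0 is less than the required minimum 1" — the mini cluster's NameNode rejects the creation of hbase.version because the effective dfs.replication resolves to 0. That usually means a configuration file on the test classpath (for example an hbase-site.xml or hdfs-site.xml) overrides the property. A minimal sketch of the override to check for; the property names are the standard Hadoop ones, the file itself is illustrative:]

```xml
<!-- Illustrative hbase-site.xml fragment for the test classpath.
     dfs.replication must be at least 1 (the NameNode's required minimum);
     a value of 0 reproduces the "Requested replication 0" failure above. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

[Equivalently, the value can be forced in code before startMiniCluster(), e.g. HBaseTestingUtility#getConfiguration().setInt("dfs.replication", 1), so a stray site file cannot zero it out.]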
>> > 09:42:33.483 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #5
>> > 09:42:33.483 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #5
>> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 5 on 41118: has #5 from 10.241.3.35:24701
>> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
>> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: failed to remove /user/mingtzha/hbase/hbase.version because it does not exist
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 0 procesingTime= 1
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #5 from 10.241.3.35:24701
>> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #5 from 10.241.3.35:24701 Wrote 18 bytes.
>> > 09:42:33.484 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #5
>> > 09:42:33.484 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 1
>> > 09:42:33.484 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
>> > 09:42:33.484 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
>> > 09:42:33.485 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #6
>> > 09:42:33.485 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #6
>> > 09:42:33.485 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 6 on 41118: has #6 from 10.241.3.35:24701
>> > 09:42:33.485 [IPC Server handler 6 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> > 09:42:33.485 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
>> > 09:42:33.486 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
>> > 09:42:33.486 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
>> > 09:42:33.486 [IPC Server handler 6 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > 09:42:33.486 [IPC Server handler 6 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > 09:42:33.486 [IPC Server handler 6 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 6 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> > java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
>> >     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
>> >     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
>> >     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
>> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
>> > 09:42:33.487 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #6 from 10.241.3.35:24701
>> > 09:42:33.487 [IPC Server handler 6 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #6 from 10.241.3.35:24701 Wrote 1285 bytes.
>> > 09:42:33.487 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #6
>> > 09:42:33.487 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>> >     [stack trace identical to the one above]
>> >
>> > [09:42:33.487-09:42:33.497: the same delete/create cycle repeats verbatim for IPC calls #7/#8 and #9/#10. Each NameNode.create of /user/mingtzha/hbase/hbase.version is rejected with "Requested replication 0 is less than the required minimum 1", and FSUtils logs "Unable to create version file ... retrying" with the identical stack trace each time.]
>> > 09:42:33.497 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
>> > HBaseTestSample.setup
>> > 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Started
>> > HBaseTestSample.testInsert
>> > 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
>> > HBaseTestSample.testInsert
>> > FAILED CONFIGURATION: @BeforeMethod setup
>> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to
>> > create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
>> > Requested replication 0 is less than the required minimum 1
>> >     at
>> >
>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>> >     at
>> >
>> >
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>> >     at
>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>> >     at
>> >
>> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at
>> >
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at
>> >
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:606)
>> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>> >     at java.security.AccessController.doPrivileged(Native Method)
>> >     at javax.security.auth.Subject.doAs(Subject.java:415)
>> >     at
>> >
>> >
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>> >
>> >     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>> >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at
>> >
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at
>> >
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:606)
>> >     at
>> >
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> >     at
>> >
>> >
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
>> >     at
>> >
>> >
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
>> >     at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
>> >     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
>> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
>> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
>> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
>> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
>> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
>> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
>> >     at org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
>> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
>> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
>> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
>> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
>> >     at com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:606)
>> >     at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
>> >     at org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
>> >     at org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:606)
>> >     at org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
>> >     at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
>> >     at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
>> >     at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
>> >     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
>> >     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
>> >     at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
>> >     at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
>> >     at org.testng.TestRunner.privateRun(TestRunner.java:767)
>> >     at org.testng.TestRunner.run(TestRunner.java:617)
>> >     at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
>> >     at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
>> >     at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
>> >     at org.testng.SuiteRunner.run(SuiteRunner.java:240)
>> >     at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
>> >     at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
>> >     at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
>> >     at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
>> >     at org.testng.TestNG.run(TestNG.java:1057)
>> >     at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
>> >     at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
>> >     at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
>> >
>> > SKIPPED CONFIGURATION: @AfterMethod destroy
>> > SKIPPED: testInsert
>> >
>> > ===============================================
>> >     Default test
>> >     Tests run: 1, Failures: 0, Skips: 1
>> >     Configuration Failures: 1, Skips: 1
>> > ===============================================
>> >
>> > 09:42:33.535 [main] INFO test - Finished Suite [Default suite]
>> >
>> > ===============================================
>> > Default suite
>> > Total tests run: 1, Failures: 0, Skips: 1
>> > Configuration Failures: 1, Skips: 1
>> > ===============================================
>> >
>> > [TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
>> > [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
>> > [TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
>> > [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429: 4 ms
>> > [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa: 8 ms
>> > [TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706: 3 ms
>> > 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Starting clear of FileSystem cache with 1 elements.
>> > 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.ipc.Client - Stopping client
>> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
>> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining connections 0
>> > 09:42:33.589 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - IPC Server listener on 41118: disconnecting client 10.241.3.35:24701. Number of active connections: 1
>> > 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Removing filesystem for hdfs://slc05muw.us.**.com:41118
>> > 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Done clearing cache
>> >
>> > The java code:
>> >
>> > import java.io.BufferedReader;
>> > import java.io.InputStreamReader;
>> >
>> > import org.apache.hadoop.conf.Configuration;
>> > import org.apache.hadoop.hbase.HBaseConfiguration;
>> > import org.apache.hadoop.hbase.HBaseTestingUtility;
>> > import org.apache.hadoop.hbase.client.Delete;
>> > import org.apache.hadoop.hbase.client.Get;
>> > import org.apache.hadoop.hbase.client.HTable;
>> > import org.apache.hadoop.hbase.client.Put;
>> > import org.apache.hadoop.hbase.client.Result;
>> > import org.apache.hadoop.hbase.util.Bytes;
>> > import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
>> > import org.testng.annotations.AfterMethod;
>> > import org.testng.annotations.BeforeMethod;
>> > import org.testng.annotations.Test;
>> >
>> > public class HBaseTestSample {
>> >
>> >     private static HBaseTestingUtility utility;
>> >     byte[] CF = "CF".getBytes();
>> >     byte[] QUALIFIER = "CQ-1".getBytes();
>> >
>> >     @BeforeMethod
>> >     public void setup() throws Exception {
>> >         Configuration hbaseConf = HBaseConfiguration.create();
>> >
>> >         utility = new HBaseTestingUtility(hbaseConf);
>> >
>> >         Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
>> >         BufferedReader br = new BufferedReader(new InputStreamReader(
>> >                 process.getInputStream()));
>> >         int rc = process.waitFor();
>> >         if (rc == 0) {
>> >             String umask = br.readLine();
>> >
>> >             int umaskBits = Integer.parseInt(umask, 8);
>> >             int permBits = 0777 & ~umaskBits;
>> >             String perms = Integer.toString(permBits, 8);
>> >
>> >             utility.getConfiguration().set("dfs.datanode.data.dir.perm",
>> > perms);
>> >         }
>> >
>> >         // startMiniCluster(numSlaves): numSlaves is used as both the
>> >         // datanode and regionserver count, so 0 leaves HDFS with no
>> >         // datanode to write hbase.version to -- start at least one
>> >         utility.startMiniCluster(1);
>> >
>> >     }
>> >
>> >     @Test
>> >     public void testInsert() throws Exception {
>> >         // createTable(byte[] tableName, byte[] family): this creates a
>> >         // table named "CF" whose only column family is "CQ-1"
>> >         HTable table = utility.createTable(CF, QUALIFIER);
>> >
>> >         System.out.println("created table CF");
>> >
>> >         // Put.add(byte[] family, byte[] qualifier, byte[] value) -- the
>> >         // family must match the one the table was created with
>> >         table.put(new Put("r".getBytes()).add(QUALIFIER, "c1".getBytes(),
>> >                 "v".getBytes()));
>> >         Result result = table.get(new Get("r".getBytes()));
>> >
>> >         System.out.println(result.list().size());
>> >
>> >         table.delete(new Delete("r".getBytes()));
>> >
>> >         System.out.println("clean up");
>> >     }
>> >
>> >     @AfterMethod
>> >     public void destroy() throws Exception {
>> >         // shut the mini cluster down before deleting its directories
>> >         utility.shutdownMiniCluster();
>> >         utility.cleanupTestDir();
>> >     }
>> > }
>> >
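As an aside, the umask arithmetic in setup() above can be checked in isolation. The sketch below (the class name `UmaskPerms` is made up for illustration; only the arithmetic comes from the code above) shows what that block computes for a given shell umask:

```java
public class UmaskPerms {
    // Mirrors the computation in setup(): parse the shell's octal umask,
    // clear those bits out of 0777, and render the result back as octal.
    public static String toPerms(String umask) {
        int umaskBits = Integer.parseInt(umask.trim(), 8); // e.g. "0022" -> 18
        int permBits = 0777 & ~umaskBits;                  // drop masked bits
        return Integer.toString(permBits, 8);              // e.g. 493 -> "755"
    }

    public static void main(String[] args) {
        System.out.println(toPerms("0022")); // prints "755"
        System.out.println(toPerms("0077")); // prints "700"
    }
}
```

So with the common umask 0022 the test would set dfs.datanode.data.dir.perm to "755".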
>> > hbase-site.xml:
>> >
>> > <?xml version="1.0"?>
>> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>> > <configuration>
>> >     <property>
>> >         <name>hbase.rootdir</name>
>> >         <value>file:///scratch/mingtzha/hbase/test</value>
>> >     </property>
>> >     <property>
>> >         <name>hbase.tmp.dir</name>
>> >         <value>/tmp/hbase</value>
>> >     </property>
>> >
>> >     <property>
>> >         <name>hbase.zookeeper.quorum</name>
>> >         <value>localhost</value>
>> >     </property>
>> >     <property>
>> >         <name>hbase.cluster.distributed</name>
>> >         <value>true</value>
>> >     </property>
>> >     <property>
>> >         <name>hbase.ipc.warn.response.time</name>
>> >         <value>1</value>
>> >     </property>
>> >
>> >     <!-- http://hbase.apache.org/book/ops.monitoring.html -->
>> >     <!-- -1 => Disable logging by size -->
>> >     <!-- <property> -->
>> >     <!-- <name>hbase.ipc.warn.response.size</name> -->
>> >     <!-- <value>-1</value> -->
>> >     <!-- </property> -->
>> > </configuration>
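[Editor's note, an observation worth checking rather than something from the thread: since setup() passes HBaseConfiguration.create() into HBaseTestingUtility, any hbase-site.xml on the test classpath is picked up, and `hbase.cluster.distributed=true` plus a fixed `hbase.rootdir`/`hbase.zookeeper.quorum` will fight the mini cluster's own dynamically chosen settings. A test-scope hbase-site.xml that stays out of the mini cluster's way might contain only something like:]

```
<?xml version="1.0"?>
<configuration>
    <!-- let HBaseTestingUtility pick rootdir, ZK port, etc. itself -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
</configuration>
```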
>> >
>> > pom.xml
>> >
>> > <?xml version="1.0" encoding="UTF-8"?>
>> > <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
>> > http://www.w3.org/2001/XMLSchema-instance"
>> >     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>> > http://maven.apache.org/xsd/maven-4.0.0.xsd">
>> >     <modelVersion>4.0.0</modelVersion>
>> >     <parent>
>> >         <groupId>com.**.sites.analytics.tests</groupId>
>> >         <artifactId>integration-test</artifactId>
>> >         <version>1.0-SNAPSHOT</version>
>> >     </parent>
>> >
>> >     <artifactId>repository-itest</artifactId>
>> >     <name>repository-itest</name>
>> >
>> >     <dependencies>
>> >         <dependency>
>> >             <groupId>com.**.sites.analytics</groupId>
>> >             <artifactId>test-integ</artifactId>
>> >             <version>${project.version}</version>
>> >             <scope>test</scope>
>> >         </dependency>
>> >         <dependency>
>> >             <groupId>com.**.sites.analytics.tests</groupId>
>> >             <artifactId>itest-core</artifactId>
>> >             <version>${project.version}</version>
>> >         </dependency>
>> >         <dependency>
>> >             <groupId>com.**.sites.analytics</groupId>
>> >             <artifactId>config-dev</artifactId>
>> >             <version>${project.version}</version>
>> >             <scope>test</scope>
>> >         </dependency>
>> >         <dependency>
>> >             <groupId>com.**.sites.analytics</groupId>
>> >             <artifactId>repository-core</artifactId>
>> >             <version>${project.version}</version>
>> >         </dependency>
>> >
>> >         <dependency>
>> >             <groupId>com.**.sites.analytics</groupId>
>> >             <artifactId>repository-hbase</artifactId>
>> >             <version>${project.version}</version>
>> >         </dependency>
>> >
>> >         <dependency>
>> >             <groupId>org.apache.hbase</groupId>
>> >             <artifactId>hbase</artifactId>
>> >             <version>0.94.21</version>
>> >             <classifier>tests</classifier>
>> >             <exclusions>
>> >                 <exclusion>
>> >                     <artifactId>slf4j-log4j12</artifactId>
>> >                     <groupId>org.slf4j</groupId>
>> >                 </exclusion>
>> >             </exclusions>
>> >         </dependency>
>> >         <dependency>
>> >             <groupId>org.apache.hadoop</groupId>
>> >             <artifactId>hadoop-core</artifactId>
>> >             <version>1.2.1</version>
>> >             <exclusions>
>> >                 <exclusion>
>> >                     <artifactId>slf4j-log4j12</artifactId>
>> >                     <groupId>org.slf4j</groupId>
>> >                 </exclusion>
>> >             </exclusions>
>> >         </dependency>
>> >         <dependency>
>> >             <groupId>org.apache.hadoop</groupId>
>> >             <artifactId>hadoop-test</artifactId>
>> >             <version>1.2.1</version>
>> >             <exclusions>
>> >                 <exclusion>
>> >                     <artifactId>slf4j-log4j12</artifactId>
>> >                     <groupId>org.slf4j</groupId>
>> >                 </exclusion>
>> >             </exclusions>
>> >         </dependency>
>> >     </dependencies>
>> > </project>
>> >
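[Editor's note: one Maven detail worth verifying against this POM — a dependency declared only with `<classifier>tests</classifier>` does not pull in the main artifact, so the test-jar of HBase is on the classpath here but possibly not HBase itself. If the main jar is not coming in transitively through `repository-hbase`, the POM would also need:]

```
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.94.21</version>
</dependency>
```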
>>
>
>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Thank you :)!

Tried it ... still getting a similar stack trace (below).

Is there any way to stop the 'hbase.version' file from being created?

10:01:07.142 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_636047722725107470_1001 is added to the in-memory file system
10:01:07.142 [IPC Server handler 1 on 47220] INFO org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock: /user/mingtzha/hbase/hbase.version. blk_636047722725107470_1001
10:01:07.143 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks is persisted
10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 1 procesingTime= 2
10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #11 from 10.241.3.35:6390
10:01:07.144 [IPC Server handler 1 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #11 from 10.241.3.35:6390 Wrote 278 bytes.
10:01:07.144 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #11
10:01:07.144 [Thread-43] DEBUG org.apache.hadoop.ipc.RPC - Call: addBlock 4
10:01:07.145 [Thread-43] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
10:01:07.145 [Thread-43] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 10.241.3.35:41943
10:01:07.145 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException: Connection refused
10:01:07.146 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Abandoning blk_636047722725107470_1001
10:01:07.146 [Thread-43] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #12
10:01:07.146 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #12
10:01:07.146 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 47220: has #12 from 10.241.3.35:6390
10:01:07.146 [IPC Server handler 3 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.146 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.abandonBlock: blk_636047722725107470_1001 of /user/mingtzha/hbase/hbase.version
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_636047722725107470_1001of /user/mingtzha/hbase/hbase.version
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile: /user/mingtzha/hbase/hbase.version with blk_636047722725107470_1001 is added to the
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock: blk_636047722725107470_1001 is removed from pendingCreates
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks is persisted
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime= 0 procesingTime= 1
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #12 from 10.241.3.35:6390
10:01:07.147 [IPC Server handler 3 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #12 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.147 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #12
10:01:07.148 [Thread-43] DEBUG org.apache.hadoop.ipc.RPC - Call: abandonBlock 2
10:01:07.148 [Thread-43] INFO org.apache.hadoop.hdfs.DFSClient - Excluding datanode 10.241.3.35:41943
10:01:07.148 [Thread-43] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #13
10:01:07.148 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #13
10:01:07.148 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220: has #13 from 10.241.3.35:6390
10:01:07.148 [IPC Server handler 0 on 47220] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.149 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - *BLOCK* NameNode.addBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.149 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.hdfs.StateChange - BLOCK* getAdditionalBlock: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.149 [IPC Server handler 0 on 47220] WARN o.a.h.h.server.namenode.FSNamesystem - Not able to place enough replicas, still in need of 1 to reach 1
Not able to place enough replicas
10:01:07.149 [IPC Server handler 0 on 47220] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
10:01:07.150 [IPC Server handler 0 on 47220] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220, call addBlock(/user/mingtzha/hbase/hbase.version, DFSClient_NONMAPREDUCE_-1468295212_1, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@40819d8c) from 10.241.3.35:6390: error: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783) ~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
10:01:07.152 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #13 from 10.241.3.35:6390
10:01:07.152 [IPC Server handler 0 on 47220] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #13 from 10.241.3.35:6390 Wrote 1070 bytes.
10:01:07.152 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #13
10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - Error Recovery for blk_636047722725107470_1001 bad datanode[0] nodes == null
10:01:07.153 [Thread-43] WARN org.apache.hadoop.hdfs.DFSClient - Could not get block locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
10:01:07.154 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase, retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

10:01:07.154 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #14
10:01:07.157 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #14
10:01:07.157 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 4 on 47220:
has #14 from 10.241.3.35:6390
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
src=/user/mingtzha/hbase/hbase.version, recursive=false
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.delete:
/user/mingtzha/hbase/hbase.version
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
10:01:07.158 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.h.server.namenode.LeaseManager [0;39m - LeaseManager.findLease:
prefix=/user/mingtzha/hbase/hbase.version
10:01:07.159 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.h.server.namenode.LeaseManager [0;39m -
LeaseManager.removeLeaseWithPrefixPath:
entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder:
DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
10:01:07.160 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version is removed
10:01:07.160 [IPC Server handler 4 on 47220] [34mINFO [0;39m
[1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=null
10:01:07.161 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 1
procesingTime= 2
10:01:07.161 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #14 from 10.241.3.35:6390
10:01:07.161 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #14 from 10.241.3.35:6390 Wrote 18 bytes.
10:01:07.161 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #14
10:01:07.161 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
- Call: delete 7
10:01:07.161 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
10:01:07.161 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
chunkSize=516, chunksPerPacket=127, packetSize=65557
10:01:07.161 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #15
10:01:07.161 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #15
10:01:07.162 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 5 on 47220:
has #15 from 10.241.3.35:6390
10:01:07.162 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.162 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
/user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
at 10.241.3.35
10:01:07.163 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
src=/user/mingtzha/hbase/hbase.version,
holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35,
createParent=true, replication=1, overwrite=true, append=false
10:01:07.163 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
10:01:07.164 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* addFile:
/user/mingtzha/hbase/hbase.version is added
10:01:07.164 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: add
/user/mingtzha/hbase/hbase.version to namespace for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.165 [IPC Server handler 5 on 47220] [34mINFO [0;39m
[1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=mingtzha:supergroup:rw-r--r--
10:01:07.165 [IPC Server handler 5 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: create queueTime= 0
procesingTime= 3
10:01:07.165 [IPC Server handler 5 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #15 from 10.241.3.35:6390
10:01:07.165 [IPC Server handler 5 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #15 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.165 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #15
10:01:07.166 [main] DEBUG org.apache.hadoop.ipc.RPC
- Call: create 5
10:01:07.166 [main] DEBUG
org.apache.hadoop.hbase.util.FSUtils - Created version file at
hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
10:01:07.166 [main] DEBUG org.apache.hadoop.hdfs.DFSClient
- DFSClient writeChunk allocating new packet seqno=0,
src=/user/mingtzha/hbase/hbase.version, packetSize=65557,
chunksPerPacket=127, bytesCurBlock=0
10:01:07.166 [Thread-45] DEBUG
org.apache.hadoop.hdfs.DFSClient - Allocating new block
10:01:07.166 [Thread-45] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #16
10:01:07.167 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #16
10:01:07.167 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 9 on 47220:
has #16 from 10.241.3.35:6390
10:01:07.167 [IPC Server handler 9 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.167 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.167 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.168 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_3421159067449879113_1002 is
added to the in-memory file system
10:01:07.168 [IPC Server handler 9 on 47220] INFO
org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock:
/user/mingtzha/hbase/hbase.version. blk_3421159067449879113_1002
10:01:07.168 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks
is persisted
10:01:07.168 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 0
procesingTime= 1
10:01:07.169 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #16 from 10.241.3.35:6390
10:01:07.169 [IPC Server handler 9 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #16 from 10.241.3.35:6390 Wrote 278 bytes.
10:01:07.169 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #16
10:01:07.170 [Thread-45] DEBUG org.apache.hadoop.ipc.RPC
- Call: addBlock 4
10:01:07.170 [Thread-45] DEBUG
org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
10:01:07.170 [Thread-45] DEBUG
org.apache.hadoop.hdfs.DFSClient - Connecting to
10.241.3.35:41943
10:01:07.170 [Thread-45] INFO
org.apache.hadoop.hdfs.DFSClient - Exception in
createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException:
Connection refused
10:01:07.170 [Thread-45] INFO
org.apache.hadoop.hdfs.DFSClient - Abandoning
blk_3421159067449879113_1002
10:01:07.170 [Thread-45] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #17
10:01:07.170 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #17
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 6 on 47220:
has #17 from 10.241.3.35:6390
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.abandonBlock: blk_3421159067449879113_1002 of
/user/mingtzha/hbase/hbase.version
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock:
blk_3421159067449879113_1002of /user/mingtzha/hbase/hbase.version
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_3421159067449879113_1002 is
added to the
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock:
blk_3421159067449879113_1002 is removed from pendingCreates
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks
is persisted
10:01:07.171 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime=
0 procesingTime= 0
10:01:07.172 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #17 from 10.241.3.35:6390
10:01:07.172 [IPC Server handler 6 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #17 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.172 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #17
10:01:07.172 [Thread-45] DEBUG org.apache.hadoop.ipc.RPC
- Call: abandonBlock 2
10:01:07.172 [Thread-45] INFO
org.apache.hadoop.hdfs.DFSClient - Excluding datanode
10.241.3.35:41943
10:01:07.172 [Thread-45] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #18
10:01:07.172 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #18
10:01:07.173 [IPC Server handler 7 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 7 on 47220:
has #18 from 10.241.3.35:6390
10:01:07.173 [IPC Server handler 7 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.173 [IPC Server handler 7 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.173 [IPC Server handler 7 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.173 [IPC Server handler 7 on 47220] WARN
o.a.h.h.server.namenode.FSNamesystem - Not able to place
enough replicas, still in need of 1 to reach 1
Not able to place enough replicas
10:01:07.173 [IPC Server handler 7 on 47220] ERROR
o.a.h.security.UserGroupInformation -
PriviledgedActionException as:mingtzha cause:java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
10:01:07.174 [IPC Server handler 7 on 47220] INFO
org.apache.hadoop.ipc.Server - IPC Server handler 7 on 47220,
call addBlock(/user/mingtzha/hbase/hbase.version,
DFSClient_NONMAPREDUCE_-1468295212_1,
[Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@4b9ac7f5) from
10.241.3.35:6390: error: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be
replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
10:01:07.174 [IPC Server handler 7 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #18 from 10.241.3.35:6390
10:01:07.174 [IPC Server handler 7 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #18 from 10.241.3.35:6390 Wrote 1070 bytes.
10:01:07.174 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #18
10:01:07.175 [Thread-45] WARN
org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

10:01:07.175 [Thread-45] WARN
org.apache.hadoop.hdfs.DFSClient - Error Recovery for
blk_3421159067449879113_1002 bad datanode[0] nodes == null
10:01:07.175 [Thread-45] WARN
org.apache.hadoop.hdfs.DFSClient - Could not get block
locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
10:01:07.177 [main] WARN
org.apache.hadoop.hbase.util.FSUtils - Unable to create
version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase,
retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version
could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

10:01:07.177 [main] DEBUG org.apache.hadoop.ipc.Client
- IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #19
10:01:07.177 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #19
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 8 on 47220:
has #19 from 10.241.3.35:6390
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete:
src=/user/mingtzha/hbase/hbase.version, recursive=false
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* delete:
/user/mingtzha/hbase/hbase.version
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.security.Groups - Returning cached groups
for 'mingtzha'
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete:
/user/mingtzha/hbase/hbase.version
10:01:07.178 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR*
FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
10:01:07.179 [IPC Server handler 8 on 47220] DEBUG
o.a.h.h.server.namenode.LeaseManager - LeaseManager.findLease:
prefix=/user/mingtzha/hbase/hbase.version
10:01:07.179 [IPC Server handler 8 on 47220] DEBUG
o.a.h.h.server.namenode.LeaseManager -
LeaseManager.removeLeaseWithPrefixPath:
entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder:
DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
10:01:07.179 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* delete:
/user/mingtzha/hbase/hbase.version is removed
10:01:07.179 [IPC Server handler 8 on 47220] INFO
o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/
10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=null
10:01:07.180 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: delete queueTime= 0
procesingTime= 2
10:01:07.180 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #19 from 10.241.3.35:6390
10:01:07.180 [IPC Server handler 8 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #19 from 10.241.3.35:6390 Wrote 18 bytes.
10:01:07.180 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #19
10:01:07.180 [main] DEBUG org.apache.hadoop.ipc.RPC
- Call: delete 3
10:01:07.180 [main] DEBUG org.apache.hadoop.hdfs.DFSClient
- /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
10:01:07.180 [main] DEBUG org.apache.hadoop.hdfs.DFSClient
- computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
chunkSize=516, chunksPerPacket=127, packetSize=65557
10:01:07.180 [main] DEBUG org.apache.hadoop.ipc.Client
- IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #20
10:01:07.181 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #20
10:01:07.181 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 2 on 47220:
has #20 from 10.241.3.35:6390
10:01:07.181 [IPC Server handler 2 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.181 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create:
/user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
at 10.241.3.35
10:01:07.201 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* startFile:
src=/user/mingtzha/hbase/hbase.version,
holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35,
createParent=true, replication=1, overwrite=true, append=false
10:01:07.201 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.security.Groups - Returning cached groups
for 'mingtzha'
10:01:07.202 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* addFile:
/user/mingtzha/hbase/hbase.version is added
10:01:07.202 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* startFile: add
/user/mingtzha/hbase/hbase.version to namespace for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.203 [IPC Server handler 2 on 47220] INFO
o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/
10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=mingtzha:supergroup:rw-r--r--
10:01:07.203 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: create queueTime= 0
procesingTime= 22
10:01:07.204 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #20 from 10.241.3.35:6390
10:01:07.206 [IPC Server handler 2 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #20 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.206 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #20
10:01:07.206 [main] DEBUG org.apache.hadoop.ipc.RPC
- Call: create 26
10:01:07.207 [main] DEBUG
org.apache.hadoop.hbase.util.FSUtils - Created version file at
hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
10:01:07.207 [main] DEBUG org.apache.hadoop.hdfs.DFSClient
- DFSClient writeChunk allocating new packet seqno=0,
src=/user/mingtzha/hbase/hbase.version, packetSize=65557,
chunksPerPacket=127, bytesCurBlock=0
10:01:07.212 [Thread-46] DEBUG
org.apache.hadoop.hdfs.DFSClient - Allocating new block
10:01:07.213 [Thread-46] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #21
10:01:07.213 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #21
10:01:07.213 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 1 on 47220:
has #21 from 10.241.3.35:6390
10:01:07.213 [IPC Server handler 1 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.213 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.213 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.214 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_-3603230159394873750_1003 is
added to the in-memory file system
10:01:07.214 [IPC Server handler 1 on 47220] INFO
org.apache.hadoop.hdfs.StateChange - BLOCK* allocateBlock:
/user/mingtzha/hbase/hbase.version. blk_-3603230159394873750_1003
10:01:07.214 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks
is persisted
10:01:07.214 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: addBlock queueTime= 0
procesingTime= 1
10:01:07.214 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #21 from 10.241.3.35:6390
10:01:07.214 [IPC Server handler 1 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #21 from 10.241.3.35:6390 Wrote 278 bytes.
10:01:07.214 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #21
10:01:07.214 [Thread-46] DEBUG org.apache.hadoop.ipc.RPC
- Call: addBlock 2
10:01:07.214 [Thread-46] DEBUG
org.apache.hadoop.hdfs.DFSClient - pipeline = 10.241.3.35:41943
10:01:07.215 [Thread-46] DEBUG
org.apache.hadoop.hdfs.DFSClient - Connecting to
10.241.3.35:41943
10:01:07.215 [Thread-46] INFO
org.apache.hadoop.hdfs.DFSClient - Exception in
createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException:
Connection refused
10:01:07.215 [Thread-46] INFO
org.apache.hadoop.hdfs.DFSClient - Abandoning
blk_-3603230159394873750_1003
10:01:07.215 [Thread-46] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #22
10:01:07.215 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #22
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 3 on 47220:
has #22 from 10.241.3.35:6390
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.abandonBlock: blk_-3603230159394873750_1003 of
/user/mingtzha/hbase/hbase.version
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock:
blk_-3603230159394873750_1003of /user/mingtzha/hbase/hbase.version
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_-3603230159394873750_1003 is
added to the
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK* abandonBlock:
blk_-3603230159394873750_1003 is removed from pendingCreates
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks
is persisted
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.ipc.Server - Served: abandonBlock queueTime=
0 procesingTime= 0
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #22 from 10.241.3.35:6390
10:01:07.216 [IPC Server handler 3 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #22 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.216 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #22
10:01:07.217 [Thread-46] DEBUG org.apache.hadoop.ipc.RPC
- Call: abandonBlock 2
10:01:07.217 [Thread-46] INFO
org.apache.hadoop.hdfs.DFSClient - Excluding datanode
10.241.3.35:41943
10:01:07.217 [Thread-46] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #23
10:01:07.217 [pool-1-thread-1] DEBUG
org.apache.hadoop.ipc.Server -  got #23
10:01:07.217 [IPC Server handler 0 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220:
has #23 from 10.241.3.35:6390
10:01:07.217 [IPC Server handler 0 on 47220] DEBUG
o.a.h.security.UserGroupInformation - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.217 [IPC Server handler 0 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.217 [IPC Server handler 0 on 47220] DEBUG
org.apache.hadoop.hdfs.StateChange - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.218 [IPC Server handler 0 on 47220] WARN
o.a.h.h.server.namenode.FSNamesystem - Not able to place
enough replicas, still in need of 1 to reach 1
Not able to place enough replicas
10:01:07.218 [IPC Server handler 0 on 47220] ERROR
o.a.h.security.UserGroupInformation -
PriviledgedActionException as:mingtzha cause:java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
10:01:07.218 [IPC Server handler 0 on 47220] INFO
org.apache.hadoop.ipc.Server - IPC Server handler 0 on 47220,
call addBlock(/user/mingtzha/hbase/hbase.version,
DFSClient_NONMAPREDUCE_-1468295212_1,
[Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@18c3539e) from
10.241.3.35:6390: error: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be
replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
10:01:07.219 [IPC Server handler 0 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #23 from 10.241.3.35:6390
10:01:07.219 [IPC Server handler 0 on 47220] DEBUG
org.apache.hadoop.ipc.Server - IPC Server Responder:
responding to #23 from 10.241.3.35:6390 Wrote 1070 bytes.
10:01:07.219 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] DEBUG
org.apache.hadoop.ipc.Client - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #23
10:01:07.219 [Thread-46] WARN
org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

10:01:07.219 [Thread-46] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Error Recovery for
blk_-3603230159394873750_1003 bad datanode[0] nodes == null
10:01:07.219 [Thread-46] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Could not get block
locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
10:01:07.219 [main] [31mWARN [0;39m
[1;35morg.apache.hadoop.hbase.util.FSUtils [0;39m - Unable to create
version file at hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase,
retrying: java.io.IOException: File /user/mingtzha/hbase/hbase.version
could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

10:01:07.220 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #24
10:01:07.220 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #24
10:01:07.220 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 4 on 47220:
has #24 from 10.241.3.35:6390
10:01:07.220 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.220 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
src=/user/mingtzha/hbase/hbase.version, recursive=false
10:01:07.220 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version
10:01:07.220 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.delete:
/user/mingtzha/hbase/hbase.version
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.unprotectedDelete: /user/mingtzha/hbase/hbase.version is removed
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.h.server.namenode.LeaseManager [0;39m - LeaseManager.findLease:
prefix=/user/mingtzha/hbase/hbase.version
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.h.server.namenode.LeaseManager [0;39m -
LeaseManager.removeLeaseWithPrefixPath:
entry=/user/mingtzha/hbase/hbase.version=[Lease.  Holder:
DFSClient_NONMAPREDUCE_-1468295212_1, pendingcreates: 1]
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
/user/mingtzha/hbase/hbase.version is removed
10:01:07.221 [IPC Server handler 4 on 47220] [34mINFO [0;39m
[1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
10.241.3.35    cmd=delete    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=null
10:01:07.221 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 0
procesingTime= 1
10:01:07.222 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #24 from 10.241.3.35:6390
10:01:07.222 [IPC Server handler 4 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #24 from 10.241.3.35:6390 Wrote 18 bytes.
10:01:07.222 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #24
10:01:07.222 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
- Call: delete 2
10:01:07.222 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
10:01:07.222 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
chunkSize=516, chunksPerPacket=127, packetSize=65557
10:01:07.222 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
[0;39m - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:47220
from mingtzha sending #25
10:01:07.222 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #25
10:01:07.222 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 5 on 47220:
has #25 from 10.241.3.35:6390
10:01:07.222 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.223 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
/user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-1468295212_1
at 10.241.3.35
10:01:07.223 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
src=/user/mingtzha/hbase/hbase.version,
holder=DFSClient_NONMAPREDUCE_-1468295212_1, clientMachine=10.241.3.35,
createParent=true, replication=1, overwrite=true, append=false
10:01:07.223 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
for 'mingtzha'
10:01:07.223 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* addFile:
/user/mingtzha/hbase/hbase.version is added
10:01:07.223 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: add
/user/mingtzha/hbase/hbase.version to namespace for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.224 [IPC Server handler 5 on 47220] [34mINFO [0;39m
[1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
10.241.3.35    cmd=create    src=/user/mingtzha/hbase/hbase.version
dst=null    perm=mingtzha:supergroup:rw-r--r--
10:01:07.224 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: create queueTime= 1
procesingTime= 1
10:01:07.224 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #25 from 10.241.3.35:6390
10:01:07.224 [IPC Server handler 5 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #25 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.224 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #25
10:01:07.224 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
- Call: create 2
10:01:07.225 [main] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hbase.util.FSUtils [0;39m - Created version file at
hdfs://slc05muw.us.**.com:47220/user/mingtzha/hbase set its version at:7
10:01:07.225 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
[0;39m - DFSClient writeChunk allocating new packet seqno=0,
src=/user/mingtzha/hbase/hbase.version, packetSize=65557,
chunksPerPacket=127, bytesCurBlock=0
10:01:07.225 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Allocating new block
10:01:07.225 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #26
10:01:07.225 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #26
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 9 on 47220:
has #26 from 10.241.3.35:6390
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_-686922257401561708_1004 is
added to the in-memory file system
10:01:07.226 [IPC Server handler 9 on 47220] [34mINFO [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK* allocateBlock:
/user/mingtzha/hbase/hbase.version. blk_-686922257401561708_1004
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 1 blocks
is persisted
10:01:07.226 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: addBlock queueTime= 1
procesingTime= 0
10:01:07.227 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #26 from 10.241.3.35:6390
10:01:07.227 [IPC Server handler 9 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #26 from 10.241.3.35:6390 Wrote 278 bytes.
10:01:07.227 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #26
10:01:07.227 [Thread-47] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
[0;39m - Call: addBlock 2
10:01:07.227 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - pipeline = 10.241.3.35:41943
10:01:07.227 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Connecting to
10.241.3.35:41943
10:01:07.227 [Thread-47] [34mINFO [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Exception in
createBlockOutputStream 10.241.3.35:41943 java.net.ConnectException:
Connection refused
10:01:07.227 [Thread-47] [34mINFO [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Abandoning
blk_-686922257401561708_1004
10:01:07.228 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #27
10:01:07.228 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #27
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on 47220:
has #27 from 10.241.3.35:6390
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *BLOCK*
NameNode.abandonBlock: blk_-686922257401561708_1004 of
/user/mingtzha/hbase/hbase.version
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK* abandonBlock:
blk_-686922257401561708_1004of /user/mingtzha/hbase/hbase.version
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.addFile:
/user/mingtzha/hbase/hbase.version with blk_-686922257401561708_1004 is
added to the
10:01:07.228 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK* abandonBlock:
blk_-686922257401561708_1004 is removed from pendingCreates
10:01:07.229 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
FSDirectory.persistBlocks: /user/mingtzha/hbase/hbase.version with 0 blocks
is persisted
10:01:07.229 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - Served: abandonBlock queueTime=
0 procesingTime= 1
10:01:07.229 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #27 from 10.241.3.35:6390
10:01:07.229 [IPC Server handler 6 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #27 from 10.241.3.35:6390 Wrote 95 bytes.
10:01:07.229 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #27
10:01:07.229 [Thread-47] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
[0;39m - Call: abandonBlock 1
10:01:07.229 [Thread-47] [34mINFO [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Excluding datanode
10.241.3.35:41943
10:01:07.229 [Thread-47] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha sending #28
10:01:07.229 [pool-1-thread-1] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m -  got #28
10:01:07.231 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on 47220:
has #28 from 10.241.3.35:6390
10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *BLOCK*
NameNode.addBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.232 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.hdfs.StateChange [0;39m - BLOCK*
getAdditionalBlock: /user/mingtzha/hbase/hbase.version for
DFSClient_NONMAPREDUCE_-1468295212_1
10:01:07.232 [IPC Server handler 7 on 47220] [31mWARN [0;39m
[1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Not able to place
enough replicas, still in need of 1 to reach 1
Not able to place enough replicas
10:01:07.233 [IPC Server handler 7 on 47220] [1;31mERROR [0;39m
[1;35mo.a.h.security.UserGroupInformation [0;39m -
PriviledgedActionException as:mingtzha cause:java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
10:01:07.233 [IPC Server handler 7 on 47220] [34mINFO [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on 47220,
call addBlock(/user/mingtzha/hbase/hbase.version,
DFSClient_NONMAPREDUCE_-1468295212_1,
[Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@31c7141c) from
10.241.3.35:6390: error: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
java.io.IOException: File /user/mingtzha/hbase/hbase.version could only be
replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
~[hadoop-core-1.2.1.jar:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
~[hadoop-core-1.2.1.jar:na]
    at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
~[hadoop-core-1.2.1.jar:na]
10:01:07.233 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #28 from 10.241.3.35:6390
10:01:07.233 [IPC Server handler 7 on 47220] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
responding to #28 from 10.241.3.35:6390 Wrote 1070 bytes.
10:01:07.233 [IPC Client (47) connection to slc05muw.us.**.com/
10.241.3.35:47220 from mingtzha] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
slc05muw.us.**.com/10.241.3.35:47220 from mingtzha got value #28
10:01:07.234 [Thread-47] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

10:01:07.234 [Thread-47] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Error Recovery for
blk_-686922257401561708_1004 bad datanode[0] nodes == null
10:01:07.234 [Thread-47] [31mWARN [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Could not get block
locations. Source file "/user/mingtzha/hbase/hbase.version" - Aborting...
10:01:07.234 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
HBaseTestSample.setup
10:01:07.242 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Started
HBaseTestSample.testInsert
10:01:07.242 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
HBaseTestSample.testInsert
FAILED CONFIGURATION: @BeforeMethod setup
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

SKIPPED CONFIGURATION: @AfterMethod destroy
SKIPPED: testInsert

===============================================
    Default test
    Tests run: 1, Failures: 0, Skips: 1
    Configuration Failures: 1, Skips: 1
===============================================

10:01:07.264 [main] [34mINFO [0;39m [1;35mtest [0;39m - Finished Suite
[Default suite]

===============================================
Default suite
Total tests run: 1, Failures: 0, Skips: 1
Configuration Failures: 1, Skips: 1
===============================================

[TestNG] Time taken by org.testng.reporters.XMLReporter@6e6be2c6: 5 ms
[TestNG] Time taken by org.testng.reporters.EmailableReporter2@49ab9d75: 5
ms
[TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 8 ms
[TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@686fe5d3: 7 ms
[TestNG] Time taken by org.testng.reporters.JUnitReportReporter@31b4f6aa: 5
ms
[TestNG] Time taken by org.testng.reporters.jq.Main@5c6ed020: 25 ms
10:01:07.322 [Thread-0] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.fs.FileSystem [0;39m - Starting clear of FileSystem
cache with 2 elements.
10:01:07.330 [Thread-0] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.fs.FileSystem [0;39m - Removing filesystem for
file:///
10:01:07.330 [Thread-0] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.fs.FileSystem [0;39m - Removing filesystem for
file:///
10:01:07.331 [Thread-0] [1;31mERROR [0;39m
[1;35morg.apache.hadoop.hdfs.DFSClient [0;39m - Failed to close file
/user/mingtzha/hbase/hbase.version
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/mingtzha/hbase/hbase.version could only be replicated to 0 nodes,
instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
~[hadoop-core-1.2.1.jar:na]
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
~[hadoop-core-1.2.1.jar:na]
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source) ~[na:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.7.0_45]
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
~[na:1.7.0_45]
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.7.0_45]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
~[hadoop-core-1.2.1.jar:na]
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source) ~[na:na]
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
~[hadoop-core-1.2.1.jar:na]
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
~[hadoop-core-1.2.1.jar:na]
10:01:07.332 [Thread-0] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.fs.FileSystem [0;39m - Removing filesystem for
hdfs://slc05muw.us.**.com:47220
10:01:07.332 [Thread-0] [39mDEBUG [0;39m
[1;35morg.apache.hadoop.fs.FileSystem [0;39m - Done clearing cache


Mingtao


On Sat, Aug 2, 2014 at 9:57 AM, Matteo Bertozzi <th...@gmail.com>
wrote:

> The error says: "failed to create file /user/mingtzha/hbase/hbase.version
> on client 10.241.3.35. Requested replication 0 is less than the required
> minimum 1"
>
> The 0 comes from here: utility.startMiniCluster(0);
>
> Matteo
>
>
>
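[Editorial note: Matteo's diagnosis above is the root cause — `startMiniCluster(0)` asks for zero slaves, so the embedded mini-DFS comes up with no datanode and the write of hbase.version "could only be replicated to 0 nodes, instead of 1". A minimal sketch of a corrected TestNG setup follows; the class name `HBaseTestSample` and the `@BeforeMethod setup` / `@AfterMethod destroy` methods are taken from the log output above, while the field name and the rest of the body are assumptions, not the poster's actual code.]

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class HBaseTestSample {

    // Shared testing utility; spins up an in-process HDFS + HBase cluster.
    private HBaseTestingUtility utility;

    @BeforeMethod
    public void setup() throws Exception {
        utility = new HBaseTestingUtility();
        // The int argument is the number of slaves (region servers backed by
        // datanodes). It must be >= 1, otherwise HDFS has nowhere to place
        // the first replica of hbase.version. This was the bug:
        //   utility.startMiniCluster(0);  // -> "replicated to 0 nodes"
        utility.startMiniCluster(1);
    }

    @AfterMethod
    public void destroy() throws Exception {
        // Tear the mini cluster down so each test starts clean.
        utility.shutdownMiniCluster();
    }
}
```

With at least one slave started, the `hbase.version` file can satisfy the default minimum replication of 1 and `setup` should complete.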
> On Sat, Aug 2, 2014 at 5:51 PM, Mingtao Zhang <ma...@gmail.com>
> wrote:
>
> > HI,
> >
> > I am really stuck with this. Putting the stack trace, java file,
> hbase-site
> > file and pom file here.
> >
> > I have 0 knowledge about hadoop and expecting it's transparent for my
> > integration test :(.
> >
> > Thanks in advance!
> >
> > Best Regards,
> > Mingtao
> >
> > The stack trace:
> >
> > 09:42:33.191 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/x.tld
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/c-1_0-rt.tld
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN,
> > http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - Can't
> > exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
> > 09:42:33.195 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd -->
> >
> >
> jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
> > 09:42:33.200 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/fmt.tld
> > 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Container Server@9f51be6 +
> > org.mortbay.jetty.servlet.HashSessionIdManager@445e0565 as sessionIdManager
> > 09:42:33.204 [main] DEBUG org.mortbay.log - Init SecureRandom.
> > 09:42:33.204 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.HashSessionIdManager@445e0565
> > 09:42:33.205 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.HashSessionManager@738f651f
> > 09:42:33.206 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety, krb5Filter=krb5Filter}
> > 09:42:33.206 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.206 [main] DEBUG org.mortbay.log - servletFilterMap=null
> > 09:42:33.206 [main] DEBUG org.mortbay.log - servletPathMap={*.XSP=jsp, *.jsp=jsp, /getimage=getimage, /cancelDelegationToken=cancelDelegationToken, *.JSPF=jsp, *.jspx=jsp, /listPaths/*=listPaths, /conf=conf, *.xsp=jsp, /=default, /fsck=fsck, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /data/*=data, /contentSummary/*=contentSummary, /renewDelegationToken=renewDelegationToken, /getDelegationToken=getDelegationToken, /fileChecksum/*=checksum, *.JSP=jsp, /jmx=jmx}
> > 09:42:33.206 [main] DEBUG org.mortbay.log - servletNameMap={getDelegationToken=getDelegationToken, jsp=jsp, jmx=jmx, data=data, checksum=checksum, conf=conf, stacks=stacks, fsck=fsck, cancelDelegationToken=cancelDelegationToken, listPaths=listPaths, default=default, logLevel=logLevel, contentSummary=contentSummary, getimage=getimage, renewDelegationToken=renewDelegationToken}
> > 09:42:33.206 [main] DEBUG org.mortbay.log - starting ServletHandler@3fd5e2ae
> > 09:42:33.206 [main] DEBUG org.mortbay.log - started ServletHandler@3fd5e2ae
> > 09:42:33.206 [main] DEBUG org.mortbay.log - starting SecurityHandler@51f35aea
> > 09:42:33.207 [main] DEBUG org.mortbay.log - started SecurityHandler@51f35aea
> > 09:42:33.207 [main] DEBUG org.mortbay.log - starting SessionHandler@73152e3f
> > 09:42:33.207 [main] DEBUG org.mortbay.log - started SessionHandler@73152e3f
> > 09:42:33.207 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.webapp.WebAppContext@7cbc11d {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> > 09:42:33.207 [main] DEBUG org.mortbay.log - starting ErrorPageErrorHandler@4b38117e
> > 09:42:33.207 [main] DEBUG org.mortbay.log - started ErrorPageErrorHandler@4b38117e
> > 09:42:33.207 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter from sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.207 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter
> > 09:42:33.208 [main] DEBUG org.mortbay.log - started krb5Filter
> > 09:42:33.208 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.http.HttpServer$QuotingInputFilter from sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.208 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.210 [main] DEBUG org.mortbay.log - started safety
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started conf
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started cancelDelegationToken
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started contentSummary
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started checksum
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started data
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started fsck
> > 09:42:33.211 [main] DEBUG org.mortbay.log - started getDelegationToken
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started getimage
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started listPaths
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started renewDelegationToken
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started stacks
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started jmx
> > 09:42:33.212 [main] DEBUG org.mortbay.log - started logLevel
> > 09:42:33.212 [main] DEBUG org.mortbay.log - loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.212 [main] DEBUG org.mortbay.log - Holding class org.apache.jasper.servlet.JspServlet
> > 09:42:33.250 [main] DEBUG o.a.j.compiler.JspRuntimeContext - PWC5965: Parent class loader is: ContextLoader@WepAppsContext ([]) / sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5964: Scratch dir for the JSP engine is: /tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/jsp
> > 09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5966: IMPORTANT: Do not modify the generated servlets
> > 09:42:33.252 [main] DEBUG org.mortbay.log - started jsp
> > 09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet from sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.252 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@576f8821
> > 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.ResourceCache@5b525b5f
> > 09:42:33.258 [main] DEBUG org.mortbay.log - resource base = file:/tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/webapp/
> > 09:42:33.258 [main] DEBUG org.mortbay.log - started default
> > 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.webapp.WebAppContext@7cbc11d {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> > 09:42:33.258 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir} + ErrorHandler@7bece8cf as errorHandler
> > 09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.259 [main] DEBUG org.mortbay.log - servletFilterMap=null
> > 09:42:33.259 [main] DEBUG org.mortbay.log - servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - starting ServletHandler@cf7ea2e
> > 09:42:33.259 [main] DEBUG org.mortbay.log - started ServletHandler@cf7ea2e
> > 09:42:33.259 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - starting ErrorHandler@7bece8cf
> > 09:42:33.259 [main] DEBUG org.mortbay.log - started ErrorHandler@7bece8cf
> > 09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.259 [main] DEBUG org.mortbay.log - started safety
> > 09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.AdminAuthorizedServlet
> > 09:42:33.259 [main] DEBUG org.mortbay.log - started org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
> > 09:42:33.259 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static} + ErrorHandler@7ea8ad98 as errorHandler
> > 09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
> > 09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.260 [main] DEBUG org.mortbay.log - servletFilterMap=null
> > 09:42:33.260 [main] DEBUG org.mortbay.log - servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> > 09:42:33.260 [main] DEBUG org.mortbay.log - servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> > 09:42:33.260 [main] DEBUG org.mortbay.log - starting ServletHandler@23510a7e
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started ServletHandler@23510a7e
> > 09:42:33.260 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> > 09:42:33.260 [main] DEBUG org.mortbay.log - starting ErrorHandler@7ea8ad98
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started ErrorHandler@7ea8ad98
> > 09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started safety
> > 09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet-1788226358
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> > 09:42:33.260 [main] DEBUG org.mortbay.log - starting ContextHandlerCollection@5a4950dd
> > 09:42:33.260 [main] DEBUG org.mortbay.log - started ContextHandlerCollection@5a4950dd
> > 09:42:33.260 [main] DEBUG org.mortbay.log - starting Server@9f51be6
> > 09:42:33.264 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.nio.SelectChannelConnector$1@501a7f06
> > 09:42:33.272 [main] INFO org.mortbay.log - Started SelectChannelConnector@localhost.localdomain:1543
> > 09:42:33.273 [main] DEBUG org.mortbay.log - started SelectChannelConnector@localhost.localdomain:1543
> > 09:42:33.273 [main] DEBUG org.mortbay.log - started Server@9f51be6
> > 09:42:33.273 [main] INFO o.a.h.hdfs.server.namenode.NameNode - Web-server up at: localhost.localdomain:1543
> > 09:42:33.274 [IPC Server listener on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server listener on 41118: starting
> > 09:42:33.274 [IPC Server Responder] INFO org.apache.hadoop.ipc.Server - IPC Server Responder: starting
> > 09:42:33.275 [IPC Server handler 0 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: starting
> > 09:42:33.276 [IPC Server handler 1 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: starting
> > 09:42:33.277 [IPC Server handler 3 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: starting
> > 09:42:33.277 [IPC Server handler 4 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: starting
> > 09:42:33.277 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: starting
> > 09:42:33.281 [IPC Server handler 5 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 5 on 41118: starting
> > 09:42:33.281 [IPC Server handler 6 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 6 on 41118: starting
> > 09:42:33.281 [IPC Server handler 7 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 7 on 41118: starting
> > 09:42:33.281 [IPC Server handler 8 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 8 on 41118: starting
> > 09:42:33.283 [IPC Server handler 9 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 9 on 41118: starting
> > 09:42:33.287 [main] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://slc05muw.us.**.com:41118
> > 09:42:33.321 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> > 09:42:33.328 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is60000ms.
> > 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Use SIMPLE authentication for protocol ClientProtocol
> > 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to slc05muw.us.**.com/10.241.3.35:41118
> > 09:42:33.337 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #0
> > 09:42:33.337 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: starting, having connections 1
> > 09:42:33.337 [IPC Server listener on 41118] DEBUG org.apache.hadoop.ipc.Server - Server connection from 10.241.3.35:24701; # active connections: 1; # queued calls: 0
> > 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - Successfully authorized org.apache.hadoop.hdfs.protocol.ClientProtocol-mingtzha
> > 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #0
> > 09:42:33.338 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: has #0 from 10.241.3.35:24701
> > 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 1 procesingTime= 0
> > 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701
> > 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701 Wrote 22 bytes.
> > 09:42:33.340 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #0
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 17
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> > 09:42:33.343 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> > 09:42:33.343 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #1
> > 09:42:33.344 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #1
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: has #1 from 10.241.3.35:24701
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 0 procesingTime= 0
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701 Wrote 22 bytes.
> > 09:42:33.344 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #1
> > 09:42:33.344 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 1
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #2
> > 09:42:33.345 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #2
> > 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: has #2 from 10.241.3.35:24701
> > 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning fetched groups for 'mingtzha'
> > 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getDatanodeReport queueTime= 0 procesingTime= 11
> > 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701
> > 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
> > 09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
> > 09:42:33.357 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getDatanodeReport 12
> > Cluster is active
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:host.name=slc05muw.us.**.com
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.version=1.7.0_45
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=** Corporation
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server
> environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzha
/.m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
> >
> >
> 1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtworks
/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcl
ient/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activat
ion-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/r
epository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.version=2.6.39-300.20.1.el6uek.x86_64
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:user.name=mingtzha
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/mingtzha
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
> > 09:42:33.380 [main] DEBUG o.a.z.s.persistence.FileTxnSnapLog - Opening datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0 snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
> > 09:42:33.394 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2 snapdir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> > 09:42:33.400 [main] INFO o.a.z.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:51126
> > 09:42:33.405 [main] INFO o.a.z.s.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
> > 09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.z.server.NIOServerCnxnFactory - Accepted socket connection from /10.241.3.35:44625
> > 09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.zookeeper.server.NIOServerCnxn - Processing stat command from /10.241.3.35:44625
> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Stat command output
> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /10.241.3.35:44625 (no session established for client)
> > 09:42:33.442 [main] INFO o.a.h.h.z.MiniZooKeeperCluster - Started MiniZK Cluster and connect 1 ZK server on client port: 51126
> > 09:42:33.443 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase: masked=rwxr-xr-x
> > 09:42:33.443 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #3
> > 09:42:33.444 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #3
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: has #3 from 10.241.3.35:24701
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.mkdirs: /user/mingtzha/hbase
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* mkdirs: /user/mingtzha/hbase
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha/hbase
> > 09:42:33.448 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> > 09:42:33.452 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> > 09:42:33.455 [IPC Server handler 4 on 41118] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null    perm=mingtzha:supergroup:rwxr-xr-x
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: mkdirs queueTime= 0 procesingTime= 10
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
> > 09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
> > 09:42:33.455 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: mkdirs 12
> > 09:42:33.461 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> > 09:42:33.468 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> > 09:42:33.469 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #4
> > 09:42:33.469 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #4
> > 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: has #4 from 10.241.3.35:24701
> > 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> > 09:42:33.479 [IPC Server handler 2 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.480 [IPC Server handler 2 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.480 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
> >     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
> >     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
> >     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
> > 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701
> > 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
> > 09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
> > 09:42:33.482 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> >
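The startFile DEBUG line above shows the actual cause: the client's create() call passes replication=0 ("replication=0, overwrite=true"), which the NameNode rejects because the required minimum is 1. A hedged guess at a workaround, not a fix confirmed anywhere in this thread: make sure no configuration on the test classpath sets replication to 0, and pin dfs.replication explicitly in the hbase-site.xml the test picks up (or equivalently call conf.setInt("dfs.replication", 1) on the HBaseTestingUtility configuration before startMiniCluster()):

```xml
<!-- hbase-site.xml fragment on the test classpath (hypothetical workaround):
     force a non-zero replication factor so FSUtils can write hbase.version -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```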
> > 09:42:33.483 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #5
> > 09:42:33.483 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #5
> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 5 on 41118: has #5 from 10.241.3.35:24701
> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* Namenode.delete: src=/user/mingtzha/hbase/hbase.version, recursive=false
> > 09:42:33.483 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* delete: /user/mingtzha/hbase/hbase.version
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.delete: /user/mingtzha/hbase/hbase.version
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedDelete: failed to remove /user/mingtzha/hbase/hbase.version because it does not exist
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: delete queueTime= 0 procesingTime= 1
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #5 from 10.241.3.35:24701
> > 09:42:33.484 [IPC Server handler 5 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #5 from 10.241.3.35:24701 Wrote 18 bytes.
> > 09:42:33.484 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #5
> > 09:42:33.484 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: delete 1
> > 09:42:33.486 [IPC Server handler 6 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.487 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> >
> > 09:42:33.491 [IPC Server handler 8 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.492 [main] [31mWARN [0;39m
> > [1;35morg.apache.hadoop.hbase.util.FSUtils [0;39m - Unable to create
> > version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase,
> > retrying: java.io.IOException: failed to create file
> > /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at
> >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> >
> > 09:42:33.492 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> > [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118
> > from mingtzha sending #9
> > 09:42:33.493 [pool-1-thread-1] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #9
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 9 on
> 41118:
> > has #9 from 10.241.3.35:24701
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> > as:mingtzha
> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
> > src=/user/mingtzha/hbase/hbase.version, recursive=false
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
> > /user/mingtzha/hbase/hbase.version
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> > for 'mingtzha'
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
> FSDirectory.delete:
> > /user/mingtzha/hbase/hbase.version
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
> > FSDirectory.unprotectedDelete: failed to remove
> > /user/mingtzha/hbase/hbase.version because it does not exist
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 0
> > procesingTime= 0
> > 09:42:33.493 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> > responding to #9 from 10.241.3.35:24701
> > 09:42:33.494 [IPC Server handler 9 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> > responding to #9 from 10.241.3.35:24701 Wrote 18 bytes.
> > 09:42:33.494 [IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #9
> > 09:42:33.494 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC
> [0;39m
> > - Call: delete 2
> > 09:42:33.494 [main] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.DFSClient
> > [0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> > 09:42:33.494 [main] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.DFSClient
> > [0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
> > chunkSize=516, chunksPerPacket=127, packetSize=65557
> > 09:42:33.494 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> > [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118
> > from mingtzha sending #10
> > 09:42:33.494 [pool-1-thread-1] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #10
> > 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 0 on
> 41118:
> > has #10 from 10.241.3.35:24701
> > 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> > as:mingtzha
> from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
> > /user/mingtzha/hbase/hbase.version for
> DFSClient_NONMAPREDUCE_-237185081_1
> > at 10.241.3.35
> > 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
> > src=/user/mingtzha/hbase/hbase.version,
> > holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
> > createParent=true, replication=0, overwrite=true, append=false
> > 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> > for 'mingtzha'
> > 09:42:33.495 [IPC Server handler 0 on 41118] [31mWARN [0;39m
> > [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: failed
> to
> > create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> > 09:42:33.495 [IPC Server handler 0 on 41118] [1;31mERROR [0;39m
> > [1;35mo.a.h.security.UserGroupInformation [0;39m -
> > PriviledgedActionException as:mingtzha cause:java.io.IOException: failed
> to
> > create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> > 09:42:33.496 [IPC Server handler 0 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 0 on
> 41118,
> > call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
> > DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
> > 10.241.3.35:24701: error: java.io.IOException: failed to create file
> > /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> > java.io.IOException: failed to create file
> > /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > ~[na:1.7.0_45]
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > ~[na:1.7.0_45]
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > ~[na:1.7.0_45]
> >     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at java.security.AccessController.doPrivileged(Native Method)
> > ~[na:1.7.0_45]
> >     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
> >     at
> >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> > ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > ~[hadoop-core-1.2.1.jar:na]
> > 09:42:33.496 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> > responding to #10 from 10.241.3.35:24701
> > 09:42:33.496 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> > responding to #10 from 10.241.3.35:24701 Wrote 1285 bytes.
> > 09:42:33.497 [IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #10
> > 09:42:33.497 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
> > HBaseTestSample.setup
> > 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Started
> > HBaseTestSample.testInsert
> > 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
> > HBaseTestSample.testInsert
> > FAILED CONFIGURATION: @BeforeMethod setup
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to
> > create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at
> >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> >
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
> >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at
> >
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >     at
> >
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
> >     at
> >
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
> >     at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
> >     at
> >
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
> >     at
> >
> >
> org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
> >     at
> >
> >
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
> >     at
> >
> >
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
> >     at
> >
> >
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
> >     at
> >
> >
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
> >     at
> >
> >
> com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at
> >
> >
> org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> >     at
> >
> >
> org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
> >     at
> >
> >
> org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at
> >
> >
> org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
> >     at
> > org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
> >     at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
> >     at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
> >     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> >     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> >     at
> >
> >
> org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> >     at
> org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> >     at org.testng.TestRunner.privateRun(TestRunner.java:767)
> >     at org.testng.TestRunner.run(TestRunner.java:617)
> >     at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> >     at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
> >     at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> >     at org.testng.SuiteRunner.run(SuiteRunner.java:240)
> >     at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> >     at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> >     at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> >     at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> >     at org.testng.TestNG.run(TestNG.java:1057)
> >     at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> >     at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> >     at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> >
> > SKIPPED CONFIGURATION: @AfterMethod destroy
> > SKIPPED: testInsert
> >
> > ===============================================
> >     Default test
> >     Tests run: 1, Failures: 0, Skips: 1
> >     Configuration Failures: 1, Skips: 1
> > ===============================================
> >
> > 09:42:33.535 [main] [34mINFO [0;39m [1;35mtest [0;39m - Finished Suite
> > [Default suite]
> >
> > ===============================================
> > Default suite
> > Total tests run: 1, Failures: 0, Skips: 1
> > Configuration Failures: 1, Skips: 1
> > ===============================================
> >
> > [TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
> > [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
> > [TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
> > [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429
> :
> > 4
> > ms
> > [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa:
> 8
> > ms
> > [TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706:
> 3
> > ms
> > 09:42:33.588 [Thread-0] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Starting clear of
> FileSystem
> > cache with 1 elements.
> > 09:42:33.588 [Thread-0] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client
> > [0;39m - Stopping client
> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/
> > 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> > slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining
> > connections 0
> > 09:42:33.589 [pool-1-thread-1] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server listener on 41118:
> > disconnecting client 10.241.3.35:24701. Number of active connections: 1
> > 09:42:33.689 [Thread-0] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Removing filesystem for
> > hdfs://slc05muw.us.**.com:41118
> > 09:42:33.689 [Thread-0] [39mDEBUG [0;39m
> > [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Done clearing cache
> >
> > The java code:
> >
> > import java.io.BufferedReader;
> > import java.io.InputStreamReader;
> >
> > import org.apache.hadoop.conf.Configuration;
> > import org.apache.hadoop.hbase.HBaseConfiguration;
> > import org.apache.hadoop.hbase.HBaseTestingUtility;
> > import org.apache.hadoop.hbase.client.Delete;
> > import org.apache.hadoop.hbase.client.Get;
> > import org.apache.hadoop.hbase.client.HTable;
> > import org.apache.hadoop.hbase.client.Put;
> > import org.apache.hadoop.hbase.client.Result;
> > import org.apache.hadoop.hbase.util.Bytes;
> > import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
> > import org.testng.annotations.AfterMethod;
> > import org.testng.annotations.BeforeMethod;
> > import org.testng.annotations.Test;
> >
> > public class HBaseTestSample {
> >
> >     private static HBaseTestingUtility utility;
> >     byte[] CF = "CF".getBytes();
> >     byte[] QUALIFIER = "CQ-1".getBytes();
> >
> >     @BeforeMethod
> >     public void setup() throws Exception {
> >         Configuration hbaseConf = HBaseConfiguration.create();
> >
> >         utility = new HBaseTestingUtility(hbaseConf);
> >
> >         Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
> >         BufferedReader br = new BufferedReader(new InputStreamReader(
> >                 process.getInputStream()));
> >         int rc = process.waitFor();
> >         if (rc == 0) {
> >             String umask = br.readLine();
> >
> >             int umaskBits = Integer.parseInt(umask, 8);
> >             int permBits = 0777 & ~umaskBits;
> >             String perms = Integer.toString(permBits, 8);
> >
> >             utility.getConfiguration().set("dfs.datanode.data.dir.perm",
> > perms);
> >         }
> >
> >         utility.startMiniCluster(0);
> >
> >     }
> >
> >     @Test
> >     public void testInsert() throws Exception {
> >         HTable table = utility.createTable(CF, QUALIFIER);
> >
> >         System.out.println("create table t-f");
> >
> >         // byte [] family, byte [] qualifier, byte [] value
> >         table.put(new Put("r".getBytes()).add("f".getBytes(),
> > "c1".getBytes(),
> >                 "v".getBytes()));
> >         Result result = table.get(new Get("r".getBytes()));
> >
> >         System.out.println(result.list().size());
> >
> >         table.delete(new Delete("r".getBytes()));
> >
> >         System.out.println("clean up");
> >
> >     }
> >
> >     @AfterMethod
> >     public void destroy() throws Exception {
> >         utility.cleanupTestDir();
> >     }
> > }
> >
> > hbase-site.xml:
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>
> >     <property>
> >         <name>hbase.rootdir</name>
> >         <value>file:///scratch/mingtzha/hbase/test</value>
> >     </property>
> >     <property>
> >         <name>hbase.tmp.dir</name>
> >         <value>/tmp/hbase</value>
> >     </property>
> >
> >     <property>
> >         <name>hbase.zookeeper.quorum</name>
> >         <value>localhost</value>
> >     </property>
> >     <property>
> >         <name>hbase.cluster.distributed</name>
> >         <value>true</value>
> >     </property>
> >     <property>
> >         <name>hbase.ipc.warn.response.time</name>
> >         <value>1</value>
> >     </property>
> >
> >     <!-- http://hbase.apache.org/book/ops.monitoring.html -->
> >     <!-- -1 => Disable logging by size -->
> >     <!-- <property> -->
> >     <!-- <name>hbase.ipc.warn.response.size</name> -->
> >     <!-- <value>-1</value> -->
> >     <!-- </property> -->
> > </configuration>
> >
> > pom.xml
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> > <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
> > http://www.w3.org/2001/XMLSchema-instance"
> >     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> > http://maven.apache.org/xsd/maven-4.0.0.xsd">
> >     <modelVersion>4.0.0</modelVersion>
> >     <parent>
> >         <groupId>com.**.sites.analytics.tests</groupId>
> >         <artifactId>integration-test</artifactId>
> >         <version>1.0-SNAPSHOT</version>
> >     </parent>
> >
> >     <artifactId>repository-itest</artifactId>
> >     <name>repository-itest</name>
> >
> >     <dependencies>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>test-integ</artifactId>
> >             <version>${project.version}</version>
> >             <scope>test</scope>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics.tests</groupId>
> >             <artifactId>itest-core</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>config-dev</artifactId>
> >             <version>${project.version}</version>
> >             <scope>test</scope>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>repository-core</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>repository-hbase</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >
> >         <dependency>
> >             <groupId>org.apache.hbase</groupId>
> >             <artifactId>hbase</artifactId>
> >             <version>0.94.21</version>
> >             <classifier>tests</classifier>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >         <dependency>
> >             <groupId>org.apache.hadoop</groupId>
> >             <artifactId>hadoop-core</artifactId>
> >             <version>1.2.1</version>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >         <dependency>
> >             <groupId>org.apache.hadoop</groupId>
> >             <artifactId>hadoop-test</artifactId>
> >             <version>1.2.1</version>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >     </dependencies>
> > </project>
> >
>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Matteo Bertozzi <th...@gmail.com>.
The error says: "failed to create file /user/mingtzha/hbase/hbase.version
on client 10.241.3.35. Requested replication 0 is less than the required
minimum 1"

The 0 comes from here: utility.startMiniCluster(0);

The argument to startMiniCluster is the number of slaves (datanodes /
region servers) to start, so passing 0 brings up a mini DFS with no
datanodes and a requested replication of 0, which the NameNode rejects.
Pass at least 1, e.g. utility.startMiniCluster(1), or use the no-arg
startMiniCluster().

Matteo
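
To make the failure mode concrete, here is a minimal, simplified sketch of
the acceptance check the NameNode is applying (hypothetical names, not the
HDFS source; the minimum replication defaults to 1 via
dfs.namenode.replication.min / dfs.replication.min):

```java
// Simplified sketch of the check behind "Requested replication 0 is less
// than the required minimum 1". Not the actual HDFS implementation.
public class ReplicationCheck {

    // The NameNode rejects file creation when the requested replication
    // factor is below the configured minimum (default minimum is 1).
    static boolean replicationAcceptable(int requested, int minReplication) {
        return requested >= minReplication;
    }

    public static void main(String[] args) {
        // startMiniCluster(0) starts zero datanodes, so the client ends up
        // requesting replication 0 and create() fails with IOException:
        System.out.println(replicationAcceptable(0, 1)); // prints false
        // With startMiniCluster(1) the requested replication is 1:
        System.out.println(replicationAcceptable(1, 1)); // prints true
    }
}
```

So in the test's setup(), utility.startMiniCluster(1) — or simply the
no-arg utility.startMiniCluster() — should let hbase.version be written.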



On Sat, Aug 2, 2014 at 5:51 PM, Mingtao Zhang <ma...@gmail.com>
wrote:

> HI,
>
> I am really stuck with this. Putting the stack trace, java file, hbase-site
> file and pom file here.
>
> I have 0 knowledge about hadoop and expecting it's transparent for my
> integration test :(.
>
> Thanks in advance!
>
> Best Regards,
> Mingtao
>
> The stack trace:
>
> 09:42:33.191 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/x.tld
> 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/c-1_0-rt.tld
> 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN,
> http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
> 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - Can't
> exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
> 09:42:33.195 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd -->
>
> jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
> 09:42:33.200 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/fmt.tld
> 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Container Server@9f51be6 +
> org.mortbay.jetty.servlet.HashSessionIdManager@445e0565 as
> sessionIdManager
> 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - Init
> SecureRandom.
> 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - started
> org.mortbay.jetty.servlet.HashSessionIdManager@445e0565
> 09:42:33.205 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - started
> org.mortbay.jetty.servlet.HashSessionManager@738f651f
> 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> filterNameMap={safety=safety, krb5Filter=krb5Filter}
> 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> pathFilters=[(F=safety,[/*],[],15)]
> 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> servletFilterMap=null
> 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> servletPathMap={*.XSP=jsp, *.jsp=jsp, /getimage=getimage,
> /cancelDelegationToken=cancelDelegationToken, *.JSPF=jsp, *.jspx=jsp,
> /listPaths/*=listPaths, /conf=conf, *.xsp=jsp, /=default, /fsck=fsck,
> /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /data/*=data,
> /contentSummary/*=contentSummary,
> /renewDelegationToken=renewDelegationToken,
> /getDelegationToken=getDelegationToken, /fileChecksum/*=checksum,
> *.JSP=jsp, /jmx=jmx}
> 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> servletNameMap={getDelegationToken=getDelegationToken, jsp=jsp, jmx=jmx,
> data=data, checksum=checksum, conf=conf, stacks=stacks, fsck=fsck,
> cancelDelegationToken=cancelDelegationToken, listPaths=listPaths,
> default=default, logLevel=logLevel, contentSummary=contentSummary,
> getimage=getimage, renewDelegationToken=renewDelegationToken}
> 09:42:33.206 [main] DEBUG org.mortbay.log - starting ServletHandler@3fd5e2ae
> 09:42:33.206 [main] DEBUG org.mortbay.log - started ServletHandler@3fd5e2ae
> 09:42:33.206 [main] DEBUG org.mortbay.log - starting SecurityHandler@51f35aea
> 09:42:33.207 [main] DEBUG org.mortbay.log - started SecurityHandler@51f35aea
> 09:42:33.207 [main] DEBUG org.mortbay.log - starting SessionHandler@73152e3f
> 09:42:33.207 [main] DEBUG org.mortbay.log - started SessionHandler@73152e3f
> 09:42:33.207 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.webapp.WebAppContext@7cbc11d {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> 09:42:33.207 [main] DEBUG org.mortbay.log - starting ErrorPageErrorHandler@4b38117e
> 09:42:33.207 [main] DEBUG org.mortbay.log - started ErrorPageErrorHandler@4b38117e
> 09:42:33.207 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter from sun.misc.Launcher$AppClassLoader@23137792
> 09:42:33.207 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter
> 09:42:33.208 [main] DEBUG org.mortbay.log - started krb5Filter
> 09:42:33.208 [main] DEBUG org.mortbay.log - loaded class org.apache.hadoop.http.HttpServer$QuotingInputFilter from sun.misc.Launcher$AppClassLoader@23137792
> 09:42:33.208 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> 09:42:33.210 [main] DEBUG org.mortbay.log - started safety
> 09:42:33.211 [main] DEBUG org.mortbay.log - started conf
> 09:42:33.211 [main] DEBUG org.mortbay.log - started cancelDelegationToken
> 09:42:33.211 [main] DEBUG org.mortbay.log - started contentSummary
> 09:42:33.211 [main] DEBUG org.mortbay.log - started checksum
> 09:42:33.211 [main] DEBUG org.mortbay.log - started data
> 09:42:33.211 [main] DEBUG org.mortbay.log - started fsck
> 09:42:33.211 [main] DEBUG org.mortbay.log - started getDelegationToken
> 09:42:33.212 [main] DEBUG org.mortbay.log - started getimage
> 09:42:33.212 [main] DEBUG org.mortbay.log - started listPaths
> 09:42:33.212 [main] DEBUG org.mortbay.log - started renewDelegationToken
> 09:42:33.212 [main] DEBUG org.mortbay.log - started stacks
> 09:42:33.212 [main] DEBUG org.mortbay.log - started jmx
> 09:42:33.212 [main] DEBUG org.mortbay.log - started logLevel
> 09:42:33.212 [main] DEBUG org.mortbay.log - loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@23137792
> 09:42:33.212 [main] DEBUG org.mortbay.log - Holding class org.apache.jasper.servlet.JspServlet
> 09:42:33.250 [main] DEBUG o.a.j.compiler.JspRuntimeContext - PWC5965: Parent class loader is: ContextLoader@WepAppsContext([]) / sun.misc.Launcher$AppClassLoader@23137792
> 09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5964: Scratch dir for the JSP engine is: /tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/jsp
> 09:42:33.252 [main] DEBUG org.apache.jasper.servlet.JspServlet - PWC5966: IMPORTANT: Do not modify the generated servlets
> 09:42:33.252 [main] DEBUG org.mortbay.log - started jsp
> 09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet
> 09:42:33.252 [main] DEBUG org.mortbay.log - loaded class org.mortbay.jetty.servlet.DefaultServlet from sun.misc.Launcher$AppClassLoader@23137792
> 09:42:33.252 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
> 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@576f8821
> 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.ResourceCache@5b525b5f
> 09:42:33.258 [main] DEBUG org.mortbay.log - resource base = file:/tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/webapp/
> 09:42:33.258 [main] DEBUG org.mortbay.log - started default
> 09:42:33.258 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.webapp.WebAppContext@7cbc11d {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> 09:42:33.258 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir} + ErrorHandler@7bece8cf as errorHandler
> 09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
> 09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
> 09:42:33.259 [main] DEBUG org.mortbay.log - servletFilterMap=null
> 09:42:33.259 [main] DEBUG org.mortbay.log - servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> 09:42:33.259 [main] DEBUG org.mortbay.log - servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> 09:42:33.259 [main] DEBUG org.mortbay.log - starting ServletHandler@cf7ea2e
> 09:42:33.259 [main] DEBUG org.mortbay.log - started ServletHandler@cf7ea2e
> 09:42:33.259 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> 09:42:33.259 [main] DEBUG org.mortbay.log - starting ErrorHandler@7bece8cf
> 09:42:33.259 [main] DEBUG org.mortbay.log - started ErrorHandler@7bece8cf
> 09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> 09:42:33.259 [main] DEBUG org.mortbay.log - started safety
> 09:42:33.259 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.AdminAuthorizedServlet
> 09:42:33.259 [main] DEBUG org.mortbay.log - started org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
> 09:42:33.259 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@4e048dc6 {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> 09:42:33.259 [main] DEBUG org.mortbay.log - Container org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static} + ErrorHandler@7ea8ad98 as errorHandler
> 09:42:33.259 [main] DEBUG org.mortbay.log - filterNameMap={safety=safety}
> 09:42:33.259 [main] DEBUG org.mortbay.log - pathFilters=[(F=safety,[/*],[],15)]
> 09:42:33.260 [main] DEBUG org.mortbay.log - servletFilterMap=null
> 09:42:33.260 [main] DEBUG org.mortbay.log - servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> 09:42:33.260 [main] DEBUG org.mortbay.log - servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> 09:42:33.260 [main] DEBUG org.mortbay.log - starting ServletHandler@23510a7e
> 09:42:33.260 [main] DEBUG org.mortbay.log - started ServletHandler@23510a7e
> 09:42:33.260 [main] DEBUG org.mortbay.log - starting org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> 09:42:33.260 [main] DEBUG org.mortbay.log - starting ErrorHandler@7ea8ad98
> 09:42:33.260 [main] DEBUG org.mortbay.log - started ErrorHandler@7ea8ad98
> 09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> 09:42:33.260 [main] DEBUG org.mortbay.log - started safety
> 09:42:33.260 [main] DEBUG org.mortbay.log - Holding class org.mortbay.jetty.servlet.DefaultServlet
> 09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.DefaultServlet-1788226358
> 09:42:33.260 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.servlet.Context@6e4f7806 {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> 09:42:33.260 [main] DEBUG org.mortbay.log - starting ContextHandlerCollection@5a4950dd
> 09:42:33.260 [main] DEBUG org.mortbay.log - started ContextHandlerCollection@5a4950dd
> 09:42:33.260 [main] DEBUG org.mortbay.log - starting Server@9f51be6
> 09:42:33.264 [main] DEBUG org.mortbay.log - started org.mortbay.jetty.nio.SelectChannelConnector$1@501a7f06
> 09:42:33.272 [main] INFO org.mortbay.log - Started SelectChannelConnector@localhost.localdomain:1543
> 09:42:33.273 [main] DEBUG org.mortbay.log - started SelectChannelConnector@localhost.localdomain:1543
> 09:42:33.273 [main] DEBUG org.mortbay.log - started Server@9f51be6
> 09:42:33.273 [main] INFO o.a.h.hdfs.server.namenode.NameNode - Web-server up at: localhost.localdomain:1543
> 09:42:33.274 [IPC Server listener on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server listener on 41118: starting
> 09:42:33.274 [IPC Server Responder] INFO org.apache.hadoop.ipc.Server - IPC Server Responder: starting
> 09:42:33.275 [IPC Server handler 0 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: starting
> 09:42:33.276 [IPC Server handler 1 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: starting
> 09:42:33.277 [IPC Server handler 3 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: starting
> 09:42:33.277 [IPC Server handler 4 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: starting
> 09:42:33.277 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: starting
> 09:42:33.281 [IPC Server handler 5 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 5 on 41118: starting
> 09:42:33.281 [IPC Server handler 6 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 6 on 41118: starting
> 09:42:33.281 [IPC Server handler 7 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 7 on 41118: starting
> 09:42:33.281 [IPC Server handler 8 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 8 on 41118: starting
> 09:42:33.283 [IPC Server handler 9 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 9 on 41118: starting
> 09:42:33.287 [main] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://slc05muw.us.**.com:41118
> 09:42:33.321 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> 09:42:33.328 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is60000ms.
> 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Use SIMPLE authentication for protocol ClientProtocol
> 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to slc05muw.us.**.com/10.241.3.35:41118
> 09:42:33.337 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #0
> 09:42:33.337 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: starting, having connections 1
> 09:42:33.337 [IPC Server listener on 41118] DEBUG org.apache.hadoop.ipc.Server - Server connection from 10.241.3.35:24701; # active connections: 1; # queued calls: 0
> 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - Successfully authorized org.apache.hadoop.hdfs.protocol.ClientProtocol-mingtzha
> 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #0
> 09:42:33.338 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: has #0 from 10.241.3.35:24701
> 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 1 procesingTime= 0
> 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701
> 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701 Wrote 22 bytes.
> 09:42:33.340 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #0
> 09:42:33.341 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 17
> 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> 09:42:33.343 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> 09:42:33.343 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #1
> 09:42:33.344 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #1
> 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: has #1 from 10.241.3.35:24701
> 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 0 procesingTime= 0
> 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701
> 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701 Wrote 22 bytes.
> 09:42:33.344 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #1
> 09:42:33.344 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 1
> 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> 09:42:33.345 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #2
> 09:42:33.345 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #2
> 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: has #2 from 10.241.3.35:24701
> 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning fetched groups for 'mingtzha'
> 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getDatanodeReport queueTime= 0 procesingTime= 11
> 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701
> 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
> 09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
> 09:42:33.357 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getDatanodeReport 12
> Cluster is active
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
> host.name=slc05muw.us.**.com
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.version=1.7.0_45
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.vendor=** Corporation
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>
> environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzha
/.m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
>
> 1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtworks
/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcl
ient/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activat
ion-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/r
epository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:os.version=2.6.39-300.20.1.el6uek.x86_64
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.name=mingtzha
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/mingtzha
> 09:42:33.377 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
> 09:42:33.380 [main] DEBUG o.a.z.s.persistence.FileTxnSnapLog - Opening datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0 snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
> 09:42:33.394 [main] INFO  o.a.zookeeper.server.ZooKeeperServer - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2 snapdir /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> 09:42:33.400 [main] INFO  o.a.z.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:51126
> 09:42:33.405 [main] INFO  o.a.z.s.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
> 09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO  o.a.z.server.NIOServerCnxnFactory - Accepted socket connection from /10.241.3.35:44625
> 09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO  o.a.zookeeper.server.NIOServerCnxn - Processing stat command from /10.241.3.35:44625
> 09:42:33.442 [Thread-25] INFO  o.a.zookeeper.server.NIOServerCnxn - Stat command output
> 09:42:33.442 [Thread-25] INFO  o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /10.241.3.35:44625 (no session established for client)
> 09:42:33.442 [main] INFO  o.a.h.h.z.MiniZooKeeperCluster - Started MiniZK Cluster and connect 1 ZK server on client port: 51126
> 09:42:33.443 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase: masked=rwxr-xr-x
> 09:42:33.443 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #3
> 09:42:33.444 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #3
> 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: has #3 from 10.241.3.35:24701
> 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.mkdirs: /user/mingtzha/hbase
> 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* mkdirs: /user/mingtzha/hbase
> 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user
> 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha
> 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha/hbase
> 09:42:33.448 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> 09:42:33.452 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> 09:42:33.455 [IPC Server handler 4 on 41118] INFO  o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null    perm=mingtzha:supergroup:rwxr-xr-x
> 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: mkdirs queueTime= 0 procesingTime= 10
> 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701
> 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
> 09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
> 09:42:33.455 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: mkdirs 12
> 09:42:33.461 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 09:42:33.468 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> 09:42:33.469 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #4
> 09:42:33.469 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - got #4
> 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: has #4 from 10.241.3.35:24701
> 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
> 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
> 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> 09:42:33.479 [IPC Server handler 2 on 41118] WARN  org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> 09:42:33.480 [IPC Server handler 2 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> 09:42:33.480 [IPC Server handler 2 on 41118] INFO  org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
> 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701
> 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
> 09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
> 09:42:33.482 [main] WARN  org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
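The NameNode keeps rejecting the hbase.version create call because the DFS client asks for a replication factor of 0 ("Requested replication 0 is less than the required minimum 1"; note replication=0 in the startFile line above). That value comes from the client configuration, so this usually points to a config file on the test classpath (for example the hbase-site.xml attached to this post) overriding dfs.replication to 0, which the mini DFS cluster will not accept. A minimal sketch of the fix, assuming the override lives in hbase-site.xml:

```xml
<!-- hbase-site.xml on the test classpath: the mini DFS cluster rejects a
     requested replication factor of 0, so either remove this property
     entirely (letting the default apply) or set it to at least 1. -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Alternatively, the test could force the value programmatically before calling startMiniCluster(), e.g. utility.getConfiguration().setInt("dfs.replication", 1), where utility stands for the HBaseTestingUtility instance (a hypothetical variable name here).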
> [... trimmed: the same delete/create retry cycle (IPC calls #5 through #10) repeats, each create failing with the identical "Requested replication 0 is less than the required minimum 1" IOException and stack trace as above ...]
> has #10 from 10.241.3.35:24701
> 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
> /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1
> at 10.241.3.35
> 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
> src=/user/mingtzha/hbase/hbase.version,
> holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
> createParent=true, replication=0, overwrite=true, append=false
> 09:42:33.495 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> for 'mingtzha'
> 09:42:33.495 [IPC Server handler 0 on 41118] [31mWARN [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.495 [IPC Server handler 0 on 41118] [1;31mERROR [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m -
> PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.496 [IPC Server handler 0 on 41118] [34mINFO [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 0 on 41118,
> call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
> DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
> 10.241.3.35:24701: error: java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
> 09:42:33.496 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #10 from 10.241.3.35:24701
> 09:42:33.496 [IPC Server handler 0 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #10 from 10.241.3.35:24701 Wrote 1285 bytes.
> 09:42:33.497 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #10
> 09:42:33.497 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
> HBaseTestSample.setup
> 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Started
> HBaseTestSample.testInsert
> 09:42:33.506 [main] [34mINFO [0;39m [1;35mtest [0;39m -  > Finished
> HBaseTestSample.testInsert
> FAILED CONFIGURATION: @BeforeMethod setup
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.create(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.create(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
>     at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
>     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
>     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
>     at com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
>     at org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
>     at org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
>     at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
>     at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
>     at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
>     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
>     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
>     at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
>     at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
>     at org.testng.TestRunner.privateRun(TestRunner.java:767)
>     at org.testng.TestRunner.run(TestRunner.java:617)
>     at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
>     at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
>     at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
>     at org.testng.SuiteRunner.run(SuiteRunner.java:240)
>     at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
>     at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
>     at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
>     at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
>     at org.testng.TestNG.run(TestNG.java:1057)
>     at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
>     at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
>     at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
>
> SKIPPED CONFIGURATION: @AfterMethod destroy
> SKIPPED: testInsert
>
> ===============================================
>     Default test
>     Tests run: 1, Failures: 0, Skips: 1
>     Configuration Failures: 1, Skips: 1
> ===============================================
>
> 09:42:33.535 [main] [34mINFO [0;39m [1;35mtest [0;39m - Finished Suite
> [Default suite]
>
> ===============================================
> Default suite
> Total tests run: 1, Failures: 0, Skips: 1
> Configuration Failures: 1, Skips: 1
> ===============================================
>
> [TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
> [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
> [TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
> [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429: 4 ms
> [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa: 8 ms
> [TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706: 3 ms
> 09:42:33.588 [Thread-0] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Starting clear of FileSystem
> cache with 1 elements.
> 09:42:33.588 [Thread-0] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - Stopping client
> 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
> 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining
> connections 0
> 09:42:33.589 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server listener on 41118:
> disconnecting client 10.241.3.35:24701. Number of active connections: 1
> 09:42:33.689 [Thread-0] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Removing filesystem for
> hdfs://slc05muw.us.**.com:41118
> 09:42:33.689 [Thread-0] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.fs.FileSystem [0;39m - Done clearing cache
>
> The java code:
>
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HBaseTestingUtility;
> import org.apache.hadoop.hbase.client.Delete;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
> import org.testng.annotations.AfterMethod;
> import org.testng.annotations.BeforeMethod;
> import org.testng.annotations.Test;
>
> public class HBaseTestSample {
>
>     private static HBaseTestingUtility utility;
>     byte[] CF = "CF".getBytes();
>     byte[] QUALIFIER = "CQ-1".getBytes();
>
>     @BeforeMethod
>     public void setup() throws Exception {
>         Configuration hbaseConf = HBaseConfiguration.create();
>
>         utility = new HBaseTestingUtility(hbaseConf);
>
>         Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
>         BufferedReader br = new BufferedReader(new InputStreamReader(
>                 process.getInputStream()));
>         int rc = process.waitFor();
>         if (rc == 0) {
>             String umask = br.readLine();
>
>             int umaskBits = Integer.parseInt(umask, 8);
>             int permBits = 0777 & ~umaskBits;
>             String perms = Integer.toString(permBits, 8);
>
>             utility.getConfiguration().set("dfs.datanode.data.dir.perm",
> perms);
>         }
>
>         utility.startMiniCluster(0);
>
>     }
>
>     @Test
>     public void testInsert() throws Exception {
>         HTable table = utility.createTable(CF, QUALIFIER);
>
>         System.out.println("create table t-f");
>
>         // byte [] family, byte [] qualifier, byte [] value
>         table.put(new Put("r".getBytes()).add("f".getBytes(),
> "c1".getBytes(),
>                 "v".getBytes()));
>         Result result = table.get(new Get("r".getBytes()));
>
>         System.out.println(result.list().size());
>
>         table.delete(new Delete("r".getBytes()));
>
>         System.out.println("clean up");
>
>     }
>
>     @AfterMethod
>     public void destroy() throws Exception {
>         utility.cleanupTestDir();
>     }
> }
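
As an aside, the umask-to-permissions arithmetic in setup() (parse the shell's umask as octal, clear those bits from 0777, render the result back as octal) is plain bit math and can be checked in isolation. A small self-contained sketch; the class and method names here are illustrative, not part of the original test:

```java
public class UmaskPerms {

    // Same bit math as in setup(): parse the umask string as octal,
    // clear those bits from 0777, and format the result as octal.
    static String permsFromUmask(String umask) {
        int umaskBits = Integer.parseInt(umask, 8);
        int permBits = 0777 & ~umaskBits;
        return Integer.toString(permBits, 8);
    }

    public static void main(String[] args) {
        // A typical umask of 022 yields directory perms 755.
        System.out.println(permsFromUmask("022"));
    }
}
```

The resulting string is what setup() feeds into dfs.datanode.data.dir.perm so the mini DFS datanode's directory permission check matches the process umask.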
>
> hbase-site.xml:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>     <property>
>         <name>hbase.rootdir</name>
>         <value>file:///scratch/mingtzha/hbase/test</value>
>     </property>
>     <property>
>         <name>hbase.tmp.dir</name>
>         <value>/tmp/hbase</value>
>     </property>
>
>     <property>
>         <name>hbase.zookeeper.quorum</name>
>         <value>localhost</value>
>     </property>
>     <property>
>         <name>hbase.cluster.distributed</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>hbase.ipc.warn.response.time</name>
>         <value>1</value>
>     </property>
>
>     <!-- http://hbase.apache.org/book/ops.monitoring.html -->
>     <!-- -1 => Disable logging by size -->
>     <!-- <property> -->
>     <!-- <name>hbase.ipc.warn.response.size</name> -->
>     <!-- <value>-1</value> -->
>     <!-- </property> -->
> </configuration>
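
For reference: since HBaseTestingUtility spins up its own mini DFS and manages hbase.rootdir itself, a test-side hbase-site.xml can usually stay much smaller than the one above. A sketch of a minimal test config (illustrative, not a verified drop-in); note it deliberately omits hbase.rootdir and hbase.cluster.distributed so both keep the defaults the in-process mini cluster expects:

```xml
<?xml version="1.0"?>
<configuration>
    <!-- No hbase.rootdir and no hbase.cluster.distributed here:
         HBaseTestingUtility creates the root dir on its own mini DFS,
         and the default cluster mode (standalone, i.e.
         hbase.cluster.distributed=false) is what the in-process
         mini cluster expects. -->
    <property>
        <name>hbase.tmp.dir</name>
        <value>/tmp/hbase</value>
    </property>
</configuration>
```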
>
> pom.xml
>
> <?xml version="1.0" encoding="UTF-8"?>
> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
> http://www.w3.org/2001/XMLSchema-instance"
>     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>     <modelVersion>4.0.0</modelVersion>
>     <parent>
>         <groupId>com.**.sites.analytics.tests</groupId>
>         <artifactId>integration-test</artifactId>
>         <version>1.0-SNAPSHOT</version>
>     </parent>
>
>     <artifactId>repository-itest</artifactId>
>     <name>repository-itest</name>
>
>     <dependencies>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>test-integ</artifactId>
>             <version>${project.version}</version>
>             <scope>test</scope>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics.tests</groupId>
>             <artifactId>itest-core</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>config-dev</artifactId>
>             <version>${project.version}</version>
>             <scope>test</scope>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>repository-core</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>repository-hbase</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>
>         <dependency>
>             <groupId>org.apache.hbase</groupId>
>             <artifactId>hbase</artifactId>
>             <version>0.94.21</version>
>             <classifier>tests</classifier>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.hadoop</groupId>
>             <artifactId>hadoop-core</artifactId>
>             <version>1.2.1</version>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.hadoop</groupId>
>             <artifactId>hadoop-test</artifactId>
>             <version>1.2.1</version>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>     </dependencies>
> </project>
>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Good to know. Thank you!

Mingtao
Sent from iPhone

> On Aug 2, 2014, at 4:07 PM, Ted Yu <yu...@gmail.com> wrote:
> 
> If you use http://search-hadoop.com/, you would potentially find answers to
> the issue(s) you face.
> 
> Cheers
> 
> 
> On Sat, Aug 2, 2014 at 12:47 PM, Mingtao Zhang <ma...@gmail.com>
> wrote:
> 
>> Thank you! Ted and Matteo.
>> 
>> It works now.
>> 
>> Is HBaseTestingUtility documented anywhere?
>> 
>> Best Regards,
>> Mingtao
>> 
>> 
>> On Sat, Aug 2, 2014 at 12:05 PM, Mingtao Zhang <ma...@gmail.com>
>> wrote:
>> 
>>> oh ... does excluding guava and adding an old dependency work, like 14.0.1?
>>> (I did this, but I am facing some other problem for now)
>>> 
>>> I don't think our build environment allows me to build hbase from source
>> :(
>>> 
>>> Mingtao
>>> 
>>> 
>>>> On Sat, Aug 2, 2014 at 12:00 PM, Ted Yu <yu...@gmail.com> wrote:
>>>> 
>>>> You don't have the fix.
>>>> 
>>>> You can apply the patch yourself and rebuild hbase.
>>>> 
>>>> Cheers
>>>> 
>>>> On Aug 2, 2014, at 11:41 AM, Mingtao Zhang <ma...@gmail.com>
>>>> wrote:
>>>> 
>>>>> Ted, thank you!
>>>>> 
>>>>> I made the change accordingly.
>>>>> 
>>>>> I am trying to start the zookeeper/hbase cluster only, using the
>>>>> following (some other code omitted):
>>>>> 
>>>>>       MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
>>>>>       zkCluster.setDefaultClientPort(2555);
>>>>>       zkCluster.setTickTime(18000);
>>>>>       File zkDir = new File(utility.getClusterTestDir().toString());
>>>>>       zkCluster.startup(zkDir);
>>>>>       utility.setZkCluster(zkCluster);
>>>>>       utility.startMiniHBaseCluster(1, 1);
>>>>> 
>>>>> I could observe https://issues.apache.org/jira/browse/HBASE-10174
>>>>> 
>>>>> I am in a Maven environment with 0.94.15; do I have the patch or not?
>>>>> 
>>>>> 
>>>>> Best Regards,
>>>>> Mingtao
>>>>> 
>>>>> 
>>>>>> On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:
>>>>>> 
>>>>>> In your config, I see:
>>>>>>   <property>
>>>>>>       <name>hbase.rootdir</name>
>>>>>>       <value>file:///scratch/mingtzha/hbase/test</value>
>>>>>>   </property>
>>>>>>   <property>
>>>>>>       <name>hbase.cluster.distributed</name>
>>>>>>       <value>true</value>
>>>>>>   </property>
>>>>>> The default value for hbase.cluster.distributed is false (for standalone
>>>>>> mode).
>>>>>> 
>>>>>> Since your code is for a test, you should keep hbase.cluster.distributed
>>>>>> as false.
>>>>>> 
>>>>>> Cheers
>>>>>> 
>>>>>> 
>>>>>> On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <
>> mail2mingtao@gmail.com>
>>>>>> wrote:
>>>>>> 
>>>>>>> HI,
>>>>>>> 
>>>>>>> I am really stuck with this. Putting the stack trace, java file,
>>>>>> hbase-site
>>>>>>> file and pom file here.
>>>>>>> 
>>>>>>> I have 0 knowledge about hadoop and expecting it's transparent for
>> my
>>>>>>> integration test :(.
>>>>>>> 
>>>>>>> Thanks in advance!
>>>>>>> 
>>>>>>> Best Regards,
>>>>>>> Mingtao
>>>>>>> 
>>>>>>> The stack trace:
>>>>>>> 
>>>>>>> + ErrorHandler@7bece8cf as errorHandler
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> filterNameMap={safety=safety}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> pathFilters=[(F=safety,[/*],[],15)]
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> servletFilterMap=null
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting ServletHandler@cf7ea2e
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> ServletHandler@cf7ea2e
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting org.mortbay.jetty.servlet.Context@4e048dc6
>> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting ErrorHandler@7bece8cf
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> ErrorHandler@7bece8cf
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> Holding
>>>>>>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> safety
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> Holding
>>>>>>> class org.apache.hadoop.http.AdminAuthorizedServlet
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> org.mortbay.jetty.servlet.Context@4e048dc6
>> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> Container org.mortbay.jetty.servlet.Context@6e4f7806
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>>>>>>> + ErrorHandler@7ea8ad98 as errorHandler
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> filterNameMap={safety=safety}
>>>>>>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> pathFilters=[(F=safety,[/*],[],15)]
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> servletFilterMap=null
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>> servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting ServletHandler@23510a7e
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> ServletHandler@23510a7e
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting org.mortbay.jetty.servlet.Context@6e4f7806
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting ErrorHandler@7ea8ad98
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> ErrorHandler@7ea8ad98
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> Holding
>>>>>>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> safety
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> Holding
>>>>>>> class org.mortbay.jetty.servlet.DefaultServlet
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> org.mortbay.jetty.servlet.DefaultServlet-1788226358
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> org.mortbay.jetty.servlet.Context@6e4f7806
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting ContextHandlerCollection@5a4950dd
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> ContextHandlerCollection@5a4950dd
>>>>>>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>>> starting Server@9f51be6
>>>>>>> 09:42:33.264 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>>>>>> started
>>>>>>> or
>> 

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Ted Yu <yu...@gmail.com>.
If you use http://search-hadoop.com/, you would potentially find answers to
the issue(s) you face.

Cheers


On Sat, Aug 2, 2014 at 12:47 PM, Mingtao Zhang <ma...@gmail.com>
wrote:

> Thank you! Ted and Matteo.
>
> It works now.
>
> Do we have documentation somewhere for HBaseTestingUtility?
>
> Best Regards,
> Mingtao
>
>
> On Sat, Aug 2, 2014 at 12:05 PM, Mingtao Zhang <ma...@gmail.com>
> wrote:
>
> > oh ... does excluding guava and adding an older dependency, like 14.0.1,
> > work? (I did this, but I am now facing another problem)
> >
> > I don't think our build environment allows me to build hbase from source
> :(
> >
> > Mingtao
> >
> >
> > On Sat, Aug 2, 2014 at 12:00 PM, Ted Yu <yu...@gmail.com> wrote:
> >
> >> You don't have the fix.
> >>
> >> You can apply the patch yourself and rebuild hbase.
> >>
> >> Cheers
> >>
> >> On Aug 2, 2014, at 11:41 AM, Mingtao Zhang <ma...@gmail.com>
> >> wrote:
> >>
> >> > Ted, thank you!
> >> >
> >> > I made the change accordingly.
> >> >
> >> > I am trying to start only the zookeeper/hbase cluster using the
> >> > following (some other code omitted):
> >> >
> >> >        MiniZooKeeperCluster zkCluster = new
> MiniZooKeeperCluster(conf);
> >> >        zkCluster.setDefaultClientPort(2555);
> >> >        zkCluster.setTickTime(18000);
> >> >        File zkDir = new File(utility.getClusterTestDir().toString());
> >> >        zkCluster.startup(zkDir);
> >> >        utility.setZkCluster(zkCluster);
> >> >        utility.startMiniHBaseCluster(1, 1);
> >> >
> >> > I could observe https://issues.apache.org/jira/browse/HBASE-10174
> >> >
> >> > I am in a maven environment with 0.94.15, do I have the patch or not?
> >> >
> >> >
> >> > Best Regards,
> >> > Mingtao
> >> >
> >> >
> >> > On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:
> >> >
> >> >> In your config, I see:
> >> >>    <property>
> >> >>        <name>hbase.rootdir</name>
> >> >>        <value>file:///scratch/mingtzha/hbase/test</value>
> >> >>    </property>
> >> >>    <property>
> >> >>        <name>hbase.cluster.distributed</name>
> >> >>        <value>true</value>
> >> >>    </property>
> >> >> The default value for hbase.cluster.distributed is false (for
> >> standalone
> >> >> mode).
> >> >>
> >> >> Since your code is for test, you should keep
> hbase.cluster.distributed
> >> as
> >> >> false.
> >> >>
> >> >> Cheers
> >> >>
> >> >>
> >> >> On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <
> mail2mingtao@gmail.com>
> >> >> wrote:
> >> >>
> >> >>> HI,
> >> >>>
> >> >>> I am really stuck with this. Putting the stack trace, java file,
> >> >> hbase-site
> >> >>> file and pom file here.
> >> >>>
> >> >>> I have 0 knowledge about hadoop and expecting it's transparent for
> my
> >> >>> integration test :(.
> >> >>>
> >> >>> Thanks in advance!
> >> >>>
> >> >>> Best Regards,
> >> >>> Mingtao
> >> >>>
> >> >>> The stack trace:
> >> >>>
> >> >>> [quoted DEBUG log snipped; it duplicates the trace in the original post above]
>
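
For readers landing on this thread: pulling the advice above together, the quoted snippet with `hbase.cluster.distributed` left at its default of false looks roughly like the following. This is a sketch against the HBase 0.94.x API, not a verified recipe; the class name is invented, and the client port and tick time are simply the values from the thread.

```java
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;

public class MiniHBaseClusterSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Per Ted's advice: test clusters should run standalone, so keep the default.
        conf.setBoolean("hbase.cluster.distributed", false);

        HBaseTestingUtility utility = new HBaseTestingUtility(conf);

        // Start ZooKeeper explicitly, as in the thread, then hand it to the utility.
        MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
        zkCluster.setDefaultClientPort(2555);
        zkCluster.setTickTime(18000);
        File zkDir = new File(utility.getClusterTestDir().toString());
        zkCluster.startup(zkDir);
        utility.setZkCluster(zkCluster);

        // One master, one region server.
        utility.startMiniHBaseCluster(1, 1);
        try {
            // ... run test logic against utility.getConfiguration() ...
        } finally {
            utility.shutdownMiniHBaseCluster();
            zkCluster.shutdown();
        }
    }
}
```

If you do not need a custom ZooKeeper port, `utility.startMiniCluster()` alone does all of the above in one call.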

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Thank you! Ted and Matteo.

It works now.

Do we have documentation somewhere for HBaseTestingUtility?

Best Regards,
Mingtao


On Sat, Aug 2, 2014 at 12:05 PM, Mingtao Zhang <ma...@gmail.com>
wrote:

> oh ... does excluding guava and adding an older dependency, like 14.0.1,
> work? (I did this, but I am now facing another problem)
>
> I don't think our build environment allows me to build hbase from source :(
>
> Mingtao
>
>
> On Sat, Aug 2, 2014 at 12:00 PM, Ted Yu <yu...@gmail.com> wrote:
>
>> You don't have the fix.
>>
>> You can apply the patch yourself and rebuild hbase.
>>
>> Cheers
>>
>> On Aug 2, 2014, at 11:41 AM, Mingtao Zhang <ma...@gmail.com>
>> wrote:
>>
>> > Ted, thank you!
>> >
>> > I made the change accordingly.
>> >
>> > I am trying to start only the zookeeper/hbase cluster using the
>> > following (some other code omitted):
>> >
>> >        MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
>> >        zkCluster.setDefaultClientPort(2555);
>> >        zkCluster.setTickTime(18000);
>> >        File zkDir = new File(utility.getClusterTestDir().toString());
>> >        zkCluster.startup(zkDir);
>> >        utility.setZkCluster(zkCluster);
>> >        utility.startMiniHBaseCluster(1, 1);
>> >
>> > I could observe https://issues.apache.org/jira/browse/HBASE-10174
>> >
>> > I am in a maven environment with 0.94.15, do I have the patch or not?
>> >
>> >
>> > Best Regards,
>> > Mingtao
>> >
>> >
>> > On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:
>> >
>> >> In your config, I see:
>> >>    <property>
>> >>        <name>hbase.rootdir</name>
>> >>        <value>file:///scratch/mingtzha/hbase/test</value>
>> >>    </property>
>> >>    <property>
>> >>        <name>hbase.cluster.distributed</name>
>> >>        <value>true</value>
>> >>    </property>
>> >> The default value for hbase.cluster.distributed is false (for
>> standalone
>> >> mode).
>> >>
>> >> Since your code is for test, you should keep hbase.cluster.distributed
>> as
>> >> false.
>> >>
>> >> Cheers
>> >>
>> >>
>> >> On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <ma...@gmail.com>
>> >> wrote:
>> >>
>> >>> HI,
>> >>>
>> >>> I am really stuck with this. Putting the stack trace, java file,
>> >> hbase-site
>> >>> file and pom file here.
>> >>>
>> >>> I have 0 knowledge about hadoop and expecting it's transparent for my
>> >>> integration test :(.
>> >>>
>> >>> Thanks in advance!
>> >>>
>> >>> Best Regards,
>> >>> Mingtao
>> >>>
>> >>> The stack trace:
>> >>>
>> >>> [quoted DEBUG log snipped; it duplicates the trace in the original post above]
>> >> loaded
>> >>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter from
>> >>> sun.misc.Launcher$AppClassLoader@23137792
>> >>> 09:42:33.208 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter
>> >>> 09:42:33.210 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> safety
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> conf
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> cancelDelegationToken
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> contentSummary
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> checksum
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> data
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> fsck
>> >>> 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> getDelegationToken
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> getimage
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> listPaths
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> renewDelegationToken
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> stacks
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> jmx
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> logLevel
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> loaded
>> >>> class org.apache.jasper.servlet.JspServlet from
>> >>> sun.misc.Launcher$AppClassLoader@23137792
>> >>> 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.apache.jasper.servlet.JspServlet
>> >>> 09:42:33.250 [main] [39mDEBUG [0;39m
>> >> [1;35mo.a.j.compiler.JspRuntimeContext
>> >>> [0;39m - PWC5965: Parent class loader is: ContextLoader@WepAppsContext
>> >> ([])
>> >>> / sun.misc.Launcher$AppClassLoader@23137792
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m
>> >>> [1;35morg.apache.jasper.servlet.JspServlet [0;39m - PWC5964: Scratch
>> dir
>> >>> for the JSP engine is:
>> >>> /tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/jsp
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m
>> >>> [1;35morg.apache.jasper.servlet.JspServlet [0;39m - PWC5966:
>> IMPORTANT:
>> >> Do
>> >>> not modify the generated servlets
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> jsp
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> loaded
>> >>> class org.mortbay.jetty.servlet.DefaultServlet
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> loaded
>> >>> class org.mortbay.jetty.servlet.DefaultServlet from
>> >>> sun.misc.Launcher$AppClassLoader@23137792
>> >>> 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.mortbay.jetty.servlet.DefaultServlet
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@576f8821
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.ResourceCache@5b525b5f
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> resource base =
>> >>> file:/tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/webapp/
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> default
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.webapp.WebAppContext@7cbc11d
>> >>
>> {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
>> >>> 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> Container org.mortbay.jetty.servlet.Context@4e048dc6
>> >>
>> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
>> >>> + ErrorHandler@7bece8cf as errorHandler
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> filterNameMap={safety=safety}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> pathFilters=[(F=safety,[/*],[],15)]
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> servletFilterMap=null
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>
>> servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>
>> servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting ServletHandler@cf7ea2e
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> ServletHandler@cf7ea2e
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting org.mortbay.jetty.servlet.Context@4e048dc6
>> >>
>> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting ErrorHandler@7bece8cf
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> ErrorHandler@7bece8cf
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> safety
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.apache.hadoop.http.AdminAuthorizedServlet
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.servlet.Context@4e048dc6
>> >>
>> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> Container org.mortbay.jetty.servlet.Context@6e4f7806
>> >>
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>> >>> + ErrorHandler@7ea8ad98 as errorHandler
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> filterNameMap={safety=safety}
>> >>> 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> pathFilters=[(F=safety,[/*],[],15)]
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> servletFilterMap=null
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>>
>> servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>
>> servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting ServletHandler@23510a7e
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> ServletHandler@23510a7e
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting org.mortbay.jetty.servlet.Context@6e4f7806
>> >>
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting ErrorHandler@7ea8ad98
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> ErrorHandler@7ea8ad98
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.apache.hadoop.http.HttpServer$QuotingInputFilter
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> safety
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> Holding
>> >>> class org.mortbay.jetty.servlet.DefaultServlet
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.servlet.DefaultServlet-1788226358
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> org.mortbay.jetty.servlet.Context@6e4f7806
>> >>
>> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting ContextHandlerCollection@5a4950dd
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> ContextHandlerCollection@5a4950dd
>> >>> 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >>> starting Server@9f51be6
>> >>> 09:42:33.264 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
>> >> started
>> >>> or
>>
>
>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Oh ... would excluding Guava and adding an older version, like 14.0.1, work? (I
did this, but I am now facing some other problems)

I don't think our build environment allows me to build HBase from source :(

Mingtao
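[For anyone hitting the same Guava conflict: a Maven exclusion plus an explicit
older Guava would look roughly like the sketch below. The hbase coordinates and
version numbers here are assumptions based on this thread (0.94.15, Guava
14.0.1); adjust them to your actual pom.]

```xml
<!-- Sketch only: exclude the Guava that hbase pulls in transitively,
     then pin an older release such as 14.0.1. Adjust groupId,
     artifactId and versions to match your actual pom. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.15</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>14.0.1</version>
</dependency>
```

Run `mvn dependency:tree -Dincludes=com.google.guava` afterwards to confirm
only the pinned version is resolved.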


On Sat, Aug 2, 2014 at 12:00 PM, Ted Yu <yu...@gmail.com> wrote:

> You don't have the fix.
>
> You can apply the patch yourself and rebuild hbase.
>
> Cheers
>
> On Aug 2, 2014, at 11:41 AM, Mingtao Zhang <ma...@gmail.com> wrote:
>
> > Ted, thank you!
> >
> > I made the change accordingly.
> >
> > I am trying to start only the ZooKeeper/HBase cluster, using the following
> > (some other code omitted):
> >
> >        MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
> >        zkCluster.setDefaultClientPort(2555);
> >        zkCluster.setTickTime(18000);
> >        File zkDir = new File(utility.getClusterTestDir().toString());
> >        zkCluster.startup(zkDir);
> >        utility.setZkCluster(zkCluster);
> >        utility.startMiniHBaseCluster(1, 1);
> >
> > I could observe https://issues.apache.org/jira/browse/HBASE-10174
> >
> > I am in a Maven environment with 0.94.15; do I have the patch or not?
> >
> >
> > Best Regards,
> > Mingtao
> >
> >
> > On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:
> >
> >> In your config, I see:
> >>    <property>
> >>        <name>hbase.rootdir</name>
> >>        <value>file:///scratch/mingtzha/hbase/test</value>
> >>    </property>
> >>    <property>
> >>        <name>hbase.cluster.distributed</name>
> >>        <value>true</value>
> >>    </property>
> >> The default value for hbase.cluster.distributed is false (for standalone
> >> mode).
> >>
> >> Since your code is for test, you should keep hbase.cluster.distributed
> as
> >> false.
> >>
> >> Cheers
> >>
> >>
> >> On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <ma...@gmail.com>
> >> wrote:
> >>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Ted Yu <yu...@gmail.com>.
You don't have the fix. 

You can apply the patch yourself and rebuild hbase. 

Cheers
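[As a reference point, the standalone-mode advice quoted in this thread amounts
to an hbase-site.xml fragment like the sketch below. The rootdir path is taken
from the poster's config and is a placeholder; substitute your own test
directory.]

```xml
<!-- Minimal standalone-mode sketch. hbase.cluster.distributed defaults
     to false, so the property can simply be removed; it is shown here
     only to make the intent explicit. The rootdir path is a placeholder. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///scratch/mingtzha/hbase/test</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
```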

On Aug 2, 2014, at 11:41 AM, Mingtao Zhang <ma...@gmail.com> wrote:

> Ted, thank you!
> 
> I made the change accordingly.
> 
> I am trying to start only the ZooKeeper/HBase cluster, using the following
> (some other code omitted):
> 
>        MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
>        zkCluster.setDefaultClientPort(2555);
>        zkCluster.setTickTime(18000);
>        File zkDir = new File(utility.getClusterTestDir().toString());
>        zkCluster.startup(zkDir);
>        utility.setZkCluster(zkCluster);
>        utility.startMiniHBaseCluster(1, 1);
> 
> I could observe https://issues.apache.org/jira/browse/HBASE-10174
> 
> I am in a Maven environment with 0.94.15; do I have the patch or not?
> 
> 
> Best Regards,
> Mingtao
> 
> 
> On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:
> 
>> In your config, I see:
>>    <property>
>>        <name>hbase.rootdir</name>
>>        <value>file:///scratch/mingtzha/hbase/test</value>
>>    </property>
>>    <property>
>>        <name>hbase.cluster.distributed</name>
>>        <value>true</value>
>>    </property>
>> The default value for hbase.cluster.distributed is false (for standalone
>> mode).
>> 
>> Since your code is for test, you should keep hbase.cluster.distributed as
>> false.
>> 
>> Cheers
>> 
>> 

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Mingtao Zhang <ma...@gmail.com>.
Ted, thank you!

I made the change accordingly.
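Concretely, hbase-site.xml now keeps hbase.cluster.distributed at its standalone default:

```xml
    <property>
        <name>hbase.rootdir</name>
        <value>file:///scratch/mingtzha/hbase/test</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
```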

I am trying to start only the ZooKeeper and HBase clusters, using the
following (other code omitted):

        // Start a standalone mini ZooKeeper cluster on a custom port,
        // then hand it to the testing utility before starting HBase.
        MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster(conf);
        zkCluster.setDefaultClientPort(2555);
        zkCluster.setTickTime(18000);
        File zkDir = new File(utility.getClusterTestDir().toString());
        zkCluster.startup(zkDir);
        utility.setZkCluster(zkCluster);
        utility.startMiniHBaseCluster(1, 1); // 1 master, 1 region server

With this I am seeing the problem described in
https://issues.apache.org/jira/browse/HBASE-10174

I am using 0.94.15 from Maven; does that release include the patch?


Best Regards,
Mingtao


On Sat, Aug 2, 2014 at 11:02 AM, Ted Yu <yu...@gmail.com> wrote:

> In your config, I see:
>     <property>
>         <name>hbase.rootdir</name>
>         <value>file:///scratch/mingtzha/hbase/test</value>
>     </property>
>     <property>
>         <name>hbase.cluster.distributed</name>
>         <value>true</value>
>     </property>
> The default value for hbase.cluster.distributed is false (for standalone
> mode).
>
> Since your code is for test, you should keep hbase.cluster.distributed as
> false.
>
> Cheers
>
>
> On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <ma...@gmail.com>
> wrote:
>
> > HI,
> >
> > I am really stuck with this. Putting the stack trace, java file,
> hbase-site
> > file and pom file here.
> >
> > I have 0 knowledge about hadoop and expecting it's transparent for my
> > integration test :(.
> >
> > Thanks in advance!
> >
> > Best Regards,
> > Mingtao
> >
> > The stack trace:
> >
> > 09:42:33.191 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/x.tld
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/c-1_0-rt.tld
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN,
> > http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
> > 09:42:33.194 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - Can't
> > exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
> > 09:42:33.195 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd -->
> >
> >
> jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
> > 09:42:33.200 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> TLD=jar:file:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar!/META-INF/fmt.tld
> > 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Container Server@9f51be6 +
> > org.mortbay.jetty.servlet.HashSessionIdManager@445e0565 as
> > sessionIdManager
> > 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m - Init
> > SecureRandom.
> > 09:42:33.204 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.HashSessionIdManager@445e0565
> > 09:42:33.205 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.HashSessionManager@738f651f
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > filterNameMap={safety=safety, krb5Filter=krb5Filter}
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletFilterMap=null
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletPathMap={*.XSP=jsp, *.jsp=jsp, /getimage=getimage,
> > /cancelDelegationToken=cancelDelegationToken, *.JSPF=jsp, *.jspx=jsp,
> > /listPaths/*=listPaths, /conf=conf, *.xsp=jsp, /=default, /fsck=fsck,
> > /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /data/*=data,
> > /contentSummary/*=contentSummary,
> > /renewDelegationToken=renewDelegationToken,
> > /getDelegationToken=getDelegationToken, /fileChecksum/*=checksum,
> > *.JSP=jsp, /jmx=jmx}
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletNameMap={getDelegationToken=getDelegationToken, jsp=jsp, jmx=jmx,
> > data=data, checksum=checksum, conf=conf, stacks=stacks, fsck=fsck,
> > cancelDelegationToken=cancelDelegationToken, listPaths=listPaths,
> > default=default, logLevel=logLevel, contentSummary=contentSummary,
> > getimage=getimage, renewDelegationToken=renewDelegationToken}
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ServletHandler@3fd5e2ae
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ServletHandler@3fd5e2ae
> > 09:42:33.206 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting SecurityHandler@51f35aea
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > SecurityHandler@51f35aea
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting SessionHandler@73152e3f
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > SessionHandler@73152e3f
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting org.mortbay.jetty.webapp.WebAppContext@7cbc11d
> >
> >
> {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ErrorPageErrorHandler@4b38117e
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ErrorPageErrorHandler@4b38117e
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> loaded
> > class
> > org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter
> > from sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.207 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class
> > org.apache.hadoop.security.Krb5AndCertsSslSocketConnector$Krb5SslFilter
> > 09:42:33.208 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > krb5Filter
> > 09:42:33.208 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> loaded
> > class org.apache.hadoop.http.HttpServer$QuotingInputFilter from
> > sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.208 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.210 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > safety
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > conf
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > cancelDelegationToken
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > contentSummary
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > checksum
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > data
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > fsck
> > 09:42:33.211 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > getDelegationToken
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > getimage
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > listPaths
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > renewDelegationToken
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > stacks
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > jmx
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > logLevel
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> loaded
> > class org.apache.jasper.servlet.JspServlet from
> > sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.212 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.apache.jasper.servlet.JspServlet
> > 09:42:33.250 [main] [39mDEBUG [0;39m
> [1;35mo.a.j.compiler.JspRuntimeContext
> > [0;39m - PWC5965: Parent class loader is: ContextLoader@WepAppsContext
> ([])
> > / sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.252 [main] [39mDEBUG [0;39m
> > [1;35morg.apache.jasper.servlet.JspServlet [0;39m - PWC5964: Scratch dir
> > for the JSP engine is:
> > /tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/jsp
> > 09:42:33.252 [main] [39mDEBUG [0;39m
> > [1;35morg.apache.jasper.servlet.JspServlet [0;39m - PWC5966: IMPORTANT:
> Do
> > not modify the generated servlets
> > 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > jsp
> > 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> loaded
> > class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> loaded
> > class org.mortbay.jetty.servlet.DefaultServlet from
> > sun.misc.Launcher$AppClassLoader@23137792
> > 09:42:33.252 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@576f8821
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.ResourceCache@5b525b5f
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > resource base =
> > file:/tmp/Jetty_localhost_localdomain_1543_hdfs____.om70mh/webapp/
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > default
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.webapp.WebAppContext@7cbc11d
> >
> >
> {/,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/hdfs}
> > 09:42:33.258 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Container org.mortbay.jetty.servlet.Context@4e048dc6
> >
> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> > + ErrorHandler@7bece8cf as errorHandler
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > filterNameMap={safety=safety}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletFilterMap=null
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> servletPathMap={/=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> servletNameMap={org.apache.hadoop.http.AdminAuthorizedServlet-1117590713=org.apache.hadoop.http.AdminAuthorizedServlet-1117590713}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ServletHandler@cf7ea2e
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ServletHandler@cf7ea2e
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting org.mortbay.jetty.servlet.Context@4e048dc6
> >
> >
> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ErrorHandler@7bece8cf
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ErrorHandler@7bece8cf
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > safety
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.apache.hadoop.http.AdminAuthorizedServlet
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.apache.hadoop.http.AdminAuthorizedServlet-1117590713
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.Context@4e048dc6
> >
> >
> {/logs,file:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/hadoop-log-dir}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > Container org.mortbay.jetty.servlet.Context@6e4f7806
> >
> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> > + ErrorHandler@7ea8ad98 as errorHandler
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > filterNameMap={safety=safety}
> > 09:42:33.259 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > pathFilters=[(F=safety,[/*],[],15)]
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletFilterMap=null
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> >
> >
> servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1788226358=org.mortbay.jetty.servlet.DefaultServlet-1788226358}
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ServletHandler@23510a7e
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ServletHandler@23510a7e
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting org.mortbay.jetty.servlet.Context@6e4f7806
> >
> >
> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ErrorHandler@7ea8ad98
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ErrorHandler@7ea8ad98
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.apache.hadoop.http.HttpServer$QuotingInputFilter
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > safety
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> Holding
> > class org.mortbay.jetty.servlet.DefaultServlet
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.DefaultServlet-1788226358
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.servlet.Context@6e4f7806
> >
> >
> {/static,jar:file:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar!/webapps/static}
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting ContextHandlerCollection@5a4950dd
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > ContextHandlerCollection@5a4950dd
> > 09:42:33.260 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> > starting Server@9f51be6
> > 09:42:33.264 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > org.mortbay.jetty.nio.SelectChannelConnector$1@501a7f06
> > 09:42:33.272 [main] [34mINFO [0;39m [1;35morg.mortbay.log [0;39m -
> Started
> > SelectChannelConnector@localhost.localdomain:1543
> > 09:42:33.273 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > SelectChannelConnector@localhost.localdomain:1543
> > 09:42:33.273 [main] [39mDEBUG [0;39m [1;35morg.mortbay.log [0;39m -
> started
> > Server@9f51be6
> > 09:42:33.273 [main] [34mINFO [0;39m
> > [1;35mo.a.h.hdfs.server.namenode.NameNode [0;39m - Web-server up at:
> > localhost.localdomain:1543
> > 09:42:33.274 [IPC Server listener on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server listener on 41118:
> > starting
> > 09:42:33.274 [IPC Server Responder] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> starting
> > 09:42:33.275 [IPC Server handler 0 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 0 on
> 41118:
> > starting
> > 09:42:33.276 [IPC Server handler 1 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 1 on
> 41118:
> > starting
> > 09:42:33.277 [IPC Server handler 3 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 3 on
> 41118:
> > starting
> > 09:42:33.277 [IPC Server handler 4 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 4 on
> 41118:
> > starting
> > 09:42:33.277 [IPC Server handler 2 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 2 on
> 41118:
> > starting
> > 09:42:33.281 [IPC Server handler 5 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 5 on
> 41118:
> > starting
> > 09:42:33.281 [IPC Server handler 6 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on
> 41118:
> > starting
> > 09:42:33.281 [IPC Server handler 7 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 7 on
> 41118:
> > starting
> > 09:42:33.281 [IPC Server handler 8 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 8 on
> 41118:
> > starting
> > 09:42:33.283 [IPC Server handler 9 on 41118] [34mINFO [0;39m
> > [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 9 on
> 41118:
> > starting
> > 09:42:33.287 [main] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://slc05muw.us.**.com:41118
> > 09:42:33.321 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> > 09:42:33.328 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is60000ms.
> > 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Use SIMPLE authentication for protocol ClientProtocol
> > 09:42:33.330 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to slc05muw.us.**.com/10.241.3.35:41118
> > 09:42:33.337 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #0
> > 09:42:33.337 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: starting, having connections 1
> > 09:42:33.337 [IPC Server listener on 41118] DEBUG org.apache.hadoop.ipc.Server - Server connection from 10.241.3.35:24701; # active connections: 1; # queued calls: 0
> > 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - Successfully authorized org.apache.hadoop.hdfs.protocol.ClientProtocol-mingtzha
> > 09:42:33.338 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #0
> > 09:42:33.338 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 41118: has #0 from 10.241.3.35:24701
> > 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.339 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 1 procesingTime= 0
> > 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701
> > 09:42:33.340 [IPC Server handler 0 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #0 from 10.241.3.35:24701 Wrote 22 bytes.
> > 09:42:33.340 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #0
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 17
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> > 09:42:33.341 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> > 09:42:33.343 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
> > 09:42:33.343 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #1
> > 09:42:33.344 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #1
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 41118: has #1 from 10.241.3.35:24701
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getProtocolVersion queueTime= 0 procesingTime= 0
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701
> > 09:42:33.344 [IPC Server handler 1 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #1 from 10.241.3.35:24701 Wrote 22 bytes.
> > 09:42:33.344 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #1
> > 09:42:33.344 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getProtocolVersion 1
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Short circuit read is false
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connect to datanode via hostname is false
> > 09:42:33.345 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #2
> > 09:42:33.345 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #2
> > 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 3 on 41118: has #2 from 10.241.3.35:24701
> > 09:42:33.345 [IPC Server handler 3 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning fetched groups for 'mingtzha'
> > 09:42:33.356 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: getDatanodeReport queueTime= 0 procesingTime= 11
> > 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701
> > 09:42:33.357 [IPC Server handler 3 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
> > 09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
> > 09:42:33.357 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: getDatanodeReport 12
> > Cluster is active
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:host.name=slc05muw.us.**.com
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.version=1.7.0_45
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=** Corporation
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
> > 09:42:33.376 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server
> environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzha
/.m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
> >
> >
> 1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtworks
/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcl
ient/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activat
ion-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/r
epository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
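
[Editor's note: one thing worth flagging in the classpath above is that it contains both hbase-0.94.15.jar and hbase-0.94.21-tests.jar, so the HBaseTestingUtility loaded from the 0.94.21 test jar runs against 0.94.15 main classes. Mixed minor versions like this are a plausible (not confirmed) source of mini-cluster startup failures. A minimal pom.xml sketch that pins both artifacts to one version; 0.94.21 is an assumed choice, any single 0.94.x release works as long as the two match:]

```xml
<!-- Sketch: keep the HBase main jar and its test jar on the same version.
     0.94.21 here is an assumption, not a requirement. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.21</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.21</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>
```

[Running `mvn dependency:tree` on the module should confirm which version each jar resolves from before and after the change.]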
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:os.version=2.6.39-300.20.1.el6uek.x86_64
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:user.name=mingtzha
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/mingtzha
> > 09:42:33.377 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Server
> environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
> > 09:42:33.380 [main] DEBUG o.a.z.s.persistence.FileTxnSnapLog - Opening
> datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
> snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
> > 09:42:33.394 [main] INFO o.a.zookeeper.server.ZooKeeperServer - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> > snapdir
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> > 09:42:33.400 [main] INFO o.a.z.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:51126
> > 09:42:33.405 [main] INFO o.a.z.s.persistence.FileTxnSnapLog - Snapshotting: 0x0 to
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
> > 09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.z.server.NIOServerCnxnFactory - Accepted socket connection from /10.241.3.35:44625
> > 09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] INFO o.a.zookeeper.server.NIOServerCnxn - Processing stat command from /10.241.3.35:44625
> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Stat command output
> > 09:42:33.442 [Thread-25] INFO o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /10.241.3.35:44625 (no session established for client)
> > 09:42:33.442 [main] INFO o.a.h.h.z.MiniZooKeeperCluster - Started MiniZK Cluster and connect 1 ZK server on client port: 51126
> > 09:42:33.443 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase: masked=rwxr-xr-x
> > 09:42:33.443 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #3
> > 09:42:33.444 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #3
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 4 on 41118: has #3 from 10.241.3.35:24701
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.mkdirs: /user/mingtzha/hbase
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* mkdirs: /user/mingtzha/hbase
> > 09:42:33.445 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha
> > 09:42:33.447 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.mkdirs: created directory /user/mingtzha/hbase
> > 09:42:33.448 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> > 09:42:33.452 [IPC Server handler 4 on 41118] DEBUG o.a.h.h.server.namenode.FSNamesystem - Preallocated 1048576 bytes at the end of the edit log (offset 4)
> > 09:42:33.455 [IPC Server handler 4 on 41118] INFO o.a.h.h.s.n.FSNamesystem.audit - ugi=mingtzha    ip=/10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null    perm=mingtzha:supergroup:rwxr-xr-x
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - Served: mkdirs queueTime= 0 procesingTime= 10
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701
> > 09:42:33.455 [IPC Server handler 4 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
> > 09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
> > 09:42:33.455 [main] DEBUG org.apache.hadoop.ipc.RPC - Call: mkdirs 12
> > 09:42:33.461 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> > 09:42:33.468 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version, chunkSize=516, chunksPerPacket=127, packetSize=65557
> > 09:42:33.469 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha sending #4
> > 09:42:33.469 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server -  got #4
> > 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118: has #4 from 10.241.3.35:24701
> > 09:42:33.470 [IPC Server handler 2 on 41118] DEBUG o.a.h.security.UserGroupInformation - PriviledgedAction as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.create: /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1 at 10.241.3.35
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* startFile: src=/user/mingtzha/hbase/hbase.version, holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35, createParent=true, replication=0, overwrite=true, append=false
> > 09:42:33.479 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'mingtzha'
> > 09:42:33.479 [IPC Server handler 2 on 41118] WARN org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.480 [IPC Server handler 2 on 41118] ERROR o.a.h.security.UserGroupInformation - PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > 09:42:33.480 [IPC Server handler 2 on 41118] INFO org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118, call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from 10.241.3.35:24701: error: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> > java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689) ~[hadoop-core-1.2.1.jar:na]
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
> >     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428) ~[hadoop-core-1.2.1.jar:na]
> >     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
> >     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) ~[hadoop-core-1.2.1.jar:na]
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426) ~[hadoop-core-1.2.1.jar:na]
> 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701
> 09:42:33.481 [IPC Server handler 2 on 41118] DEBUG org.apache.hadoop.ipc.Server - IPC Server Responder: responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
> 09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
> 09:42:33.482 [main] WARN org.apache.hadoop.hbase.util.FSUtils - Unable to create version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase, retrying: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
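The WARN above shows what the NameNode is rejecting: FSUtils retries create() on hbase.version, and the server-side startFile log records `replication=0`, below the required minimum of 1. That zero has to come from the client's effective configuration, most likely a `dfs.replication` value of 0 picked up from a config file on the test classpath. As a sketch only (the property value below is my assumption of the fix, not something confirmed in this thread), the hbase-site.xml the test loads could pin replication to 1 for the single-node mini cluster:

```xml
<!-- hbase-site.xml on the test classpath (sketch, assuming the current
     config sets dfs.replication to 0): pin replication to 1 so the mini
     cluster's DFSClient does not call create() with replication=0. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

Check this against the hbase-site file you attached; if `dfs.replication` is set to 0 there, that would explain the failure.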
> 09:42:33.497 [main] INFO test -  > Finished HBaseTestSample.setup
> 09:42:33.506 [main] INFO test -  > Started HBaseTestSample.testInsert
> 09:42:33.506 [main] INFO test -  > Finished HBaseTestSample.testInsert
> > FAILED CONFIGURATION: @BeforeMethod setup
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to
> > create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> > Requested replication 0 is less than the required minimum 1
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> >     at
> >
> >
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> >     at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> >
> >     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
> >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >     at com.sun.proxy.$Proxy10.create(Unknown Source)
> >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
> >     at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
> >     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
> >     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
> >     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
> >     at org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
> >     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
> >     at com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> >     at org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
> >     at org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:606)
> >     at org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
> >     at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
> >     at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
> >     at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
> >     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> >     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> >     at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> >     at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> >     at org.testng.TestRunner.privateRun(TestRunner.java:767)
> >     at org.testng.TestRunner.run(TestRunner.java:617)
> >     at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> >     at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
> >     at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
> >     at org.testng.SuiteRunner.run(SuiteRunner.java:240)
> >     at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
> >     at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
> >     at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
> >     at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
> >     at org.testng.TestNG.run(TestNG.java:1057)
> >     at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
> >     at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
> >     at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
> >
> > SKIPPED CONFIGURATION: @AfterMethod destroy
> > SKIPPED: testInsert
> >
> > ===============================================
> >     Default test
> >     Tests run: 1, Failures: 0, Skips: 1
> >     Configuration Failures: 1, Skips: 1
> > ===============================================
> >
> > 09:42:33.535 [main] INFO test - Finished Suite [Default suite]
> >
> > ===============================================
> > Default suite
> > Total tests run: 1, Failures: 0, Skips: 1
> > Configuration Failures: 1, Skips: 1
> > ===============================================
> >
> > [TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
> > [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
> > [TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
> > [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429: 4 ms
> > [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa: 8 ms
> > [TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706: 3 ms
> > 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Starting clear of FileSystem cache with 1 elements.
> > 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.ipc.Client - Stopping client
> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
> > 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining connections 0
> > 09:42:33.589 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - IPC Server listener on 41118: disconnecting client 10.241.3.35:24701. Number of active connections: 1
> > 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Removing filesystem for hdfs://slc05muw.us.**.com:41118
> > 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Done clearing cache
> >
> > The java code:
> >
> > import java.io.BufferedReader;
> > import java.io.InputStreamReader;
> >
> > import org.apache.hadoop.conf.Configuration;
> > import org.apache.hadoop.hbase.HBaseConfiguration;
> > import org.apache.hadoop.hbase.HBaseTestingUtility;
> > import org.apache.hadoop.hbase.client.Delete;
> > import org.apache.hadoop.hbase.client.Get;
> > import org.apache.hadoop.hbase.client.HTable;
> > import org.apache.hadoop.hbase.client.Put;
> > import org.apache.hadoop.hbase.client.Result;
> > import org.apache.hadoop.hbase.util.Bytes;
> > import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
> > import org.testng.annotations.AfterMethod;
> > import org.testng.annotations.BeforeMethod;
> > import org.testng.annotations.Test;
> >
> > public class HBaseTestSample {
> >
> >     private static HBaseTestingUtility utility;
> >     byte[] CF = "CF".getBytes();
> >     byte[] QUALIFIER = "CQ-1".getBytes();
> >
> >     @BeforeMethod
> >     public void setup() throws Exception {
> >         Configuration hbaseConf = HBaseConfiguration.create();
> >
> >         utility = new HBaseTestingUtility(hbaseConf);
> >
> >         Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
> >         BufferedReader br = new BufferedReader(new InputStreamReader(
> >                 process.getInputStream()));
> >         int rc = process.waitFor();
> >         if (rc == 0) {
> >             String umask = br.readLine();
> >
> >             int umaskBits = Integer.parseInt(umask, 8);
> >             int permBits = 0777 & ~umaskBits;
> >             String perms = Integer.toString(permBits, 8);
> >
> >             utility.getConfiguration().set("dfs.datanode.data.dir.perm",
> > perms);
> >         }
> >
> >         utility.startMiniCluster(0);
> >
> >     }
> >
> >     @Test
> >     public void testInsert() throws Exception {
> >         HTable table = utility.createTable(CF, QUALIFIER);
> >
> >         System.out.println("create table t-f");
> >
> >         // byte [] family, byte [] qualifier, byte [] value
> >         table.put(new Put("r".getBytes()).add("f".getBytes(),
> > "c1".getBytes(),
> >                 "v".getBytes()));
> >         Result result = table.get(new Get("r".getBytes()));
> >
> >         System.out.println(result.list().size());
> >
> >         table.delete(new Delete("r".getBytes()));
> >
> >         System.out.println("clean up");
> >
> >     }
> >
> >     @AfterMethod
> >     public void destroy() throws Exception {
> >         utility.cleanupTestDir();
> >     }
> > }
> >
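[Editor's note] The umask arithmetic in the quoted setup() is easy to sanity-check in isolation. This standalone sketch (the class name UmaskPerms is ours, not part of the thread) reproduces the three lines that turn a shell umask into the dfs.datanode.data.dir.perm value:

```java
// Standalone reproduction of the umask-to-permissions arithmetic from setup().
// UmaskPerms is an illustrative name, not part of the original test class.
public class UmaskPerms {

    // e.g. umask "022" -> permissions "755"; umask "077" -> "700"
    static String permsForUmask(String umask) {
        int umaskBits = Integer.parseInt(umask, 8); // umask is octal text
        int permBits = 0777 & ~umaskBits;           // clear the masked bits from rwxrwxrwx
        return Integer.toString(permBits, 8);       // back to octal text for the config value
    }

    public static void main(String[] args) {
        System.out.println(permsForUmask("022")); // prints 755
        System.out.println(permsForUmask("077")); // prints 700
    }
}
```

This matters for the mini cluster because the datanode refuses to start if its data directories do not carry exactly the permissions named in dfs.datanode.data.dir.perm, so the test derives the expected value from the current umask instead of hard-coding one.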
> > hbase-site.xml:
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>
> >     <property>
> >         <name>hbase.rootdir</name>
> >         <value>file:///scratch/mingtzha/hbase/test</value>
> >     </property>
> >     <property>
> >         <name>hbase.tmp.dir</name>
> >         <value>/tmp/hbase</value>
> >     </property>
> >
> >     <property>
> >         <name>hbase.zookeeper.quorum</name>
> >         <value>localhost</value>
> >     </property>
> >     <property>
> >         <name>hbase.cluster.distributed</name>
> >         <value>true</value>
> >     </property>
> >     <property>
> >         <name>hbase.ipc.warn.response.time</name>
> >         <value>1</value>
> >     </property>
> >
> >     <!-- http://hbase.apache.org/book/ops.monitoring.html -->
> >     <!-- -1 => Disable logging by size -->
> >     <!-- <property> -->
> >     <!-- <name>hbase.ipc.warn.response.size</name> -->
> >     <!-- <value>-1</value> -->
> >     <!-- </property> -->
> > </configuration>
> >
> > pom.xml
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> > <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
> > http://www.w3.org/2001/XMLSchema-instance"
> >     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> > http://maven.apache.org/xsd/maven-4.0.0.xsd">
> >     <modelVersion>4.0.0</modelVersion>
> >     <parent>
> >         <groupId>com.**.sites.analytics.tests</groupId>
> >         <artifactId>integration-test</artifactId>
> >         <version>1.0-SNAPSHOT</version>
> >     </parent>
> >
> >     <artifactId>repository-itest</artifactId>
> >     <name>repository-itest</name>
> >
> >     <dependencies>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>test-integ</artifactId>
> >             <version>${project.version}</version>
> >             <scope>test</scope>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics.tests</groupId>
> >             <artifactId>itest-core</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>config-dev</artifactId>
> >             <version>${project.version}</version>
> >             <scope>test</scope>
> >         </dependency>
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>repository-core</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >
> >         <dependency>
> >             <groupId>com.**.sites.analytics</groupId>
> >             <artifactId>repository-hbase</artifactId>
> >             <version>${project.version}</version>
> >         </dependency>
> >
> >         <dependency>
> >             <groupId>org.apache.hbase</groupId>
> >             <artifactId>hbase</artifactId>
> >             <version>0.94.21</version>
> >             <classifier>tests</classifier>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >         <dependency>
> >             <groupId>org.apache.hadoop</groupId>
> >             <artifactId>hadoop-core</artifactId>
> >             <version>1.2.1</version>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >         <dependency>
> >             <groupId>org.apache.hadoop</groupId>
> >             <artifactId>hadoop-test</artifactId>
> >             <version>1.2.1</version>
> >             <exclusions>
> >                 <exclusion>
> >                     <artifactId>slf4j-log4j12</artifactId>
> >                     <groupId>org.slf4j</groupId>
> >                 </exclusion>
> >             </exclusions>
> >         </dependency>
> >     </dependencies>
> > </project>
> >
>

Re: HBaseTestingUtility startMiniCluster throw exception

Posted by Ted Yu <yu...@gmail.com>.
In your config, I see:
    <property>
        <name>hbase.rootdir</name>
        <value>file:///scratch/mingtzha/hbase/test</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
The default value for hbase.cluster.distributed is false (standalone mode).

Since your code is for testing, you should leave hbase.cluster.distributed
set to false.
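[Editor's note] A minimal sketch of the change, assuming the hbase-site.xml quoted earlier in the thread: either delete the hbase.cluster.distributed property or set it to false. (HBaseTestingUtility manages hbase.rootdir and ZooKeeper itself, so those entries are typically unnecessary in a test config as well.)

```xml
<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <!-- false (the default) is required for the in-process mini cluster -->
        <value>false</value>
    </property>
</configuration>
```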

Cheers


On Sat, Aug 2, 2014 at 9:51 AM, Mingtao Zhang <ma...@gmail.com>
wrote:

> HI,
>
> I am really stuck with this. Putting the stack trace, java file, hbase-site
> file and pom file here.
>
> I have 0 knowledge about hadoop and expecting it's transparent for my
> integration test :(.
>
> Thanks in advance!
>
> Best Regards,
> Mingtao
>
> The stack trace:
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #1
> 09:42:33.344 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
> - Call: getProtocolVersion 1
> 09:42:33.345 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - Short circuit read is false
> 09:42:33.345 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - Connect to datanode via hostname is false
> 09:42:33.345 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118
> from mingtzha sending #2
> 09:42:33.345 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #2
> 09:42:33.345 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 3 on 41118:
> has #2 from 10.241.3.35:24701
> 09:42:33.345 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.356 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning fetched groups
> for 'mingtzha'
> 09:42:33.356 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: getDatanodeReport
> queueTime= 0 procesingTime= 11
> 09:42:33.357 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #2 from 10.241.3.35:24701
> 09:42:33.357 [IPC Server handler 3 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #2 from 10.241.3.35:24701 Wrote 61 bytes.
> 09:42:33.357 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #2
> 09:42:33.357 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
> - Call: getDatanodeReport 12
> Cluster is active
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
> host.name=slc05muw.us.**.com
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.version=1.7.0_45
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.vendor=** Corporation
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.home=/scratch/mingtzha/jdk1.7.0_45/jre
> 09:42:33.376 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>
> environment:java.class.path=/scratch/mingtzha/eclipses/eclipse/plugins/org.testng.eclipse_6.8.6.20141201_2240/lib/testng.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-integ/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/test/test-core/target/classes:/home/mingtzha/.m2/repository/org/testng/testng/6.8.7/testng-6.8.7.jar:/home/mingtzha/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/mingtzha/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/mingtzha/.m2/repository/org/beanshell/bsh/2.0b4/bsh-2.0b4.jar:/home/mingtzha/.m2/repository/com/beust/jcommander/1.27/jcommander-1.27.jar:/home/mingtzha/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/mingtzha/.m2/repository/org/assertj/assertj-core/1.5.0/assertj-core-1.5.0.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-testng/2.3.0-b01/hk2-testng-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2/2.3.0-b01/hk2-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/config-types/2.3.0-b01/config-types-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/core/2.3.0-b01/core-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-config/2.3.0-b01/hk2-config-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/jvnet/tiger-types/1.4/tiger-types-1.4.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/bean-validator/2.3.0-b01/bean-validator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-runlevel/2.3.0-b01/hk2-runlevel-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/class-model/2.3.0-b01/class-model-2.3.0-b01.jar:/home/mingtzha
/.m2/repository/org/glassfish/hk2/external/asm-all-repackaged/2.3.0-b01/asm-all-repackaged-2.3.0-b01.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-core/target/classes:/home/mingtzha/.m2/repository/org/yaml/snakeyaml/1.13/snakeyaml-1.13.jar:/home/mingtzha/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/mingtzha/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/mingtzha/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/mingtzha/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/mingtzha/.m2/repository/org/xerial/snappy/snappy-java/
>
> 1.0.4.1/snappy-java-1.0.4.1.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/mingtzha/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/itest-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-model/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-api/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-data/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-avro/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-dev/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/config/config-shared/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-core/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-spi/target/classes:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/core/core-common/target/classes:/home/mingtzha/.m2/repository/com/googlecode/owasp-java-html-sanitizer/owasp-java-html-sanitizer/r209/owasp-java-html-sanitizer-r209.jar:/home/mingtzha/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/mingtzha/.m2/repository/com/fasterxml/uuid/java-uuid-generator/3.1.3/java-uuid-generator-3.1.3.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/app/repository/repository-hbase/target/classes:/home/mingtzha/.m2/repository/org/apache/avro/avro/1.7.5/avro-1.7.5.jar:/home/mingtzha/.m2/repository/com/thoughtworks
/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/mingtzha/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.15/hbase-0.94.15.jar:/home/mingtzha/.m2/repository/org/apache/hbase/hbase/0.94.21/hbase-0.94.21-tests.jar:/home/mingtzha/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/home/mingtzha/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/mingtzha/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/mingtzha/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/mingtzha/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/mingtzha/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/mingtzha/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/mingtzha/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/mingtzha/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/mingtzha/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/mingtzha/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/mingtzha/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/mingtzha/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/mingtzha/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/mingtzha/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/home/mingtzha/.m2/repository/org/apache/thrift/libthrift/0.8.0/libthrift-0.8.0.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcl
ient/4.1.2/httpclient-4.1.2.jar:/home/mingtzha/.m2/repository/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar:/home/mingtzha/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/mingtzha/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.8/jackson-xc-1.8.8.jar:/home/mingtzha/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/mingtzha/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/mingtzha/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/mingtzha/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-core/1.8/jersey-core-1.8.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-json/1.8/jersey-json-1.8.jar:/home/mingtzha/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/mingtzha/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/mingtzha/.m2/repository/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar:/home/mingtzha/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/mingtzha/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/mingtzha/.m2/repository/javax/activation/activation/1.1/activat
ion-1.1.jar:/home/mingtzha/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-core/1.2.1/hadoop-core-1.2.1.jar:/home/mingtzha/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/mingtzha/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/mingtzha/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/mingtzha/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/mingtzha/.m2/repository/net/java/dev/jets3t/jets3t/0.6.1/jets3t-0.6.1.jar:/home/mingtzha/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/mingtzha/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/mingtzha/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/mingtzha/.m2/repository/org/apache/hadoop/hadoop-test/1.2.1/hadoop-test-1.2.1.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/mingtzha/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/slf4j-ext/1.7.5/slf4j-ext-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/cal10n/cal10n-api/0.7.4/cal10n-api-0.7.4.jar:/home/mingtzha/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.5/jcl-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/log4j-over-slf4j/1.7.5/log4j-over-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/org/slf4j/jul-to-slf4j/1.7.5/jul-to-slf4j-1.7.5.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-classic/1.0.13/logback-classic-1.0.13.jar:/home/mingtzha/.m2/repository/ch/qos/logback/logback-core/1.0.13/logback-core-1.0.13.jar:/home/mingtzha/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/mingtzha/.m2/r
epository/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/config-zookeeper/target/classes:/home/mingtzha/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/mingtzha/.m2/repository/joda-time/joda-time/2.3/joda-time-2.3.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-locator/2.3.0-b01/hk2-locator-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/javax.inject/2.3.0-b01/javax.inject-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.3.0-b01/aopalliance-repackaged-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-api/2.3.0-b01/hk2-api-2.3.0-b01.jar:/home/mingtzha/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/mingtzha/.m2/repository/org/glassfish/hk2/hk2-utils/2.3.0-b01/hk2-utils-2.3.0-b01.jar:/home/mingtzha/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.io.tmpdir=/tmp
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:java.compiler=<NA>
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
> os.name=Linux
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:os.arch=amd64
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:os.version=2.6.39-300.20.1.el6uek.x86_64
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server environment:
> user.name=mingtzha
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
> environment:user.home=/home/mingtzha
> 09:42:33.377 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Server
>
> environment:user.dir=/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest
> 09:42:33.380 [main] [39mDEBUG [0;39m
> [1;35mo.a.z.s.persistence.FileTxnSnapLog [0;39m - Opening
>
> datadir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
>
> snapDir:/scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0
> 09:42:33.394 [main] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.ZooKeeperServer [0;39m - Created server with
> tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir
>
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> snapdir
>
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2
> 09:42:33.400 [main] [34mINFO [0;39m [1;35mo.a.z.server.NIOServerCnxnFactory
> [0;39m - binding to port 0.0.0.0/0.0.0.0:51126
> 09:42:33.405 [main] [34mINFO [0;39m
> [1;35mo.a.z.s.persistence.FileTxnSnapLog [0;39m - Snapshotting: 0x0 to
>
> /scratch/mingtzha/repository/12.1/branches/12.1.4.0.3/webcentersites/siteanalytics/integration-test/repository-itest/target/test-data/830f8900-2879-4ed0-b011-550620ca032f/dfscluster_de01abd7-7001-4642-9a00-f1100be0d193/zookeeper_0/version-2/snapshot.0
> 09:42:33.431 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] [34mINFO [0;39m
> [1;35mo.a.z.server.NIOServerCnxnFactory [0;39m - Accepted socket connection
> from /10.241.3.35:44625
> 09:42:33.437 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:51126] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.NIOServerCnxn [0;39m - Processing stat command
> from /10.241.3.35:44625
> 09:42:33.442 [Thread-25] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.NIOServerCnxn [0;39m - Stat command output
> 09:42:33.442 [Thread-25] [34mINFO [0;39m
> [1;35mo.a.zookeeper.server.NIOServerCnxn [0;39m - Closed socket connection
> for client /10.241.3.35:44625 (no session established for client)
> 09:42:33.442 [main] [34mINFO [0;39m [1;35mo.a.h.h.z.MiniZooKeeperCluster
> [0;39m - Started MiniZK Cluster and connect 1 ZK server on client port:
> 51126
> 09:42:33.443 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - /user/mingtzha/hbase: masked=rwxr-xr-x
> 09:42:33.443 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118
> from mingtzha sending #3
> 09:42:33.444 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #3
> 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 4 on 41118:
> has #3 from 10.241.3.35:24701
> 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.mkdirs:
> /user/mingtzha/hbase
> 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* mkdirs:
> /user/mingtzha/hbase
> 09:42:33.445 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> for 'mingtzha'
> 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.mkdirs:
> created directory /user
> 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.mkdirs:
> created directory /user/mingtzha
> 09:42:33.447 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.mkdirs:
> created directory /user/mingtzha/hbase
> 09:42:33.448 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Preallocated 1048576
> bytes at the end of the edit log (offset 4)
> 09:42:33.452 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.h.server.namenode.FSNamesystem [0;39m - Preallocated 1048576
> bytes at the end of the edit log (offset 4)
> 09:42:33.455 [IPC Server handler 4 on 41118] [34mINFO [0;39m
> [1;35mo.a.h.h.s.n.FSNamesystem.audit [0;39m - ugi=mingtzha    ip=/
> 10.241.3.35    cmd=mkdirs    src=/user/mingtzha/hbase    dst=null
> perm=mingtzha:supergroup:rwxr-xr-x
> 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: mkdirs queueTime= 0
> procesingTime= 10
> 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #3 from 10.241.3.35:24701
> 09:42:33.455 [IPC Server handler 4 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #3 from 10.241.3.35:24701 Wrote 18 bytes.
> 09:42:33.455 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #3
> 09:42:33.455 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
> - Call: mkdirs 12
> 09:42:33.461 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 09:42:33.468 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
> chunkSize=516, chunksPerPacket=127, packetSize=65557
> 09:42:33.469 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118
> from mingtzha sending #4
> 09:42:33.469 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #4
> 09:42:33.470 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 2 on 41118:
> has #4 from 10.241.3.35:24701
> 09:42:33.470 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
> /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1
> at 10.241.3.35
> 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
> src=/user/mingtzha/hbase/hbase.version,
> holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
> createParent=true, replication=0, overwrite=true, append=false
> 09:42:33.479 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> for 'mingtzha'
> 09:42:33.479 [IPC Server handler 2 on 41118] WARN
> org.apache.hadoop.hdfs.StateChange - DIR* startFile: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.480 [IPC Server handler 2 on 41118] ERROR
> o.a.h.security.UserGroupInformation -
> PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.480 [IPC Server handler 2 on 41118] INFO
> org.apache.hadoop.ipc.Server - IPC Server handler 2 on 41118,
> call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
> DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
> 10.241.3.35:24701: error: java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> ~[hadoop-core-1.2.1.jar:na]
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> ~[hadoop-core-1.2.1.jar:na]
> 09:42:33.481 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #4 from 10.241.3.35:24701
> 09:42:33.481 [IPC Server handler 2 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #4 from 10.241.3.35:24701 Wrote 1285 bytes.
> 09:42:33.482 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #4
> 09:42:33.482 [main] WARN
> org.apache.hadoop.hbase.util.FSUtils - Unable to create
> version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase,
> retrying: java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
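The root cause is visible in the failing RPC above: the client sends create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x, ..., true, 0, 67108864), i.e. it requests a replication factor of 0, which the NameNode rejects against its minimum of 1. A plausible fix — a sketch only, assuming the 0 is leaking in from a dfs.replication setting in the test's hbase-site.xml or Configuration rather than from code — is to pin it back to 1:

```xml
<!-- hbase-site.xml on the test classpath: a sketch, assuming the
     replication factor of 0 comes from configuration, not code -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Equivalently, set it programmatically with conf.setInt("dfs.replication", 1) on the Configuration handed to new HBaseTestingUtility(conf) before calling startMiniCluster(). It is also worth noting that the classpath above mixes hbase-0.94.15.jar with hbase-0.94.21-tests.jar; aligning those two versions rules out another common source of odd mini-cluster failures.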
> 09:42:33.483 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118
> from mingtzha sending #5
> 09:42:33.483 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #5
> 09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 5 on 41118:
> has #5 from 10.241.3.35:24701
> 09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* Namenode.delete:
> src=/user/mingtzha/hbase/hbase.version, recursive=false
> 09:42:33.483 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* delete:
> /user/mingtzha/hbase/hbase.version
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> for 'mingtzha'
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* FSDirectory.delete:
> /user/mingtzha/hbase/hbase.version
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR*
> FSDirectory.unprotectedDelete: failed to remove
> /user/mingtzha/hbase/hbase.version because it does not exist
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - Served: delete queueTime= 0
> procesingTime= 1
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #5 from 10.241.3.35:24701
> 09:42:33.484 [IPC Server handler 5 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #5 from 10.241.3.35:24701 Wrote 18 bytes.
> 09:42:33.484 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #5
> 09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.RPC [0;39m
> - Call: delete 1
> 09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - /user/mingtzha/hbase/hbase.version: masked=rwxr-xr-x
> 09:42:33.484 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.hdfs.DFSClient
> [0;39m - computePacketChunkSize: src=/user/mingtzha/hbase/hbase.version,
> chunkSize=516, chunksPerPacket=127, packetSize=65557
> 09:42:33.485 [main] [39mDEBUG [0;39m [1;35morg.apache.hadoop.ipc.Client
> [0;39m - IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118
> from mingtzha sending #6
> 09:42:33.485 [pool-1-thread-1] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m -  got #6
> 09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on 41118:
> has #6 from 10.241.3.35:24701
> 09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m - PriviledgedAction
> as:mingtzha from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> 09:42:33.485 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - *DIR* NameNode.create:
> /user/mingtzha/hbase/hbase.version for DFSClient_NONMAPREDUCE_-237185081_1
> at 10.241.3.35
> 09:42:33.486 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile:
> src=/user/mingtzha/hbase/hbase.version,
> holder=DFSClient_NONMAPREDUCE_-237185081_1, clientMachine=10.241.3.35,
> createParent=true, replication=0, overwrite=true, append=false
> 09:42:33.486 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.security.Groups [0;39m - Returning cached groups
> for 'mingtzha'
> 09:42:33.486 [IPC Server handler 6 on 41118] [31mWARN [0;39m
> [1;35morg.apache.hadoop.hdfs.StateChange [0;39m - DIR* startFile: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.486 [IPC Server handler 6 on 41118] [1;31mERROR [0;39m
> [1;35mo.a.h.security.UserGroupInformation [0;39m -
> PriviledgedActionException as:mingtzha cause:java.io.IOException: failed to
> create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> 09:42:33.486 [IPC Server handler 6 on 41118] [34mINFO [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server handler 6 on 41118,
> call create(/user/mingtzha/hbase/hbase.version, rwxr-xr-x,
> DFSClient_NONMAPREDUCE_-237185081_1, true, 0, 67108864) from
> 10.241.3.35:24701: error: java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
> java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
> ~[hadoop-core-1.2.1.jar:na]
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
> ~[hadoop-core-1.2.1.jar:na]
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
> ~[hadoop-core-1.2.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[na:1.7.0_45]
>     at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> ~[na:1.7.0_45]
>     at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.7.0_45]
>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
> ~[hadoop-core-1.2.1.jar:na]
>     at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_45]
>     at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
>     at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> ~[hadoop-core-1.2.1.jar:na]
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
> ~[hadoop-core-1.2.1.jar:na]
> 09:42:33.487 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #6 from 10.241.3.35:24701
> 09:42:33.487 [IPC Server handler 6 on 41118] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Server [0;39m - IPC Server Responder:
> responding to #6 from 10.241.3.35:24701 Wrote 1285 bytes.
> 09:42:33.487 [IPC Client (47) connection to slc05muw.us.**.com/
> 10.241.3.35:41118 from mingtzha] [39mDEBUG [0;39m
> [1;35morg.apache.hadoop.ipc.Client [0;39m - IPC Client (47) connection to
> slc05muw.us.**.com/10.241.3.35:41118 from mingtzha got value #6
> 09:42:33.487 [main] [31mWARN [0;39m
> [1;35morg.apache.hadoop.hbase.util.FSUtils [0;39m - Unable to create
> version file at hdfs://slc05muw.us.**.com:41118/user/mingtzha/hbase,
> retrying: java.io.IOException: failed to create file
> /user/mingtzha/hbase/hbase.version on client 10.241.3.35.
> Requested replication 0 is less than the required minimum 1
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
> [the identical delete/create sequence and IOException repeat for IPC calls #7-#10 while FSUtils retries writing hbase.version]
> 09:42:33.497 [main] INFO test -  > Finished HBaseTestSample.setup
> 09:42:33.506 [main] INFO test -  > Started HBaseTestSample.testInsert
> 09:42:33.506 [main] INFO test -  > Finished HBaseTestSample.testInsert
> FAILED CONFIGURATION: @BeforeMethod setup
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create file /user/mingtzha/hbase/hbase.version on client 10.241.3.35. Requested replication 0 is less than the required minimum 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1591)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1527)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:710)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:689)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at com.sun.proxy.$Proxy10.create(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at com.sun.proxy.$Proxy10.create(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3451)
>     at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:205)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
>     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
>     at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:360)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:774)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:646)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:628)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:576)
>     at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:563)
>     at com.**.sites.analytics.repository.itest.endeca.HBaseTestSample.setup(HBaseTestSample.java:101)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
>     at org.testng.internal.MethodInvocationHelper$2.runConfigurationMethod(MethodInvocationHelper.java:292)
>     at org.jvnet.testing.hk2testng.HK2TestListenerAdapter.run(HK2TestListenerAdapter.java:97)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.testng.internal.MethodInvocationHelper.invokeConfigurable(MethodInvocationHelper.java:304)
>     at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:556)
>     at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
>     at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
>     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
>     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
>     at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
>     at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
>     at org.testng.TestRunner.privateRun(TestRunner.java:767)
>     at org.testng.TestRunner.run(TestRunner.java:617)
>     at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
>     at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
>     at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
>     at org.testng.SuiteRunner.run(SuiteRunner.java:240)
>     at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
>     at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
>     at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
>     at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
>     at org.testng.TestNG.run(TestNG.java:1057)
>     at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
>     at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
>     at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
>
> SKIPPED CONFIGURATION: @AfterMethod destroy
> SKIPPED: testInsert
>
> ===============================================
>     Default test
>     Tests run: 1, Failures: 0, Skips: 1
>     Configuration Failures: 1, Skips: 1
> ===============================================
>
> 09:42:33.535 [main] [34mINFO [0;39m [1;35mtest [0;39m - Finished Suite
> [Default suite]
>
> ===============================================
> Default suite
> Total tests run: 1, Failures: 0, Skips: 1
> Configuration Failures: 1, Skips: 1
> ===============================================
>
> [TestNG] Time taken by org.testng.reporters.XMLReporter@71aeef97: 6 ms
> [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 4 ms
> [TestNG] Time taken by org.testng.reporters.jq.Main@2b430201: 24 ms
> [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@3309b429: 4 ms
> [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@7224eaaa: 8 ms
> [TestNG] Time taken by org.testng.reporters.EmailableReporter2@53b74706: 3 ms
> 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Starting clear of FileSystem cache with 1 elements.
> 09:42:33.588 [Thread-0] DEBUG org.apache.hadoop.ipc.Client - Stopping client
> 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: closed
> 09:42:33.589 [IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to slc05muw.us.**.com/10.241.3.35:41118 from mingtzha: stopped, remaining connections 0
> 09:42:33.589 [pool-1-thread-1] DEBUG org.apache.hadoop.ipc.Server - IPC Server listener on 41118: disconnecting client 10.241.3.35:24701. Number of active connections: 1
> 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Removing filesystem for hdfs://slc05muw.us.**.com:41118
> 09:42:33.689 [Thread-0] DEBUG org.apache.hadoop.fs.FileSystem - Done clearing cache
>
> The Java code:
>
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HBaseTestingUtility;
> import org.apache.hadoop.hbase.client.Delete;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
> import org.testng.annotations.AfterMethod;
> import org.testng.annotations.BeforeMethod;
> import org.testng.annotations.Test;
>
> public class HBaseTestSample {
>
>     private static HBaseTestingUtility utility;
>     byte[] CF = "CF".getBytes();
>     byte[] QUALIFIER = "CQ-1".getBytes();
>
>     @BeforeMethod
>     public void setup() throws Exception {
>         Configuration hbaseConf = HBaseConfiguration.create();
>
>         utility = new HBaseTestingUtility(hbaseConf);
>
>         Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
>         BufferedReader br = new BufferedReader(new InputStreamReader(
>                 process.getInputStream()));
>         int rc = process.waitFor();
>         if (rc == 0) {
>             String umask = br.readLine();
>
>             int umaskBits = Integer.parseInt(umask, 8);
>             int permBits = 0777 & ~umaskBits;
>             String perms = Integer.toString(permBits, 8);
>
>             utility.getConfiguration().set("dfs.datanode.data.dir.perm",
> perms);
>         }
>
>         // start at least one region server; 0 slaves leaves no RS to serve regions
>         utility.startMiniCluster(1);
>
>     }
>
>     @Test
>     public void testInsert() throws Exception {
>         // createTable(tableName, family): table "CF" with column family "CQ-1"
>         HTable table = utility.createTable(CF, QUALIFIER);
>
>         System.out.println("create table t-f");
>
>         // the Put must target the family the table was created with,
>         // otherwise it fails with NoSuchColumnFamilyException
>         table.put(new Put("r".getBytes()).add(QUALIFIER, "c1".getBytes(),
>                 "v".getBytes()));
>         Result result = table.get(new Get("r".getBytes()));
>
>         System.out.println(result.list().size());
>
>         table.delete(new Delete("r".getBytes()));
>
>         System.out.println("clean up");
>     }
>
>     @AfterMethod
>     public void destroy() throws Exception {
>         // shut the mini cluster down before removing its directories
>         utility.shutdownMiniCluster();
>         utility.cleanupTestDir();
>     }
> }
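The umask-to-permissions arithmetic in setup() can be checked in isolation with plain JDK code, no Hadoop or HBase on the classpath. This is a sketch; the class and method names here are made up for illustration, only the arithmetic mirrors the setup() logic above:

```java
public class UmaskPerms {

    // Convert an octal umask string (e.g. "0022") into the resulting
    // permission string (e.g. "755"), as done in setup() for
    // dfs.datanode.data.dir.perm.
    static String computePerms(String umask) {
        int umaskBits = Integer.parseInt(umask, 8); // umask is octal
        int permBits = 0777 & ~umaskBits;           // clear the masked bits
        return Integer.toString(permBits, 8);       // back to octal text
    }

    public static void main(String[] args) {
        // umask 0022 masks group/other write -> 755
        System.out.println(computePerms("0022")); // prints 755
    }
}
```

Running this against the umask your shell reports lets you confirm the value being pushed into dfs.datanode.data.dir.perm before the mini cluster ever starts.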
>
> hbase-site.xml:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
>     <property>
>         <name>hbase.rootdir</name>
>         <value>file:///scratch/mingtzha/hbase/test</value>
>     </property>
>     <property>
>         <name>hbase.tmp.dir</name>
>         <value>/tmp/hbase</value>
>     </property>
>
>     <property>
>         <name>hbase.zookeeper.quorum</name>
>         <value>localhost</value>
>     </property>
>     <property>
>         <name>hbase.cluster.distributed</name>
>         <value>true</value>
>     </property>
>     <property>
>         <name>hbase.ipc.warn.response.time</name>
>         <value>1</value>
>     </property>
>
>     <!-- http://hbase.apache.org/book/ops.monitoring.html -->
>     <!-- -1 => Disable logging by size -->
>     <!-- <property> -->
>     <!-- <name>hbase.ipc.warn.response.size</name> -->
>     <!-- <value>-1</value> -->
>     <!-- </property> -->
> </configuration>
>
> pom.xml
>
> <?xml version="1.0" encoding="UTF-8"?>
> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="
> http://www.w3.org/2001/XMLSchema-instance"
>     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>     <modelVersion>4.0.0</modelVersion>
>     <parent>
>         <groupId>com.**.sites.analytics.tests</groupId>
>         <artifactId>integration-test</artifactId>
>         <version>1.0-SNAPSHOT</version>
>     </parent>
>
>     <artifactId>repository-itest</artifactId>
>     <name>repository-itest</name>
>
>     <dependencies>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>test-integ</artifactId>
>             <version>${project.version}</version>
>             <scope>test</scope>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics.tests</groupId>
>             <artifactId>itest-core</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>config-dev</artifactId>
>             <version>${project.version}</version>
>             <scope>test</scope>
>         </dependency>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>repository-core</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>
>         <dependency>
>             <groupId>com.**.sites.analytics</groupId>
>             <artifactId>repository-hbase</artifactId>
>             <version>${project.version}</version>
>         </dependency>
>
>         <dependency>
>             <groupId>org.apache.hbase</groupId>
>             <artifactId>hbase</artifactId>
>             <version>0.94.21</version>
>             <classifier>tests</classifier>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.hadoop</groupId>
>             <artifactId>hadoop-core</artifactId>
>             <version>1.2.1</version>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>         <dependency>
>             <groupId>org.apache.hadoop</groupId>
>             <artifactId>hadoop-test</artifactId>
>             <version>1.2.1</version>
>             <exclusions>
>                 <exclusion>
>                     <artifactId>slf4j-log4j12</artifactId>
>                     <groupId>org.slf4j</groupId>
>                 </exclusion>
>             </exclusions>
>         </dependency>
>     </dependencies>
> </project>
>