You are viewing a plain text version of this content. The canonical link for it is here.
Posted to user@accumulo.apache.org by Keys Botzum <kb...@maprtech.com> on 2012/04/10 20:08:18 UTC

Re: Accumulo on MapR Continued - LargeRowTest

At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest

When I run it, this is the output I see:
./run.py -t largerowtest -d -v10
….
09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
DEBUG:test.auto:{
'tserver.compaction.major.delay':'1',
}

DEBUG:test.auto:
INFO:test.auto:killing accumulo processes everywhere
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
DEBUG:test.auto:Exit code: 255
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
Instance name : SE-test-04-22187
Enter initial password for root: ******
Confirm initial password for root: ******
10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
DEBUG:test.auto:Exit code: 0
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
DEBUG:test.auto:Exit code: 0
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
DEBUG:test.auto:
DEBUG:test.auto:
DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
DEBUG:test.auto:err: 	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.start.Main$1.run(Main.java:89)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
DEBUG:test.auto:err: 
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
	at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
	at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
	at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
	at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
	at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
	... 6 more
DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
	at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
	... 11 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
	at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
	at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
	at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
DEBUG:test.auto:err: 
	at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
	at $Proxy1.startScan(Unknown Source)
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
	... 13 more
ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.start.Main$1.run(Main.java:89)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
	at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
	at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
	at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
	at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
	at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
	... 6 more
Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
	at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
	... 11 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
	at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
	at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
	at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
	at $Proxy1.startScan(Unknown Source)
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
	... 13 more

FAIL
======================================================================
FAIL: runTest (simple.largeRow.LargeRowTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
    self.waitForStop(handle, self.maxRuntime)
  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
    self.assert_(self.processResult(out, err, handle.returncode))
AssertionError: False is not true


======================================================================
FAIL: runTest (simple.largeRow.LargeRowTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
    self.waitForStop(handle, self.maxRuntime)
  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
    self.assert_(self.processResult(out, err, handle.returncode))
AssertionError: False is not true

----------------------------------------------------------------------
Ran 1 test in 43.014s

FAILED (failures=1)


The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run. 


09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
java.io.IOException: invalid distance too far back
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.FilterInputStream.read(FilterInputStream.java:66)
        at java.io.DataInputStream.readByte(DataInputStream.java:248)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
        at $Proxy0.startScan(Unknown Source)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
        ... 15 more
Caused by: java.io.IOException: invalid distance too far back
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.FilterInputStream.read(FilterInputStream.java:66)
        at java.io.DataInputStream.readByte(DataInputStream.java:248)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        ... 1 more


After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.

When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:

10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
        at $Proxy0.startScan(Unknown Source)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
        ... 15 more
Caused by: java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
        at $Proxy0.startScan(Unknown Source)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
        ... 15 more
Caused by: java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        ... 1 more


So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas: 
1) the file was originally written incorrectly by the writer, 
2) the reader is reading too far

This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.

If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Comments inline below

On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>
> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>
>
> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
> 3.89547MB (14570456),Memory=0.0MB (0)
> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :

This looks like the data I would expect for this test, it reads a 128k
row, then a 0 len colfam, 0 len colqual, and 0 len colvis.

> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47

The reads above do not look right, the data looks like the data that
would be generated for the row.  However, the length is not right. Its
65 bytes instead of 128k.  Now it may not be trying to read the row,
there is not enough information here to be sure.  But its definitely
trying to read two fields that look rows in the test data, and the
lengths are not whats expected.  So the read is off, I suspect its not
starting read the key in the right place for some reason.  I am going
to look around in the code and see where the best places to add some
more debug might be.

> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
> RUNCATED<
> java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
>
>
> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>
>
> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>
> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
>  secs, nbTimes = [23 23 23.00 1]
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
> P"RsUOI
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
> SE-test-04-32318/tables/2/t-0000000/F000000q.rf
> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
> ;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
> java.io.EOFException
>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>       …..
>
> So it looks like we are missing quite a bit of data.
>
> Any help or ideas appreciated.
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>
>> Keys,
>>
>> Looking at the test, it writes out random rows that 128k in len.  The
>> column family and column qualifier it writes out are 0 bytes long.
>> When the non compression test failed, it was trying to read a column
>> qualifier.  If we assume that it was reading a column qualifier from
>> the test table then it should be calling readFully() with a zero
>> length array.
>>
>> Trying to think how to debug this.  One way may be to change the code
>> in RelativeKey to the following and run the test.  This will show us
>> what its trying to do right before it hits the eof, but it will also
>> generate a lot of noise as things scan the metadata table.
>>
>>  private byte[] read(DataInput in) throws IOException {
>>    int len = WritableUtils.readVInt(in);
>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>    byte[] data = new byte[len];
>>    in.readFully(data);
>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>> new String(data).substring(0, Math.min(data.length, 60)));
>>    return data;
>>  }
>>
>> Keith
>>
>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>
>>> When I run it, this is the output I see:
>>> ./run.py -t largerowtest -d -v10
>>> ….
>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>> DEBUG:test.auto:{
>>> 'tserver.compaction.major.delay':'1',
>>> }
>>>
>>> DEBUG:test.auto:
>>> INFO:test.auto:killing accumulo processes everywhere
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>> DEBUG:test.auto:Exit code: 255
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>> Instance name : SE-test-04-22187
>>> Enter initial password for root: ******
>>> Confirm initial password for root: ******
>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>> DEBUG:test.auto:Exit code: 0
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>> DEBUG:test.auto:Exit code: 0
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>> DEBUG:test.auto:
>>> DEBUG:test.auto:
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>> DEBUG:test.auto:err:
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>        ... 6 more
>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>        ... 11 more
>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>> DEBUG:test.auto:err:
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>        at $Proxy1.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>        ... 13 more
>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>> java.lang.reflect.InvocationTargetException
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>        ... 6 more
>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>        ... 11 more
>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>        at $Proxy1.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>        ... 13 more
>>>
>>> FAIL
>>> ======================================================================
>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>> ----------------------------------------------------------------------
>>> Traceback (most recent call last):
>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>    self.waitForStop(handle, self.maxRuntime)
>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>> AssertionError: False is not true
>>>
>>>
>>> ======================================================================
>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>> ----------------------------------------------------------------------
>>> Traceback (most recent call last):
>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>    self.waitForStop(handle, self.maxRuntime)
>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>> AssertionError: False is not true
>>>
>>> ----------------------------------------------------------------------
>>> Ran 1 test in 43.014s
>>>
>>> FAILED (failures=1)
>>>
>>>
>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>>>
>>>
>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>> java.io.IOException: invalid distance too far back
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.IOException: invalid distance too far back
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        ... 1 more
>>>
>>>
>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>>>
>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>
>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        ... 1 more
>>>
>>>
>>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>> 1) the file was originally written incorrectly by the writer,
>>> 2) the reader is reading too far
>>>
>>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>>>
>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>
>>> Thanks,
>>> Keys
>>> ________________________________
>>> Keys Botzum
>>> Senior Principal Technologist
>>> WW Systems Engineering
>>> kbotzum@maprtech.com
>>> 443-718-0098
>>> MapR Technologies
>>> http://www.mapr.com
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Eric Newton <er...@gmail.com>.
The latelastcontact test was recently re-written, so it will work in the
next release.

You could configure and run the continuous ingest and random walk tests.
 We run them for 24 hours, with agitation (random server kill) and without
prior to a release.

These are more likely to stress the underlying filesystem.

-Eric

On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:

> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering,
> all of the Accumulo tests is test/system/auto are now passing. Note that
> the latelastcontact test only passes if I actually install zookeeper on the
> host. This is because of the dependency on zkCli.sh that I mentioned
> earlier.
>
> The final piece of the puzzle was that MapR does aggressive read ahead
> caching of data as well as aggregation of writes to improve performance. As
> with Hbase, we don't think this type of behavior is helpful with something
> like Accumulo. In our specific case, the interaction between Accumulo and
> MapR's behavior results in the large row test failing.
>
> So now I have one more question. To disable the caching and aggregation
> behavior, we need to set these properties:
> <property>
> <name>fs.mapr.readbuffering</name>
> <value>false</value>
> </property>
>
> <property>
> <name>fs.mapr.aggregate.writes</name>
> <value>false</value>
> </property>
>
> If I set them in core-site.xml they of course work but that's a global
> setting. I want to only affect Accumulo. If I set them in
> accumulo-site.xml, I presume they take effect for normal Accumulo usage,
> but I'm nearly certain that settings in accumulo-site.xml do not affect the
> tests as I posted earlier. How can I set those two properties in a way that
> will cause the tests temporary configuration to take them into account? I
> tried editing TestUtils.py TestUtilsMixin settings as did work for the
> Accumulo property table.file.compress.type, but the MapR related properties
> don't seem to take. Ideas?
>
> Also, if all of the auto tests pass successfully do you feel comfortable
> that the testing was sufficient or do you recommend running additional
> tests?
>
> Thanks!
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>

Re: Accumulo on MapR Continued

Posted by Keys Botzum <kb...@maprtech.com>.
Eric,

Clever. I'll add that to the doc as an option.

Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 20, 2012, at 9:44 AM, Eric Newton wrote:

> You should be able to adjust the classpath in conf/accumulo-site.xml, and remove $HADOOP_HOME/conf and just put the updated core-site.xml in the accumulo/conf directory.
> 
> -Eric
> 
> On Fri, Apr 20, 2012 at 9:35 AM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
> 
> I was able to get Accumulo to use a Accumulo specific configuration of Hadoop. It was a bit of a hack. Basically I created a fake Hadoop installation tree that is almost entirely symbolic links to the real tree under /opt/mapr/hadoop. The only real file in the tree is core-site.xml where I set the two properties. The essential steps where:
> 	cd /opt/accumulo
> 	mkdir hadoop
> 	mkdir hadoop/hadoop-0.20.2
> 	cd  hadoop/hadoop-0.20.2
> 	ln -s /opt/mapr/hadoop/hadoop-0.20.2/* .
> 	rm conf
> 	mkdir conf
> 	cd conf
> 	ln -s /opt/mapr/hadoop/hadoop-0.20.2/conf/*
> 	cp core-site.xml t
> 	mv t core-site.xml
> 	edit core-site.xml as needed
> 
> Then I set the HADOOP_HOME in accumulo-env.sh to that directory and everything worked fine.
> 
> By the way, I tried setting HADOOP_CONF_DIR and that had no effect.
> 
> Since I plan to document these steps, I want to make sure I understood your intent and that I haven't missed something. Typically in Hadoop components the ultimate configuration is a combination of each components *-site.xml file. As a result I can set things in, for example, hbase-site.xml that are really Hadoop properties. Assuming I understood what you and Eric were saying, this is not true in Accumulo. That's fine by me, but I just want to make sure I'm not saying things that aren't true.
> 
> Thanks again for all of your help,
> Keys
> 
> p.s. I'm running the random and ingest tests you and Eric suggested as we speak. The random test completed successfully.
> 
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
> 
> 


Re: Accumulo on MapR Continued

Posted by Eric Newton <er...@gmail.com>.
You should be able to adjust the classpath in conf/accumulo-site.xml, and
remove $HADOOP_HOME/conf and just put the updated core-site.xml in the
accumulo/conf directory.

-Eric

On Fri, Apr 20, 2012 at 9:35 AM, Keys Botzum <kb...@maprtech.com> wrote:

> Keith,
>
> I was able to get Accumulo to use a Accumulo specific configuration of
> Hadoop. It was a bit of a hack. Basically I created a fake Hadoop
> installation tree that is almost entirely symbolic links to the real tree
> under /opt/mapr/hadoop. The only real file in the tree is core-site.xml
> where I set the two properties. The essential steps where:
> cd /opt/accumulo
> mkdir hadoop
> mkdir hadoop/hadoop-0.20.2
> cd  hadoop/hadoop-0.20.2
> ln -s /opt/mapr/hadoop/hadoop-0.20.2/* .
> rm conf
> mkdir conf
> cd conf
> ln -s /opt/mapr/hadoop/hadoop-0.20.2/conf/*
> cp core-site.xml t
> mv t core-site.xml
> edit core-site.xml as needed
>
> Then I set the HADOOP_HOME in accumulo-env.sh to that directory and
> everything worked fine.
>
> By the way, I tried setting HADOOP_CONF_DIR and that had no effect.
>
> Since I plan to document these steps, I want to make sure I understood
> your intent and that I haven't missed something. Typically in Hadoop
> components the ultimate configuration is a combination of each components
> *-site.xml file. As a result I can set things in, for example,
> hbase-site.xml that are really Hadoop properties. Assuming I understood
> what you and Eric were saying, this is not true in Accumulo. That's fine by
> me, but I just want to make sure I'm not saying things that aren't true.
>
> Thanks again for all of your help,
> Keys
>
> p.s. I'm running the random and ingest tests you and Eric suggested as we
> speak. The random test completed successfully.
>
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 18, 2012, at 3:11 PM, Keith Turner wrote:
>
> I suppose accumulo could be pointed to a different hadoop config dir.
>
> On Wed, Apr 18, 2012 at 1:58 PM, Keys Botzum <kb...@maprtech.com> wrote:
>
> Eric and Keith,
>
>
> I will attempt the additional tests you have suggested.
>
>
> Any ideas on what to do regarding those configuration properties? With
> hbase
>
> in hbase-site.xml, we set those properties and they work fine. Is there
> some
>
> incantation I'm missing here? I really don't want those properties to be
>
> global as they will negatively impact performance and are only relevant to
>
> hbase and Accumulo.
>
>
> Thanks,
>
> Keys
>
> ________________________________
>
> Keys Botzum
>
> Senior Principal Technologist
>
> WW Systems Engineering
>
> kbotzum@maprtech.com
>
> 443-718-0098
>
> MapR Technologies
>
> http://www.mapr.com
>
>
>
>
> On Apr 18, 2012, at 1:42 PM, Keith Turner wrote:
>
>
> Settings in accumulo-site.xml do not end up in the hadoop config
>
> object, so setting them will probably have no effect.
>
>
> I would suggest running continuous ingest test and random walk test if
>
> you really want to stress it.  These are the test we use prior to an
>
> accumulo release.  You would need to exclude the random walk security
>
> test, it triggers known bug in 1.4 that are not fixed.
>
>
> Running the test on a cluster overnight would be good.
>
>
> Keith
>
>
> On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:
>
>
> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering,
>
> all of the Accumulo tests is test/system/auto are now passing. Note that
> the
>
> latelastcontact test only passes if I actually install zookeeper on the
>
> host. This is because of the dependency on zkCli.sh that I mentioned
>
> earlier.
>
>
>
> The final piece of the puzzle was that MapR does aggressive read ahead
>
> caching of data as well as aggregation of writes to improve performance. As
>
> with Hbase, we don't think this type of behavior is helpful with something
>
> like Accumulo. In our specific case, the interaction between Accumulo and
>
> MapR's behavior results in the large row test failing.
>
>
>
> So now I have one more question. To disable the caching and aggregation
>
> behavior, we need to set these properties:
>
>
> <property>
>
>
> <name>fs.mapr.readbuffering</name>
>
>
> <value>false</value>
>
>
> </property>
>
>
>
> <property>
>
>
> <name>fs.mapr.aggregate.writes</name>
>
>
> <value>false</value>
>
>
> </property>
>
>
>
> If I set them in core-site.xml they of course work but that's a global
>
> setting. I want to only affect Accumulo. If I set them in
> accumulo-site.xml,
>
> I presume they take effect for normal Accumulo usage, but I'm nearly
> certain
>
> that settings in accumulo-site.xml do not affect the tests as I posted
>
> earlier. How can I set those two properties in a way that will cause the
>
> tests temporary configuration to take them into account? I tried editing
>
> TestUtils.py TestUtilsMixin settings as did work for the Accumulo property
>
> table.file.compress.type, but the MapR related properties don't seem to
>
> take. Ideas?
>
>
>
> Also, if all of the auto tests pass successfully do you feel comfortable
>
> that the testing was sufficient or do you recommend running additional
>
> tests?
>
>
>
> Thanks!
>
>
> Keys
>
>
> ________________________________
>
>
> Keys Botzum
>
>
> Senior Principal Technologist
>
>
> WW Systems Engineering
>
>
> kbotzum@maprtech.com
>
>
> 443-718-0098
>
>
> MapR Technologies
>
>
> http://www.mapr.com
>
>
>
>
>
>

Re: Accumulo on MapR Continued

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

I was able to get Accumulo to use a Accumulo specific configuration of Hadoop. It was a bit of a hack. Basically I created a fake Hadoop installation tree that is almost entirely symbolic links to the real tree under /opt/mapr/hadoop. The only real file in the tree is core-site.xml where I set the two properties. The essential steps where:
	cd /opt/accumulo
	mkdir hadoop
	mkdir hadoop/hadoop-0.20.2
	cd  hadoop/hadoop-0.20.2
	ln -s /opt/mapr/hadoop/hadoop-0.20.2/* .
	rm conf
	mkdir conf
	cd conf
	ln -s /opt/mapr/hadoop/hadoop-0.20.2/conf/*
	cp core-site.xml t
	mv t core-site.xml
	edit core-site.xml as needed

Then I set the HADOOP_HOME in accumulo-env.sh to that directory and everything worked fine.

By the way, I tried setting HADOOP_CONF_DIR and that had no effect.

Since I plan to document these steps, I want to make sure I understood your intent and that I haven't missed something. Typically in Hadoop components the ultimate configuration is a combination of each components *-site.xml file. As a result I can set things in, for example, hbase-site.xml that are really Hadoop properties. Assuming I understood what you and Eric were saying, this is not true in Accumulo. That's fine by me, but I just want to make sure I'm not saying things that aren't true.

Thanks again for all of your help,
Keys

p.s. I'm running the random and ingest tests you and Eric suggested as we speak. The random test completed successfully.
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 18, 2012, at 3:11 PM, Keith Turner wrote:

> I suppose accumulo could be pointed to a different hadoop config dir.
> 
> On Wed, Apr 18, 2012 at 1:58 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> Eric and Keith,
>> 
>> I will attempt the additional tests you have suggested.
>> 
>> Any ideas on what to do regarding those configuration properties? With hbase
>> in hbase-site.xml, we set those properties and they work fine. Is there some
>> incantation I'm missing here? I really don't want those properties to be
>> global as they will negatively impact performance and are only relevant to
>> hbase and Accumulo.
>> 
>> Thanks,
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com
>> 
>> 
>> 
>> On Apr 18, 2012, at 1:42 PM, Keith Turner wrote:
>> 
>> Settings in accumulo-site.xml do not end up in the hadoop config
>> object, so setting them will probably have no effect.
>> 
>> I would suggest running continuous ingest test and random walk test if
>> you really want to stress it.  These are the test we use prior to an
>> accumulo release.  You would need to exclude the random walk security
>> test, it triggers known bug in 1.4 that are not fixed.
>> 
>> Running the test on a cluster overnight would be good.
>> 
>> Keith
>> 
>> On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> 
>> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering,
>> all of the Accumulo tests is test/system/auto are now passing. Note that the
>> latelastcontact test only passes if I actually install zookeeper on the
>> host. This is because of the dependency on zkCli.sh that I mentioned
>> earlier.
>> 
>> 
>> The final piece of the puzzle was that MapR does aggressive read ahead
>> caching of data as well as aggregation of writes to improve performance. As
>> with Hbase, we don't think this type of behavior is helpful with something
>> like Accumulo. In our specific case, the interaction between Accumulo and
>> MapR's behavior results in the large row test failing.
>> 
>> 
>> So now I have one more question. To disable the caching and aggregation
>> behavior, we need to set these properties:
>> 
>> <property>
>> 
>> <name>fs.mapr.readbuffering</name>
>> 
>> <value>false</value>
>> 
>> </property>
>> 
>> 
>> <property>
>> 
>> <name>fs.mapr.aggregate.writes</name>
>> 
>> <value>false</value>
>> 
>> </property>
>> 
>> 
>> If I set them in core-site.xml they of course work but that's a global
>> setting. I want to only affect Accumulo. If I set them in accumulo-site.xml,
>> I presume they take effect for normal Accumulo usage, but I'm nearly certain
>> that settings in accumulo-site.xml do not affect the tests as I posted
>> earlier. How can I set those two properties in a way that will cause the
>> tests temporary configuration to take them into account? I tried editing
>> TestUtils.py TestUtilsMixin settings as did work for the Accumulo property
>> table.file.compress.type, but the MapR related properties don't seem to
>> take. Ideas?
>> 
>> 
>> Also, if all of the auto tests pass successfully do you feel comfortable
>> that the testing was sufficient or do you recommend running additional
>> tests?
>> 
>> 
>> Thanks!
>> 
>> Keys
>> 
>> ________________________________
>> 
>> Keys Botzum
>> 
>> Senior Principal Technologist
>> 
>> WW Systems Engineering
>> 
>> kbotzum@maprtech.com
>> 
>> 443-718-0098
>> 
>> MapR Technologies
>> 
>> http://www.mapr.com
>> 
>> 
>> 


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
I suppose accumulo could be pointed to a different hadoop config dir.

On Wed, Apr 18, 2012 at 1:58 PM, Keys Botzum <kb...@maprtech.com> wrote:
> Eric and Keith,
>
> I will attempt the additional tests you have suggested.
>
> Any ideas on what to do regarding those configuration properties? With hbase
> in hbase-site.xml, we set those properties and they work fine. Is there some
> incantation I'm missing here? I really don't want those properties to be
> global as they will negatively impact performance and are only relevant to
> hbase and Accumulo.
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 18, 2012, at 1:42 PM, Keith Turner wrote:
>
> Settings in accumulo-site.xml do not end up in the hadoop config
> object, so setting them will probably have no effect.
>
> I would suggest running continuous ingest test and random walk test if
> you really want to stress it.  These are the test we use prior to an
> accumulo release.  You would need to exclude the random walk security
> test, it triggers known bug in 1.4 that are not fixed.
>
> Running the test on a cluster overnight would be good.
>
> Keith
>
> On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:
>
> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering,
> all of the Accumulo tests is test/system/auto are now passing. Note that the
> latelastcontact test only passes if I actually install zookeeper on the
> host. This is because of the dependency on zkCli.sh that I mentioned
> earlier.
>
>
> The final piece of the puzzle was that MapR does aggressive read ahead
> caching of data as well as aggregation of writes to improve performance. As
> with Hbase, we don't think this type of behavior is helpful with something
> like Accumulo. In our specific case, the interaction between Accumulo and
> MapR's behavior results in the large row test failing.
>
>
> So now I have one more question. To disable the caching and aggregation
> behavior, we need to set these properties:
>
> <property>
>
> <name>fs.mapr.readbuffering</name>
>
> <value>false</value>
>
> </property>
>
>
> <property>
>
> <name>fs.mapr.aggregate.writes</name>
>
> <value>false</value>
>
> </property>
>
>
> If I set them in core-site.xml they of course work but that's a global
> setting. I want to only affect Accumulo. If I set them in accumulo-site.xml,
> I presume they take effect for normal Accumulo usage, but I'm nearly certain
> that settings in accumulo-site.xml do not affect the tests as I posted
> earlier. How can I set those two properties in a way that will cause the
> tests temporary configuration to take them into account? I tried editing
> TestUtils.py TestUtilsMixin settings as did work for the Accumulo property
> table.file.compress.type, but the MapR related properties don't seem to
> take. Ideas?
>
>
> Also, if all of the auto tests pass successfully do you feel comfortable
> that the testing was sufficient or do you recommend running additional
> tests?
>
>
> Thanks!
>
> Keys
>
> ________________________________
>
> Keys Botzum
>
> Senior Principal Technologist
>
> WW Systems Engineering
>
> kbotzum@maprtech.com
>
> 443-718-0098
>
> MapR Technologies
>
> http://www.mapr.com
>
>
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Eric and Keith,

I will attempt the additional tests you have suggested.

Any ideas on what to do regarding those configuration properties? With hbase in hbase-site.xml, we set those properties and they work fine. Is there some incantation I'm missing here? I really don't want those properties to be global as they will negatively impact performance and are only relevant to hbase and Accumulo.

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 18, 2012, at 1:42 PM, Keith Turner wrote:

> Settings in accumulo-site.xml do not end up in the hadoop config
> object, so setting them will probably have no effect.
> 
> I would suggest running continuous ingest test and random walk test if
> you really want to stress it.  These are the test we use prior to an
> accumulo release.  You would need to exclude the random walk security
> test, it triggers known bug in 1.4 that are not fixed.
> 
> Running the test on a cluster overnight would be good.
> 
> Keith
> 
> On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering, all of the Accumulo tests is test/system/auto are now passing. Note that the latelastcontact test only passes if I actually install zookeeper on the host. This is because of the dependency on zkCli.sh that I mentioned earlier.
>> 
>> The final piece of the puzzle was that MapR does aggressive read ahead caching of data as well as aggregation of writes to improve performance. As with Hbase, we don't think this type of behavior is helpful with something like Accumulo. In our specific case, the interaction between Accumulo and MapR's behavior results in the large row test failing.
>> 
>> So now I have one more question. To disable the caching and aggregation behavior, we need to set these properties:
>> <property>
>> <name>fs.mapr.readbuffering</name>
>> <value>false</value>
>> </property>
>> 
>> <property>
>> <name>fs.mapr.aggregate.writes</name>
>> <value>false</value>
>> </property>
>> 
>> If I set them in core-site.xml they of course work but that's a global setting. I want to only affect Accumulo. If I set them in accumulo-site.xml, I presume they take effect for normal Accumulo usage, but I'm nearly certain that settings in accumulo-site.xml do not affect the tests as I posted earlier. How can I set those two properties in a way that will cause the tests temporary configuration to take them into account? I tried editing TestUtils.py TestUtilsMixin settings as did work for the Accumulo property table.file.compress.type, but the MapR related properties don't seem to take. Ideas?
>> 
>> Also, if all of the auto tests pass successfully do you feel comfortable that the testing was sufficient or do you recommend running additional tests?
>> 
>> Thanks!
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com
>> 


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Settings in accumulo-site.xml do not end up in the hadoop config
object, so setting them will probably have no effect.

I would suggest running continuous ingest test and random walk test if
you really want to stress it.  These are the test we use prior to an
accumulo release.  You would need to exclude the random walk security
test, it triggers known bug in 1.4 that are not fixed.

Running the test on a cluster overnight would be good.

Keith

On Wed, Apr 18, 2012 at 1:17 PM, Keys Botzum <kb...@maprtech.com> wrote:
> Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering, all of the Accumulo tests is test/system/auto are now passing. Note that the latelastcontact test only passes if I actually install zookeeper on the host. This is because of the dependency on zkCli.sh that I mentioned earlier.
>
> The final piece of the puzzle was that MapR does aggressive read ahead caching of data as well as aggregation of writes to improve performance. As with Hbase, we don't think this type of behavior is helpful with something like Accumulo. In our specific case, the interaction between Accumulo and MapR's behavior results in the large row test failing.
>
> So now I have one more question. To disable the caching and aggregation behavior, we need to set these properties:
> <property>
> <name>fs.mapr.readbuffering</name>
> <value>false</value>
> </property>
>
> <property>
> <name>fs.mapr.aggregate.writes</name>
> <value>false</value>
> </property>
>
> If I set them in core-site.xml they of course work but that's a global setting. I want to only affect Accumulo. If I set them in accumulo-site.xml, I presume they take effect for normal Accumulo usage, but I'm nearly certain that settings in accumulo-site.xml do not affect the tests as I posted earlier. How can I set those two properties in a way that will cause the tests temporary configuration to take them into account? I tried editing TestUtils.py TestUtilsMixin settings as did work for the Accumulo property table.file.compress.type, but the MapR related properties don't seem to take. Ideas?
>
> Also, if all of the auto tests pass successfully do you feel comfortable that the testing was sufficient or do you recommend running additional tests?
>
> Thanks!
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Thanks to the help of Keith, Todd, and Eric, as well as MapR engineering, all of the Accumulo tests is test/system/auto are now passing. Note that the latelastcontact test only passes if I actually install zookeeper on the host. This is because of the dependency on zkCli.sh that I mentioned earlier.

The final piece of the puzzle was that MapR does aggressive read ahead caching of data as well as aggregation of writes to improve performance. As with Hbase, we don't think this type of behavior is helpful with something like Accumulo. In our specific case, the interaction between Accumulo and MapR's behavior results in the large row test failing. 

So now I have one more question. To disable the caching and aggregation behavior, we need to set these properties:
<property>
<name>fs.mapr.readbuffering</name>
<value>false</value>
</property>

<property>
<name>fs.mapr.aggregate.writes</name>
<value>false</value>
</property>

If I set them in core-site.xml they of course work but that's a global setting. I want to only affect Accumulo. If I set them in accumulo-site.xml, I presume they take effect for normal Accumulo usage, but I'm nearly certain that settings in accumulo-site.xml do not affect the tests as I posted earlier. How can I set those two properties in a way that will cause the tests temporary configuration to take them into account? I tried editing TestUtils.py TestUtilsMixin settings as did work for the Accumulo property table.file.compress.type, but the MapR related properties don't seem to take. Ideas?

Also, if all of the auto tests pass successfully do you feel comfortable that the testing was sufficient or do you recommend running additional tests?

Thanks!
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,


With MapR Engineering's assistance we've been able to determine that this is somehow related to MapR buffering of file reads. We'll continue working to figure out where the issue really lies and how to address it.

I'll post a followup here, hopefully soon.

Thanks again for all of your help. It is really appreciated,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com


On Apr 17, 2012, at 6:42 PM, Keith Turner wrote:

> I think it was failing for 84.  It get some data for row 84 from a
> block, but its not sure if the next block contains more data for that
> row (it does not) so it looks and when it does that fails.
> 
> The fact that you can query all rows in the local fs show that the
> file in DFS is sounds.  Also, the fact that you can query the
> individual row in DFS shows the file is sound.  The file is not
> corrupt, but corruption seems to occur under certain circumstances
> when reading the file.
> 
> The rows that are being looked up in the file are not in sorted order.
> So we are reading randomly from the file.  So it seems that seeking
> the file around alot randomly seems to eventually leave something in
> MapR in a bad state.  At this point I feel the rfile code is ok and
> that it seems like the problem is w/ MapR, because it worked in local
> fs and w/ single row in dfs.  What do you think, does this deduction
> sound reasonable?
> 
> There is one thing that is not being printed out, that is when it
> reads the rfile index.  In this case that is there its failing.  The
> following information comes from the index, but we are not printing
> any debug about reading the index blocks.
> 
>  .... DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 .....
> 
> You can see in the stack trace that failed that its was trying to read
> an index block.
> 
>  at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
> 
> Some of the previous failures you sent me occurred when it was trying
> to read a data block, not an index block.
> 
> At this point I am not sure what else I can do.  Let me know if you
> have any more questions.
> 
> Keith
> 
> 
> On Tue, Apr 17, 2012 at 6:19 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> Keith,
>> 
>> Great idea. Here are the results.
>> 
>> Run with just record 84 against Hadoop file system:
>> 
>> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 84  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
>> 17 12:22:22,428 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:22:22,685 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:22:22,690 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:22:22,696 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:22:22,696 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:22:22,718 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
>> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
>> 17 12:22:22,720 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:22:22,720 [rfile.RelativeKey] DEBUG: data = 84
>> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 
>> Notice no exception. When I ran it against the entire file, here's the last bit of output:
>> 
>> 17 12:21:39,605 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:21:39,605 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:21:39,611 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:21:39,611 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:21:39,612 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
>> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
>> 17 12:21:39,614 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:21:39,614 [rfile.RelativeKey] DEBUG: data = 84
>> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> Thread "org.apache.accumulo.server.test.functional.LargeRowDirectQuery" died null
>> java.lang.reflect.InvocationTargetException
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: invalid distance too far back
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>        at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$IndexBlock.readFields(MultiLevelIndex.java:256)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.access$100(MultiLevelIndex.java:430)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.lookup(MultiLevelIndex.java:477)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.access$400(MultiLevelIndex.java:436)
>>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.lookup(MultiLevelIndex.java:665)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._seek(RFile.java:700)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.seek(RFile.java:616)
>>        at org.apache.accumulo.core.file.rfile.RFile$Reader.seek(RFile.java:1026)
>>        at org.apache.accumulo.server.test.functional.LargeRowDirectQuery.main(LargeRowDirectQuery.java:96)
>>        ... 6 more
>> 
>> 
>> That seems to imply that the read error is just after row #84.
>> 
>> I then copied the file out of the file system onto a normal linux file system and ran the test requested:
>> 
>> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ cp /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf /tmp
>> 
>> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery file:///tmp/F000000w.rf
>> 17 12:24:02,893 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:24:03,039 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
>> 17 12:24:03,042 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,043 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,043 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,049 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q:
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
>> 17 12:24:03,051 [rfile.RelativeKey] DEBUG: len = 1
>> 17 12:24:03,051 [rfile.RelativeKey] DEBUG: data = 1
>> row # 1 top key : f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
>> 17 12:24:03,055 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
>> 17 12:24:03,056 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,056 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
>> 17 12:24:03,125 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
>> 17 12:24:03,126 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,126 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,127 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,127 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 1
>> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 9
>> row # 9 top key : bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
>> 17 12:24:03,129 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:24:03,130 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,130 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
>> 17 12:24:03,160 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
>> 17 12:24:03,160 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,161 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,163 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
>> 17 12:24:03,165 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,165 [rfile.RelativeKey] DEBUG: data = 12
>> row # 12 top key : _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
>> 17 12:24:03,165 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
>> 17 12:24:03,166 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,166 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
>> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
>> 17 12:24:03,190 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
>> 17 12:24:03,190 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,191 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,191 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
>> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,193 [rfile.RelativeKey] DEBUG: data = 15
>> row # 15 top key : ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
>> 17 12:24:03,193 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
>> 17 12:24:03,194 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,194 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
>> 17 12:24:03,240 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
>> 17 12:24:03,240 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,241 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,241 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
>> 17 12:24:03,243 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,243 [rfile.RelativeKey] DEBUG: data = 32
>> row # 32 top key : eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
>> 17 12:24:03,243 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
>> 17 12:24:03,244 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,244 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q:
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
>> 17 12:24:03,267 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
>> 17 12:24:03,267 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,268 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,268 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: data = 35
>> row # 35 top key : cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
>> 17 12:24:03,271 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
>> 17 12:24:03,271 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
>> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
>> 17 12:24:03,294 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
>> 17 12:24:03,294 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,295 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,295 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
>> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
>> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: data = 38
>> row # 38 top key : `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
>> 17 12:24:03,298 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
>> 17 12:24:03,298 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,298 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,299 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
>> 17 12:24:03,299 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
>> 17 12:24:03,316 [rfile.RFile] DEBUG: Getting block offset=16 csize=107979 rsize=131093 entries=1 key=\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> 17 12:24:03,316 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,317 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,319 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEc
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: Read ts 1334170440146
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 46
>> row # 46 top key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> 17 12:24:03,321 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
>> 17 12:24:03,322 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,322 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
>> 17 12:24:03,352 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
>> 17 12:24:03,352 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,353 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,353 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
>> 17 12:24:03,355 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,355 [rfile.RelativeKey] DEBUG: data = 58
>> row # 58 top key : fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
>> 17 12:24:03,355 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:24:03,356 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,356 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
>> 17 12:24:03,373 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
>> 17 12:24:03,373 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,374 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,374 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,375 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 61
>> row # 61 top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
>> 17 12:24:03,383 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
>> 17 12:24:03,385 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,385 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
>> 17 12:24:03,405 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
>> 17 12:24:03,406 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 69
>> row # 69 top key : `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
>> 17 12:24:03,411 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
>> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
>> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
>> 17 12:24:03,426 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
>> 17 12:24:03,426 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,427 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,427 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
>> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data = 72
>> row # 72 top key : ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
>> 17 12:24:03,432 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
>> 17 12:24:03,433 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,433 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,434 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
>> 17 12:24:03,462 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:24:03,462 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,463 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,463 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
>> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data = 84
>> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:24:03,486 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:24:03,486 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,486 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
>> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 92
>> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:24:03,491 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
>> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>> 17 12:24:03,504 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
>> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 95
>> row # 95 top key : a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
>> 17 12:24:03,516 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
>> 17 12:24:03,517 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,517 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,519 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
>> 17 12:24:03,535 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
>> 17 12:24:03,535 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:24:03,536 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,536 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
>> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
>> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: data = 98
>> row # 98 top key : ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
>> 17 12:24:03,545 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
>> 17 12:24:03,545 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
>> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
>> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
>> 
>> No error. Output looks the same until the exception.
>> 
>> Out of curiosity I took advantage of MapR's NFS access and diff'ed the two files:
>> diff /tmp/F000000w.rf /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
>> 
>> No difference.
>> 
>> Since we know there is a record 92 after 84, I also tried this just to see what happens:
>> 
>> ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92 file:///tmp/F000000w.rf
>> 17 12:32:36,433 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:32:36,664 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:32:36,669 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:32:36,671 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:32:36,671 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
>> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data = 92
>> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:32:36,696 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
>> 17 12:32:36,698 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:32:36,698 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
>> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>> 
>> And using the file within the Hadoop file system:
>> 
>> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
>> 17 12:33:38,830 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
>> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>> 17 12:33:39,104 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:33:39,108 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 17 12:33:39,113 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:33:39,114 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
>> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: len = 2
>> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: data = 92
>> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
>> 17 12:33:39,127 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
>> 17 12:33:39,133 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 17 12:33:39,133 [rfile.RelativeKey] DEBUG: len = 131072
>> 17 12:33:39,134 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
>> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>> 
>> Looks the same to me.
>> 
>> Your standalone test client is very useful. I've run it with MapR client file system tracing enabled to see if that helps to clarify the issue. I'm sharing that with MapR Engineering - the trace isn't secret or anything, I just don't think it would mean much to you.
>> 
>> Also, what sleep in the test can I increase to avoid the race condition that leads to the "table split" error? I ask because I'm trying to run the test w/o compression hoping that might clarify a few things but the false error just confuses things.
>> 
>> If you have any additional ideas or things you want to try, please let me know. I'm more than willing to run additional tests.  I will pursue in parallel with MapR engineering.
>> 
>> Once again, thanks for your help,
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com
>> 
>> 
>> 
>> On Apr 17, 2012, at 2:57 PM, Keith Turner wrote:
>> 
>>> Ok,
>>> 
>>> I modified LargeRowDirectQuery to support the local fs and optionallly
>>> lookuping an individual row like it used to.  I pushed theses changes
>>> to github.  Can you try running the following two test?
>>> 
>>>  * Lookup just row 84    with   LargeRowDirectQuery 84
>>> .../t-0000007/F000000w.rf
>>>  * Copy file to local fs and run LargeRowDirectQuery file:///tmp/F000000w.rf
>>> 
>>> The "table spit points out of range error" is just a timing issue w/
>>> the test.  I see that w/ HDFS sometimes.  There is a sleep in the test
>>> and sometimes its not long enough.   Thats ok.
>>> 
>>> Keith
>> 


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
I think it was failing for 84.  It get some data for row 84 from a
block, but its not sure if the next block contains more data for that
row (it does not) so it looks and when it does that fails.

The fact that you can query all rows in the local fs show that the
file in DFS is sounds.  Also, the fact that you can query the
individual row in DFS shows the file is sound.  The file is not
corrupt, but corruption seems to occur under certain circumstances
when reading the file.

The rows that are being looked up in the file are not in sorted order.
 So we are reading randomly from the file.  So it seems that seeking
the file around alot randomly seems to eventually leave something in
MapR in a bad state.  At this point I feel the rfile code is ok and
that it seems like the problem is w/ MapR, because it worked in local
fs and w/ single row in dfs.  What do you think, does this deduction
sound reasonable?

There is one thing that is not being printed out, that is when it
reads the rfile index.  In this case that is there its failing.  The
following information comes from the index, but we are not printing
any debug about reading the index blocks.

  .... DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 .....

You can see in the stack trace that failed that its was trying to read
an index block.

  at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)

Some of the previous failures you sent me occurred when it was trying
to read a data block, not an index block.

At this point I am not sure what else I can do.  Let me know if you
have any more questions.

Keith


On Tue, Apr 17, 2012 at 6:19 PM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> Great idea. Here are the results.
>
> Run with just record 84 against Hadoop file system:
>
> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 84  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
> 17 12:22:22,428 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:22:22,685 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:22:22,690 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:22:22,696 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:22:22,696 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:22:22,718 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: data =
> 17 12:22:22,719 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
> 17 12:22:22,720 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:22:22,720 [rfile.RelativeKey] DEBUG: data = 84
> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
>
> Notice no exception. When I ran it against the entire file, here's the last bit of output:
>
> 17 12:21:39,605 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:21:39,605 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:21:39,611 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:21:39,611 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:21:39,612 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: data =
> 17 12:21:39,613 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
> 17 12:21:39,614 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:21:39,614 [rfile.RelativeKey] DEBUG: data = 84
> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> Thread "org.apache.accumulo.server.test.functional.LargeRowDirectQuery" died null
> java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: invalid distance too far back
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>        at java.io.DataInputStream.readInt(DataInputStream.java:370)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$IndexBlock.readFields(MultiLevelIndex.java:256)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.access$100(MultiLevelIndex.java:430)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.lookup(MultiLevelIndex.java:477)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.access$400(MultiLevelIndex.java:436)
>        at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.lookup(MultiLevelIndex.java:665)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._seek(RFile.java:700)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.seek(RFile.java:616)
>        at org.apache.accumulo.core.file.rfile.RFile$Reader.seek(RFile.java:1026)
>        at org.apache.accumulo.server.test.functional.LargeRowDirectQuery.main(LargeRowDirectQuery.java:96)
>        ... 6 more
>
>
> That seems to imply that the read error is just after row #84.
>
> I then copied the file out of the file system onto a normal linux file system and ran the test requested:
>
> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ cp /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf /tmp
>
> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery file:///tmp/F000000w.rf
> 17 12:24:02,893 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:24:03,039 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
> 17 12:24:03,042 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,043 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,043 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,049 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q:
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,050 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
> 17 12:24:03,051 [rfile.RelativeKey] DEBUG: len = 1
> 17 12:24:03,051 [rfile.RelativeKey] DEBUG: data = 1
> row # 1 top key : f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
> 17 12:24:03,055 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
> 17 12:24:03,056 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,056 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,061 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
> 17 12:24:03,125 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
> 17 12:24:03,126 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,126 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,127 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,127 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 1
> 17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 9
> row # 9 top key : bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
> 17 12:24:03,129 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:24:03,130 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,130 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,131 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
> 17 12:24:03,160 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
> 17 12:24:03,160 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,161 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,163 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,164 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
> 17 12:24:03,165 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,165 [rfile.RelativeKey] DEBUG: data = 12
> row # 12 top key : _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
> 17 12:24:03,165 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
> 17 12:24:03,166 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,166 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,168 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
> 17 12:24:03,190 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
> 17 12:24:03,190 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,191 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,191 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
> 17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,193 [rfile.RelativeKey] DEBUG: data = 15
> row # 15 top key : ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
> 17 12:24:03,193 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
> 17 12:24:03,194 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,194 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,195 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
> 17 12:24:03,240 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
> 17 12:24:03,240 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,241 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,241 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,242 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
> 17 12:24:03,243 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,243 [rfile.RelativeKey] DEBUG: data = 32
> row # 32 top key : eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
> 17 12:24:03,243 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
> 17 12:24:03,244 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,244 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q:
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,245 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
> 17 12:24:03,267 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
> 17 12:24:03,267 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,268 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,268 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,270 [rfile.RelativeKey] DEBUG: data = 35
> row # 35 top key : cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
> 17 12:24:03,271 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
> 17 12:24:03,271 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
> 17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,273 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
> 17 12:24:03,294 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
> 17 12:24:03,294 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,295 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,295 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,296 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,297 [rfile.RelativeKey] DEBUG: data = 38
> row # 38 top key : `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
> 17 12:24:03,298 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
> 17 12:24:03,298 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,298 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,299 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
> 17 12:24:03,299 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,300 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
> 17 12:24:03,316 [rfile.RFile] DEBUG: Getting block offset=16 csize=107979 rsize=131093 entries=1 key=\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> 17 12:24:03,316 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,317 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,319 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEc
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: Read ts 1334170440146
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 46
> row # 46 top key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> 17 12:24:03,321 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
> 17 12:24:03,322 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,322 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,323 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
> 17 12:24:03,352 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
> 17 12:24:03,352 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,353 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,353 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,354 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
> 17 12:24:03,355 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,355 [rfile.RelativeKey] DEBUG: data = 58
> row # 58 top key : fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
> 17 12:24:03,355 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:24:03,356 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,356 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,357 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
> 17 12:24:03,373 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
> 17 12:24:03,373 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,374 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,374 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,375 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 61
> row # 61 top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
> 17 12:24:03,383 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
> 17 12:24:03,385 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,385 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,386 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
> 17 12:24:03,405 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
> 17 12:24:03,406 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,407 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 69
> row # 69 top key : `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
> 17 12:24:03,411 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
> 17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,413 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
> 17 12:24:03,426 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
> 17 12:24:03,426 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,427 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,427 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,429 [rfile.RelativeKey] DEBUG: data = 72
> row # 72 top key : ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
> 17 12:24:03,432 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
> 17 12:24:03,433 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,433 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,434 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,435 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
> 17 12:24:03,462 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:24:03,462 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,463 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,463 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,464 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,465 [rfile.RelativeKey] DEBUG: data = 84
> row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:24:03,486 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:24:03,486 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,486 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
> 17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 92
> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:24:03,491 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,492 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,493 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
> 17 12:24:03,504 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,505 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 95
> row # 95 top key : a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
> 17 12:24:03,516 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
> 17 12:24:03,517 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,517 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,518 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,519 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
> 17 12:24:03,535 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
> 17 12:24:03,535 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:24:03,536 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,536 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,538 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:24:03,539 [rfile.RelativeKey] DEBUG: data = 98
> row # 98 top key : ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
> 17 12:24:03,545 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
> 17 12:24:03,545 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
> 17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: data =
> 17 12:24:03,547 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
>
> No error. Output looks the same until the exception.
>
> Out of curiosity I took advantage of MapR's NFS access and diff'ed the two files:
> diff /tmp/F000000w.rf /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
>
> No difference.
>
> Since we know there is a record 92 after 84, I also tried this just to see what happens:
>
> ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92 file:///tmp/F000000w.rf
> 17 12:32:36,433 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:32:36,664 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:32:36,669 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:32:36,671 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:32:36,671 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,690 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:32:36,691 [rfile.RelativeKey] DEBUG: data = 92
> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:32:36,696 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
> 17 12:32:36,698 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:32:36,698 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: data =
> 17 12:32:36,707 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>
> And using the file within the Hadoop file system:
>
> /opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
> 17 12:33:38,830 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
> last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
> 17 12:33:39,104 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:33:39,108 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 17 12:33:39,113 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:33:39,114 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: len = 2
> 17 12:33:39,123 [rfile.RelativeKey] DEBUG: data = 92
> row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
> 17 12:33:39,127 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
> 17 12:33:39,133 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 17 12:33:39,133 [rfile.RelativeKey] DEBUG: len = 131072
> 17 12:33:39,134 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: data =
> 17 12:33:39,135 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
>
> Looks the same to me.
>
> Your standalone test client is very useful. I've run it with MapR client file system tracing enabled to see if that helps to clarify the issue. I'm sharing that with MapR Engineering - the trace isn't secret or anything, I just don't think it would mean much to you.
>
> Also, what sleep in the test can I increase to avoid the race condition that leads to the "table split" error? I ask because I'm trying to run the test w/o compression hoping that might clarify a few things but the false error just confuses things.
>
> If you have any additional ideas or things you want to try, please let me know. I'm more than willing to run additional tests.  I will pursue in parallel with MapR engineering.
>
> Once again, thanks for your help,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 17, 2012, at 2:57 PM, Keith Turner wrote:
>
>> Ok,
>>
>> I modified LargeRowDirectQuery to support the local fs and optionallly
>> lookuping an individual row like it used to.  I pushed theses changes
>> to github.  Can you try running the following two test?
>>
>>  * Lookup just row 84    with   LargeRowDirectQuery 84
>> .../t-0000007/F000000w.rf
>>  * Copy file to local fs and run LargeRowDirectQuery file:///tmp/F000000w.rf
>>
>> The "table spit points out of range error" is just a timing issue w/
>> the test.  I see that w/ HDFS sometimes.  There is a sleep in the test
>> and sometimes its not long enough.   Thats ok.
>>
>> Keith
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

Great idea. Here are the results. 

Run with just record 84 against Hadoop file system:

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 84  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
17 12:22:22,428 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:22:22,685 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:22:22,690 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:22:22,696 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:22:22,696 [rfile.RelativeKey] DEBUG: len = 131072
17 12:22:22,718 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
17 12:22:22,719 [rfile.RelativeKey] DEBUG: data = 
17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
17 12:22:22,719 [rfile.RelativeKey] DEBUG: data = 
17 12:22:22,719 [rfile.RelativeKey] DEBUG: len = 0
17 12:22:22,719 [rfile.RelativeKey] DEBUG: data = 
17 12:22:22,719 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 12:22:22,720 [rfile.RelativeKey] DEBUG: len = 2
17 12:22:22,720 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false

Notice no exception. When I ran it against the entire file, here's the last bit of output:

17 12:21:39,605 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:21:39,605 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:21:39,611 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:21:39,611 [rfile.RelativeKey] DEBUG: len = 131072
17 12:21:39,612 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
17 12:21:39,613 [rfile.RelativeKey] DEBUG: data = 
17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
17 12:21:39,613 [rfile.RelativeKey] DEBUG: data = 
17 12:21:39,613 [rfile.RelativeKey] DEBUG: len = 0
17 12:21:39,613 [rfile.RelativeKey] DEBUG: data = 
17 12:21:39,613 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 12:21:39,614 [rfile.RelativeKey] DEBUG: len = 2
17 12:21:39,614 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
Thread "org.apache.accumulo.server.test.functional.LargeRowDirectQuery" died null
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.start.Main$1.run(Main.java:89)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: invalid distance too far back
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at java.io.FilterInputStream.read(FilterInputStream.java:66)
	at java.io.DataInputStream.readInt(DataInputStream.java:370)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$IndexBlock.readFields(MultiLevelIndex.java:256)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.access$100(MultiLevelIndex.java:430)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.lookup(MultiLevelIndex.java:477)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.access$400(MultiLevelIndex.java:436)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.lookup(MultiLevelIndex.java:665)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._seek(RFile.java:700)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.seek(RFile.java:616)
	at org.apache.accumulo.core.file.rfile.RFile$Reader.seek(RFile.java:1026)
	at org.apache.accumulo.server.test.functional.LargeRowDirectQuery.main(LargeRowDirectQuery.java:96)
	... 6 more


That seems to imply that the read error is just after row #84.

I then copied the file out of the file system onto a normal linux file system and ran the test requested:

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ cp /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf /tmp

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery file:///tmp/F000000w.rf 
17 12:24:02,893 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:24:03,039 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 12:24:03,042 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,043 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,043 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,049 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 
17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,050 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,050 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,050 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,050 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,050 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
17 12:24:03,051 [rfile.RelativeKey] DEBUG: len = 1
17 12:24:03,051 [rfile.RelativeKey] DEBUG: data = 1
row # 1 top key : f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 12:24:03,055 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 12:24:03,056 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,056 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,061 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,061 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,061 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
17 12:24:03,125 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 12:24:03,126 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,126 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,127 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,127 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,128 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
17 12:24:03,128 [rfile.RelativeKey] DEBUG: len = 1
17 12:24:03,128 [rfile.RelativeKey] DEBUG: data = 9
row # 9 top key : bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 12:24:03,129 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:24:03,130 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,130 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,131 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,131 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,131 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 12:24:03,160 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 12:24:03,160 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,161 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,163 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,164 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,164 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,164 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
17 12:24:03,165 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,165 [rfile.RelativeKey] DEBUG: data = 12
row # 12 top key : _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 12:24:03,165 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 12:24:03,166 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,166 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,167 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,167 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,167 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,168 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,168 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,168 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,168 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
17 12:24:03,190 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 12:24:03,190 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,191 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,191 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,192 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,192 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 12:24:03,192 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,193 [rfile.RelativeKey] DEBUG: data = 15
row # 15 top key : ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 12:24:03,193 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 12:24:03,194 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,194 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,195 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,195 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,195 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
17 12:24:03,240 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 12:24:03,240 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,241 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,241 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,242 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,242 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,242 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
17 12:24:03,243 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,243 [rfile.RelativeKey] DEBUG: data = 32
row # 32 top key : eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 12:24:03,243 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 12:24:03,244 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,244 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 
17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,245 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,245 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,245 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
17 12:24:03,267 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 12:24:03,267 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,268 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,268 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,269 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,269 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,269 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,269 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,270 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,270 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
17 12:24:03,270 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,270 [rfile.RelativeKey] DEBUG: data = 35
row # 35 top key : cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 12:24:03,271 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 12:24:03,271 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,272 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
17 12:24:03,272 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,273 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,273 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,273 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,273 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,273 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
17 12:24:03,294 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 12:24:03,294 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,295 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,295 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,296 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,296 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,296 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,296 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,297 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,297 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
17 12:24:03,297 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,297 [rfile.RelativeKey] DEBUG: data = 38
row # 38 top key : `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 12:24:03,298 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 12:24:03,298 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,298 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,299 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
17 12:24:03,299 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,300 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,300 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,300 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,300 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,300 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
17 12:24:03,316 [rfile.RFile] DEBUG: Getting block offset=16 csize=107979 rsize=131093 entries=1 key=\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
17 12:24:03,316 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,317 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,319 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEc
17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,320 [rfile.RelativeKey] DEBUG: Read ts 1334170440146
17 12:24:03,320 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,320 [rfile.RelativeKey] DEBUG: data = 46
row # 46 top key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
17 12:24:03,321 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 12:24:03,322 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,322 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,323 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,323 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,323 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
17 12:24:03,352 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 12:24:03,352 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,353 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,353 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,354 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,354 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,354 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
17 12:24:03,355 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,355 [rfile.RelativeKey] DEBUG: data = 58
row # 58 top key : fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 12:24:03,355 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:24:03,356 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,356 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,357 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,357 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,357 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 12:24:03,373 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 12:24:03,373 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,374 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,374 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,375 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,376 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
17 12:24:03,376 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,376 [rfile.RelativeKey] DEBUG: data = 61
row # 61 top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 12:24:03,383 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 12:24:03,385 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,385 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,386 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,386 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,386 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
17 12:24:03,405 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 12:24:03,406 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,407 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,407 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,407 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,408 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
17 12:24:03,408 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,408 [rfile.RelativeKey] DEBUG: data = 69
row # 69 top key : `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 12:24:03,411 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 12:24:03,412 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,412 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
17 12:24:03,412 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,413 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,413 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,413 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,413 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,413 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
17 12:24:03,426 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 12:24:03,426 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,427 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,427 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,428 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,428 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,428 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,429 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,429 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,429 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
17 12:24:03,429 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,429 [rfile.RelativeKey] DEBUG: data = 72
row # 72 top key : ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 12:24:03,432 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 12:24:03,433 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,433 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,434 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,435 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,435 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,435 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,435 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,435 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 12:24:03,462 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:24:03,462 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,463 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,463 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,464 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 12:24:03,464 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,464 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,465 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,465 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,465 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 12:24:03,465 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,465 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:24:03,486 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:24:03,486 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,486 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,487 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 12:24:03,487 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,488 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 12:24:03,488 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,488 [rfile.RelativeKey] DEBUG: data = 92
row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:24:03,491 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 12:24:03,492 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,492 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,492 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,493 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,493 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,493 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,493 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,493 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
17 12:24:03,504 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 12:24:03,505 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,505 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,505 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,507 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
17 12:24:03,507 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,507 [rfile.RelativeKey] DEBUG: data = 95
row # 95 top key : a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 12:24:03,516 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 12:24:03,517 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,517 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,518 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,518 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,519 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
17 12:24:03,535 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 12:24:03,535 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:24:03,536 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,536 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,538 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,538 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,538 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,538 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,539 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,539 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
17 12:24:03,539 [rfile.RelativeKey] DEBUG: len = 2
17 12:24:03,539 [rfile.RelativeKey] DEBUG: data = 98
row # 98 top key : ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 12:24:03,545 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 12:24:03,545 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 131072
17 12:24:03,546 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
17 12:24:03,546 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,547 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,547 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,547 [rfile.RelativeKey] DEBUG: len = 0
17 12:24:03,547 [rfile.RelativeKey] DEBUG: data = 
17 12:24:03,547 [rfile.RelativeKey] DEBUG: Read ts 1334170440112

No error. Output looks the same until the exception.

Out of curiosity I took advantage of MapR's NFS access and diff'ed the two files: 
diff /tmp/F000000w.rf /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf 

No difference.

Since we know there is a record 92 after 84, I also tried this just to see what happens:

./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92 file:///tmp/F000000w.rf 
17 12:32:36,433 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:32:36,664 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:32:36,669 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:32:36,671 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:32:36,671 [rfile.RelativeKey] DEBUG: len = 131072
17 12:32:36,690 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 12:32:36,690 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,690 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,691 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,691 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,691 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 12:32:36,691 [rfile.RelativeKey] DEBUG: len = 2
17 12:32:36,691 [rfile.RelativeKey] DEBUG: data = 92
row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:32:36,696 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 12:32:36,698 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:32:36,698 [rfile.RelativeKey] DEBUG: len = 131072
17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,707 [rfile.RelativeKey] DEBUG: len = 0
17 12:32:36,707 [rfile.RelativeKey] DEBUG: data = 
17 12:32:36,707 [rfile.RelativeKey] DEBUG: Read ts 1334170440135

And using the file within the Hadoop file system:

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 92  /user/mapr/accumulo-SE-test-04-8496/tables/2/t-0000007/F000000w.rf
17 12:33:38,830 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 12:33:39,104 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:33:39,108 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 12:33:39,113 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:33:39,114 [rfile.RelativeKey] DEBUG: len = 131072
17 12:33:39,122 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,122 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,122 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,122 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,123 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,123 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 12:33:39,123 [rfile.RelativeKey] DEBUG: len = 2
17 12:33:39,123 [rfile.RelativeKey] DEBUG: data = 92
row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 12:33:39,127 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 12:33:39,133 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 12:33:39,133 [rfile.RelativeKey] DEBUG: len = 131072
17 12:33:39,134 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,135 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,135 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,135 [rfile.RelativeKey] DEBUG: len = 0
17 12:33:39,135 [rfile.RelativeKey] DEBUG: data = 
17 12:33:39,135 [rfile.RelativeKey] DEBUG: Read ts 1334170440135

Looks the same to me.

Your standalone test client is very useful. I've run it with MapR client file system tracing enabled to see if that helps to clarify the issue. I'm sharing that with MapR Engineering - the trace isn't secret or anything, I just don't think it would mean much to you.

Also, what sleep in the test can I increase to avoid the race condition that leads to the "table split" error? I ask because I'm trying to run the test w/o compression hoping that might clarify a few things but the false error just confuses things.

If you have any additional ideas or things you want to try, please let me know. I'm more than willing to run additional tests.  I will pursue in parallel with MapR engineering.

Once again, thanks for your help,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 17, 2012, at 2:57 PM, Keith Turner wrote:

> Ok,
> 
> I modified LargeRowDirectQuery to support the local fs and optionallly
> lookuping an individual row like it used to.  I pushed theses changes
> to github.  Can you try running the following two test?
> 
>  * Lookup just row 84    with   LargeRowDirectQuery 84
> .../t-0000007/F000000w.rf
>  * Copy file to local fs and run LargeRowDirectQuery file:///tmp/F000000w.rf
> 
> The "table spit points out of range error" is just a timing issue w/
> the test.  I see that w/ HDFS sometimes.  There is a sleep in the test
> and sometimes its not long enough.   Thats ok.
> 
> Keith


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Ok,

I modified LargeRowDirectQuery to support the local fs and optionallly
lookuping an individual row like it used to.  I pushed theses changes
to github.  Can you try running the following two test?

  * Lookup just row 84    with   LargeRowDirectQuery 84
.../t-0000007/F000000w.rf
  * Copy file to local fs and run LargeRowDirectQuery file:///tmp/F000000w.rf

The "table spit points out of range error" is just a timing issue w/
the test.  I see that w/ HDFS sometimes.  There is a sleep in the test
and sometimes its not long enough.   Thats ok.

Keith

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

Once again I thank you for your help. It is much appreciated.  I ran the test you suggested on the existing table and here is the outcome. I ran it three times and the result was always exactly the same - an IOException trying to read block 84.

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery  /user/mapr/accumulo-SE-test-04-28878/tables/2/t-0000007/F000000w.rf 
17 10:10:43,981 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
first key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
last key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:10:44,171 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 10:10:44,176 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,184 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,185 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,193 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 
17 10:10:44,193 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,193 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,194 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,194 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,194 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,194 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,194 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
17 10:10:44,194 [rfile.RelativeKey] DEBUG: len = 1
17 10:10:44,194 [rfile.RelativeKey] DEBUG: data = 1
row # 1 top key : f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 10:10:44,200 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 10:10:44,207 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,207 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,208 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
17 10:10:44,208 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,209 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,209 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,209 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,209 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,209 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,209 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
17 10:10:44,425 [rfile.RFile] DEBUG: Getting block offset=2376219 csize=108006 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 10:10:44,426 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,433 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,433 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,434 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
17 10:10:44,434 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,434 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,434 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,434 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,434 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,434 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,435 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
17 10:10:44,435 [rfile.RelativeKey] DEBUG: len = 1
17 10:10:44,435 [rfile.RelativeKey] DEBUG: data = 9
row # 9 top key : bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 10:10:44,435 [rfile.RFile] DEBUG: Getting block offset=2484225 csize=107991 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 10:10:44,441 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,441 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,442 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 10:10:44,442 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,443 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,443 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,443 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,443 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,443 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,443 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 10:10:44,605 [rfile.RFile] DEBUG: Getting block offset=1080062 csize=107959 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 10:10:44,606 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,612 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,612 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,613 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
17 10:10:44,614 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,614 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,614 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,614 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,614 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,614 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,614 [rfile.RelativeKey] DEBUG: Read ts 1334170440112
17 10:10:44,614 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:44,614 [rfile.RelativeKey] DEBUG: data = 12
row # 12 top key : _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 10:10:44,615 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 10:10:44,621 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,621 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,622 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
17 10:10:44,622 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,622 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,622 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,622 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,623 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,623 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,623 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
17 10:10:44,678 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 10:10:44,678 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,685 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,685 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,686 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 10:10:44,686 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,686 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,686 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,686 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,686 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,686 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,686 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 10:10:44,686 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:44,686 [rfile.RelativeKey] DEBUG: data = 15
row # 15 top key : ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 10:10:44,687 [rfile.RFile] DEBUG: Getting block offset=539977 csize=108001 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 10:10:44,688 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,688 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,688 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
17 10:10:44,688 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,689 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,689 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,689 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,689 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,689 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,689 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
17 10:10:44,816 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 10:10:44,816 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,820 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,821 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,822 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
17 10:10:44,822 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,822 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,822 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,822 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,822 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,822 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,822 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
17 10:10:44,822 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:44,823 [rfile.RelativeKey] DEBUG: data = 32
row # 32 top key : eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 10:10:44,823 [rfile.RFile] DEBUG: Getting block offset=3564353 csize=107995 rsize=131092 entries=1 key=f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 7CgMi)$q*Yz[30/HbC<Jb![2PV(<zu%2... TRUNCATED : [] 1334170440101 false
17 10:10:44,828 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,828 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,829 [rfile.RelativeKey] DEBUG: data = f,RZuff_>.36RjOcx05Y1^qA'g'$@Q: 
17 10:10:44,829 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,829 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,829 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,830 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,830 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,830 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,830 [rfile.RelativeKey] DEBUG: Read ts 1334170440101
17 10:10:44,888 [rfile.RFile] DEBUG: Getting block offset=2808288 csize=107987 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 10:10:44,888 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,893 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,893 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,894 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 10:10:44,895 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,895 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,895 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,895 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,895 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,895 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,895 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
17 10:10:44,896 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:44,896 [rfile.RelativeKey] DEBUG: data = 35
row # 35 top key : cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 10:10:44,897 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 10:10:44,901 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,901 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,902 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
17 10:10:44,902 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,902 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,903 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,903 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,903 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,903 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,903 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
17 10:10:44,958 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 10:10:44,958 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:44,963 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,964 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,964 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
17 10:10:44,964 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,965 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,965 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,965 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,965 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,965 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,965 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
17 10:10:44,965 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:44,965 [rfile.RelativeKey] DEBUG: data = 38
row # 38 top key : `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 10:10:44,966 [rfile.RFile] DEBUG: Getting block offset=1620010 csize=108006 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 10:10:44,967 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:44,967 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:44,968 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
17 10:10:44,968 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,968 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,968 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,968 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,968 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:44,968 [rfile.RelativeKey] DEBUG: data = 
17 10:10:44,968 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
17 10:10:45,004 [rfile.RFile] DEBUG: Getting block offset=16 csize=107979 rsize=131093 entries=1 key=\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
17 10:10:45,004 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,013 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,013 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,015 [rfile.RelativeKey] DEBUG: data = \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEc
17 10:10:45,015 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,015 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,015 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,015 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,015 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,015 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,015 [rfile.RelativeKey] DEBUG: Read ts 1334170440146
17 10:10:45,016 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,016 [rfile.RelativeKey] DEBUG: data = 46
row # 46 top key : \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334170440146 false
17 10:10:45,017 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 10:10:45,021 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,021 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,022 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
17 10:10:45,023 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,023 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,023 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,023 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,023 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,023 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,023 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
17 10:10:45,097 [rfile.RFile] DEBUG: Getting block offset=3888361 csize=107960 rsize=131093 entries=1 key=fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 10:10:45,097 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,101 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,101 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,102 [rfile.RelativeKey] DEBUG: data = fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl
17 10:10:45,103 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,103 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,103 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,103 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,103 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,103 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,104 [rfile.RelativeKey] DEBUG: Read ts 1334170440158
17 10:10:45,104 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,104 [rfile.RelativeKey] DEBUG: data = 58
row # 58 top key : fXK1$f#LZUJbYP@0AZ)1d H,LE>WRePl3F#2CD412Gm.-]%[:6CJe+7[a0#X%!u(... TRUNCATED : [] 1334170440158 false
17 10:10:45,105 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:10:45,110 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,110 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,111 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 10:10:45,111 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,111 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,111 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,111 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,111 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,111 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,112 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 10:10:45,151 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 10:10:45,152 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,156 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,157 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,158 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
17 10:10:45,158 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,158 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,158 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,158 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,159 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,159 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,159 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
17 10:10:45,159 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,159 [rfile.RelativeKey] DEBUG: data = 61
row # 61 top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
17 10:10:45,178 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
17 10:10:45,183 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,183 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,184 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
17 10:10:45,184 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,184 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,185 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,185 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,185 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,185 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,185 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
17 10:10:45,223 [rfile.RFile] DEBUG: Getting block offset=1188021 csize=107982 rsize=131093 entries=1 key=`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 10:10:45,224 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,228 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,228 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,229 [rfile.RelativeKey] DEBUG: data = `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(
17 10:10:45,229 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,229 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,230 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,230 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,230 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,230 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,230 [rfile.RelativeKey] DEBUG: Read ts 1334170440169
17 10:10:45,230 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,230 [rfile.RelativeKey] DEBUG: data = 69
row # 69 top key : `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334170440169 false
17 10:10:45,240 [rfile.RFile] DEBUG: Getting block offset=1512018 csize=107992 rsize=131093 entries=1 key=`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334170440138 false
17 10:10:45,247 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,247 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,248 [rfile.RelativeKey] DEBUG: data = `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161
17 10:10:45,248 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,248 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,248 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,248 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,248 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,249 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,249 [rfile.RelativeKey] DEBUG: Read ts 1334170440138
17 10:10:45,284 [rfile.RFile] DEBUG: Getting block offset=107995 csize=107979 rsize=131093 entries=1 key=]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 10:10:45,284 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,288 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,288 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,289 [rfile.RelativeKey] DEBUG: data = ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,F
17 10:10:45,290 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,290 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,290 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,290 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,290 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,290 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,290 [rfile.RelativeKey] DEBUG: Read ts 1334170440172
17 10:10:45,291 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,291 [rfile.RelativeKey] DEBUG: data = 72
row # 72 top key : ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 10:10:45,300 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 10:10:45,308 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,308 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,309 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 10:10:45,310 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,310 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,310 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,310 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,310 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,310 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,310 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 10:10:45,359 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:10:45,360 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:10:45,364 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:10:45,364 [rfile.RelativeKey] DEBUG: len = 131072
17 10:10:45,366 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 10:10:45,366 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,366 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,366 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,366 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,366 [rfile.RelativeKey] DEBUG: len = 0
17 10:10:45,366 [rfile.RelativeKey] DEBUG: data = 
17 10:10:45,366 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 10:10:45,366 [rfile.RelativeKey] DEBUG: len = 2
17 10:10:45,366 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
Thread "org.apache.accumulo.server.test.functional.LargeRowDirectQuery" died null
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.start.Main$1.run(Main.java:89)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: invalid distance too far back
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at java.io.FilterInputStream.read(FilterInputStream.java:66)
	at java.io.DataInputStream.readInt(DataInputStream.java:370)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$IndexBlock.readFields(MultiLevelIndex.java:256)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.access$100(MultiLevelIndex.java:430)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.lookup(MultiLevelIndex.java:477)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.access$400(MultiLevelIndex.java:436)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.lookup(MultiLevelIndex.java:665)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._seek(RFile.java:700)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.seek(RFile.java:616)
	at org.apache.accumulo.core.file.rfile.RFile$Reader.seek(RFile.java:1026)
	at org.apache.accumulo.server.test.functional.LargeRowDirectQuery.main(LargeRowDirectQuery.java:83)
	... 6 more


Just to check for consistent behavior, I reran the large row test to generate a new set of tables (failed in the usual place) and then ran your new test program to read through the tablet file. It also failed in the exact same place:

/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery  /user/mapr/accumulo-SE-test-04-13876/tables/2/t-0000007/F000000w.rf 

…..
17 10:20:12,078 [rfile.RFile] DEBUG: Getting block offset=431981 csize=107996 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 10:20:12,086 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:20:12,086 [rfile.RelativeKey] DEBUG: len = 131072
17 10:20:12,087 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 10:20:12,087 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,087 [rfile.RelativeKey] DEBUG: data = 
17 10:20:12,087 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,088 [rfile.RelativeKey] DEBUG: data = 
17 10:20:12,088 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,088 [rfile.RelativeKey] DEBUG: data = 

17 10:20:12,088 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 10:20:12,134 [rfile.RFile] DEBUG: Getting block offset=3996321 csize=107990 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:20:12,134 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:20:12,139 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:20:12,139 [rfile.RelativeKey] DEBUG: len = 131072
17 10:20:12,140 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 10:20:12,141 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,141 [rfile.RelativeKey] DEBUG: data = 
17 10:20:12,141 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,141 [rfile.RelativeKey] DEBUG: data = 
17 10:20:12,141 [rfile.RelativeKey] DEBUG: len = 0
17 10:20:12,141 [rfile.RelativeKey] DEBUG: data = 
17 10:20:12,141 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 10:20:12,141 [rfile.RelativeKey] DEBUG: len = 2
17 10:20:12,141 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
Thread "org.apache.accumulo.server.test.functional.LargeRowDirectQuery" died null
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.accumulo.start.Main$1.run(Main.java:89)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: invalid distance too far back
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
	at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at java.io.FilterInputStream.read(FilterInputStream.java:66)
	at java.io.DataInputStream.readInt(DataInputStream.java:370)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$IndexBlock.readFields(MultiLevelIndex.java:256)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.getIndexBlock(MultiLevelIndex.java:657)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.access$100(MultiLevelIndex.java:430)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.lookup(MultiLevelIndex.java:477)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader$Node.access$400(MultiLevelIndex.java:436)
	at org.apache.accumulo.core.file.rfile.MultiLevelIndex$Reader.lookup(MultiLevelIndex.java:665)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._seek(RFile.java:700)
	at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.seek(RFile.java:616)
	at org.apache.accumulo.core.file.rfile.RFile$Reader.seek(RFile.java:1026)
	at org.apache.accumulo.server.test.functional.LargeRowDirectQuery.main(LargeRowDirectQuery.java:83)
	... 6 more



Out of curiosity I ran both tests again with compression in Accumulo disabled (it is already disabled in MapR). My thought here is that perhaps compression obscures some of the issue. I cannot explain this but when run this way, the large row test consistently fails with a "table spit points out of range error" on the client side with nothing obvious on the server. It doesn't look like it even created the tablets. What's odd is that we aren't seeing the EOF error which is what we saw when running this test before without compression in Accumulo 1.4.0 (and I just reran 1.4.0 to validate I'm still getting EOF). 

The only difference I could think of other than debug information is that I did change the java heap settings for the test driver by editing TestUtils.py per Eric's recommendation. So just to be sure, I put those back in. And now I get the usual EOFException in readFully(). Interesting. This seems to imply some kind of very subtle race condition that is impacted by the Java heap settings (likely GC behavior). Now here's the weird part. I then ran your test program over the tablet you suggested and here's the result - notice it appears to succeed:

mapr@SE-test-04:/opt/keith-turner-accumulo-1.4.0-MapR-c9d24ff/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery  /user/mapr/accumulo-SE-test-04-1054/tables/2/t-0000007/F000000w.rf 
….
row # 72 top key : ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334170440172 false
17 10:53:48,483 [rfile.RFile] DEBUG: Getting block offset=524443 csize=131093 rsize=131093 entries=1 key=]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334170440115 false
17 10:53:48,487 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,487 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,487 [rfile.RelativeKey] DEBUG: data = ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1M
17 10:53:48,487 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,488 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,488 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,488 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,488 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,488 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,488 [rfile.RelativeKey] DEBUG: Read ts 1334170440115
17 10:53:48,545 [rfile.RFile] DEBUG: Getting block offset=4851096 csize=131093 rsize=131093 entries=1 key=gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:53:48,545 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:53:48,551 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,551 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,552 [rfile.RelativeKey] DEBUG: data = gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(
17 10:53:48,552 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,552 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,552 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,552 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,552 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,552 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,552 [rfile.RelativeKey] DEBUG: Read ts 1334170440184
17 10:53:48,553 [rfile.RelativeKey] DEBUG: len = 2
17 10:53:48,553 [rfile.RelativeKey] DEBUG: data = 84
row # 84 top key : gj<=a%6]#O^TbgU+n%cv1hX*e0V$#<3(0g0NDE8C?8:HVK;\cZG&RK3pvYSYr!;`... TRUNCATED : [] 1334170440184 false
17 10:53:48,588 [rfile.RFile] DEBUG: Getting block offset=3015559 csize=131093 rsize=131093 entries=1 key=c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 10:53:48,588 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:53:48,594 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,594 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,595 [rfile.RelativeKey] DEBUG: data = c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_Nx
17 10:53:48,595 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,595 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,595 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,595 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,595 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,595 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,595 [rfile.RelativeKey] DEBUG: Read ts 1334170440192
17 10:53:48,595 [rfile.RelativeKey] DEBUG: len = 2
17 10:53:48,595 [rfile.RelativeKey] DEBUG: data = 92
row # 92 top key : c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334170440192 false
17 10:53:48,605 [rfile.RFile] DEBUG: Getting block offset=3408897 csize=131093 rsize=131093 entries=1 key=cMCOJH <GRW'' .A)R0W2`C5d"97e_/?@.e{:LUzSMRsL[5J3BB"_V=keDKe?/Zk... TRUNCATED : [] 1334170440135 false
17 10:53:48,609 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,610 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,610 [rfile.RelativeKey] DEBUG: data = cMCOJH <GRW'' .A)R0W2`C5d"97e_/?
17 10:53:48,610 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,610 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,610 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,610 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,610 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,610 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,610 [rfile.RelativeKey] DEBUG: Read ts 1334170440135
17 10:53:48,651 [rfile.RFile] DEBUG: Getting block offset=1966639 csize=131093 rsize=131093 entries=1 key=a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 10:53:48,651 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:53:48,660 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,660 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,661 [rfile.RelativeKey] DEBUG: data = a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4
17 10:53:48,661 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,661 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,661 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,661 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,662 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,662 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,662 [rfile.RelativeKey] DEBUG: Read ts 1334170440195
17 10:53:48,662 [rfile.RelativeKey] DEBUG: len = 2
17 10:53:48,662 [rfile.RelativeKey] DEBUG: data = 95
row # 95 top key : a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334170440195 false
17 10:53:48,688 [rfile.RFile] DEBUG: Getting block offset=2884467 csize=131092 rsize=131092 entries=1 key=bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334170440109 false
17 10:53:48,690 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,690 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,690 [rfile.RelativeKey] DEBUG: data = bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^
17 10:53:48,691 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,691 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,691 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,691 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,691 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,691 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,691 [rfile.RelativeKey] DEBUG: Read ts 1334170440109
17 10:53:48,728 [rfile.RFile] DEBUG: Getting block offset=655536 csize=131093 rsize=131093 entries=1 key=^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 10:53:48,728 [rfile.RelativeKey] DEBUG: entering fastSkip()
17 10:53:48,733 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,733 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,734 [rfile.RelativeKey] DEBUG: data = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
17 10:53:48,734 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,734 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,734 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,734 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,734 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,734 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,734 [rfile.RelativeKey] DEBUG: Read ts 1334170440198
17 10:53:48,735 [rfile.RelativeKey] DEBUG: len = 2
17 10:53:48,735 [rfile.RelativeKey] DEBUG: data = 98
row # 98 top key : ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440198 false
17 10:53:48,750 [rfile.RFile] DEBUG: Getting block offset=1311115 csize=131093 rsize=131093 entries=1 key=_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334170440112 false
17 10:53:48,754 [rfile.RelativeKey] DEBUG: fieldsSame = 0
17 10:53:48,754 [rfile.RelativeKey] DEBUG: len = 131072
17 10:53:48,755 [rfile.RelativeKey] DEBUG: data = _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsU
17 10:53:48,755 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,755 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,755 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,755 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,755 [rfile.RelativeKey] DEBUG: len = 0
17 10:53:48,755 [rfile.RelativeKey] DEBUG: data = 
17 10:53:48,755 [rfile.RelativeKey] DEBUG: Read ts 1334170440112

If you compare this "success" case to the earlier failure, all the outputs are identical except for the block offset (which I think is related to compression) and size until the failure. Our of curiosity I ran your test program over .../tables/2/t-000000[0-8]/* and all succeeded.

As another test, I left in the Java heap settings, but allowed compression to be used (by editing TestUtils.py to remove the property I added to disable compression).  In this case I get the usual IOException "invalid distance too far back" when I run the large row auto test.  If I run your test client to scan the table it gives the same error as the first test on row #84 - java.io.IOException: invalid distance too far back.



Something very strange is happening here. There is an overwhelming amount of information I can send you, what would be useful? I can run any of the following tests against any build you like:
- compression on, heap for tests large
- compression on, heap default (small) for tests
- compression off, heap for tests large
- compression on, heap default (small) for tests

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 17, 2012, at 12:37 PM, Keith Turner wrote:

> The tablet server failed when it tried to do that read.  One thing of
> interest is that the numbers match up.  In the test program and the
> tablet server, the block offset and size is the same : "offset=3456401
> csize=107952 rsize=131093".  This rules out that the tablet server is
> simply reading data from the wrong location in the file.  However the
> tablet server failed when it tried to read the block at location
> 3456401.  This seems to show that sometimes the data can be read but
> other times it can not.
> 
> I modified the test program to lookup all rows that exist in the file.
> I pushed the changes to github.  This is more similar to what is
> happening on the tablet server.  You no longer need to pass a row # to
> the test program, just a file name.  Just run LargeRowDirectQuery
> <file>.  Can you try running this and see if an exception occurs?
> 
> Keith
> 
> On Mon, Apr 16, 2012 at 9:27 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> Keith,
>> 
>> As requested I have run the new test you provided.
>> 
>> Here is the output from ./run.py -t large row -v 10 -d
>> console:
>> 
>> 
>> 
>> 
>> logs:
>> 
>> 
>> 
>> 
>> Here is the output from the new test program:
>> /opt/keith-turner-accumulo-1.4.0-MapR-630654d/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 61 /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28878/tables/2/
>> 
>> 16 18:10:08,112 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> 16 18:10:08,320 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
>> 16 18:10:08,324 [rfile.RelativeKey] DEBUG: entering fastSkip()
>> 16 18:10:08,332 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 16 18:10:08,332 [rfile.RelativeKey] DEBUG: len = 131072
>> 16 18:10:08,339 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
>> 16 18:10:08,339 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 2
>> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data = 61
>> top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
>> 16 18:10:08,369 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
>> 16 18:10:08,376 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>> 16 18:10:08,376 [rfile.RelativeKey] DEBUG: len = 131072
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
>> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
>> 
>> Is this what you expected? It seems rather brief. Should I run it differently? Out of curiosity I ran it using 60, 62, and 70 as input just to see what happens:
>> 
>> 60:
>> 
>> 16 18:13:34,076 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> 
>> 62:
>> 
>> 16 18:14:07,656 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> 
>> 70:
>> 
>> 16 18:15:49,995 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> 
>> 
>> Not sure what this means.
>> 
>> 
>> 
>> 
>> Note that I ran the auto test twice. The reason I did so is because the first run produced an error I've never seen before. Here's the error:
>> …. Verify Call Count 2
>> ….
>> Creating Range at row 96 initial bytes are: )L#W+688U29C81-#4okb#-liSsd[!MB7
>> key = )L#W+688U29C81-#4okb#-liSsd[!MB7VO;"*nv/1LN546++1Vu(`deul$`h08,Z... TRUNCATED : [] 1334170440096 false
>> Creating Range at row 97 initial bytes are: N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXL
>> key = N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXLgd%J!jUC"1"b#gMAJH-+.R(Z\JE=j+fK... TRUNCATED : [] 1334170440097 false
>> Creating Range at row 98 initial bytes are: ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
>> key = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440098 false
>> Creating Range at row 99 initial bytes are: '?[3F /4g=?/#-U3IFa;.n <('vgazD9
>> key = '?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334170440099 false
>> 16 17:58:24,809 [admin.TableOperations] INFO : Problem with metadata table, it has a hole / != * ... retrying ...
>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>> java.lang.reflect.InvocationTargetException
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.Exception: # of table splits points out of range, #splits=10 table=lr min=1 max=9
>>        at org.apache.accumulo.server.test.functional.FunctionalTest.checkSplits(FunctionalTest.java:216)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test1(LargeRowTest.java:98)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:86)
>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>        ... 6 more
>> 
>> Here's the same sequence from the run that yields the "expected" error:
>> 
>> …. Verify Call Count 2
>> ….
>> DEBUG:test.auto:out: Creating Range at row 98 initial bytes are: ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
>> DEBUG:test.auto:out: key = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440098 false
>> DEBUG:test.auto:out: Creating Range at row 99 initial bytes are: '?[3F /4g=?/#-U3IFa;.n <('vgazD9
>> DEBUG:test.auto:out: key = '?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334170440099 false
>> DEBUG:test.auto:out: Verify Call Count 3
>> 
>> 
>> There is no corresponding error in the server logs that I can see but I've save them in case you are curious. I have the logs in case you are curious. This might be a spurious error but I include it just in case it gives you an idea.
>> 
>> In both runs there are numerous "incorrect header check" messages in the logs. That might be significant.
>> 
>> If you need anything, do not hesitate to ask.
>> 
>> Thanks again,
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com
>> 
>> 
>> 
>> On Apr 16, 2012, at 6:32 PM, Keith Turner wrote:
>> 


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
The tablet server failed when it tried to do that read.  One thing of
interest is that the numbers match up.  In the test program and the
tablet server, the block offset and size is the same : "offset=3456401
csize=107952 rsize=131093".  This rules out that the tablet server is
simply reading data from the wrong location in the file.  However the
tablet server failed when it tried to read the block at location
3456401.  This seems to show that sometimes the data can be read but
other times it can not.

I modified the test program to lookup all rows that exist in the file.
 I pushed the changes to github.  This is more similar to what is
happening on the tablet server.  You no longer need to pass a row # to
the test program, just a file name.  Just run LargeRowDirectQuery
<file>.  Can you try running this and see if an exception occurs?

Keith

On Mon, Apr 16, 2012 at 9:27 PM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> As requested I have run the new test you provided.
>
> Here is the output from ./run.py -t large row -v 10 -d
> console:
>
>
>
>
> logs:
>
>
>
>
> Here is the output from the new test program:
> /opt/keith-turner-accumulo-1.4.0-MapR-630654d/bin$ ./accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery 61 /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28878/tables/2/
>
> 16 18:10:08,112 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> 16 18:10:08,320 [rfile.RFile] DEBUG: Getting block offset=2916275 csize=108005 rsize=131093 entries=1 key=d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
> 16 18:10:08,324 [rfile.RelativeKey] DEBUG: entering fastSkip()
> 16 18:10:08,332 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 16 18:10:08,332 [rfile.RelativeKey] DEBUG: len = 131072
> 16 18:10:08,339 [rfile.RelativeKey] DEBUG: data = d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/
> 16 18:10:08,339 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: Read ts 1334170440161
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: len = 2
> 16 18:10:08,340 [rfile.RelativeKey] DEBUG: data = 61
> top key : d_FH?P(Wl`V.D8A4Vyx=[L!u5V6"660/=O vON3r`>I]-J-y\f8dLv[?zmE30/"M... TRUNCATED : [] 1334170440161 false
> 16 18:10:08,369 [rfile.RFile] DEBUG: Getting block offset=3456401 csize=107952 rsize=131093 entries=1 key=eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic69c?.B@KmUDhLn?OmZ&$xgwTLc{x4if:... TRUNCATED : [] 1334170440132 false
> 16 18:10:08,376 [rfile.RelativeKey] DEBUG: fieldsSame = 0
> 16 18:10:08,376 [rfile.RelativeKey] DEBUG: len = 131072
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data = eZ'n/^E<5GmL<8k2\zVe;4P"3nC<%2Ic
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: len = 0
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: data =
> 16 18:10:08,378 [rfile.RelativeKey] DEBUG: Read ts 1334170440132
>
> Is this what you expected? It seems rather brief. Should I run it differently? Out of curiosity I ran it using 60, 62, and 70 as input just to see what happens:
>
> 60:
>
> 16 18:13:34,076 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>
> 62:
>
> 16 18:14:07,656 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>
> 70:
>
> 16 18:15:49,995 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>
>
> Not sure what this means.
>
>
>
>
> Note that I ran the auto test twice. The reason I did so is because the first run produced an error I've never seen before. Here's the error:
> …. Verify Call Count 2
> ….
> Creating Range at row 96 initial bytes are: )L#W+688U29C81-#4okb#-liSsd[!MB7
> key = )L#W+688U29C81-#4okb#-liSsd[!MB7VO;"*nv/1LN546++1Vu(`deul$`h08,Z... TRUNCATED : [] 1334170440096 false
> Creating Range at row 97 initial bytes are: N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXL
> key = N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXLgd%J!jUC"1"b#gMAJH-+.R(Z\JE=j+fK... TRUNCATED : [] 1334170440097 false
> Creating Range at row 98 initial bytes are: ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
> key = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440098 false
> Creating Range at row 99 initial bytes are: '?[3F /4g=?/#-U3IFa;.n <('vgazD9
> key = '?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334170440099 false
> 16 17:58:24,809 [admin.TableOperations] INFO : Problem with metadata table, it has a hole / != * ... retrying ...
> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
> java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.Exception: # of table splits points out of range, #splits=10 table=lr min=1 max=9
>        at org.apache.accumulo.server.test.functional.FunctionalTest.checkSplits(FunctionalTest.java:216)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.test1(LargeRowTest.java:98)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:86)
>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>        ... 6 more
>
> Here's the same sequence from the run that yields the "expected" error:
>
> …. Verify Call Count 2
> ….
> DEBUG:test.auto:out: Creating Range at row 98 initial bytes are: ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_M
> DEBUG:test.auto:out: key = ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334170440098 false
> DEBUG:test.auto:out: Creating Range at row 99 initial bytes are: '?[3F /4g=?/#-U3IFa;.n <('vgazD9
> DEBUG:test.auto:out: key = '?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334170440099 false
> DEBUG:test.auto:out: Verify Call Count 3
>
>
> There is no corresponding error in the server logs that I can see but I've save them in case you are curious. I have the logs in case you are curious. This might be a spurious error but I include it just in case it gives you an idea.
>
> In both runs there are numerous "incorrect header check" messages in the logs. That might be significant.
>
> If you need anything, do not hesitate to ask.
>
> Thanks again,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 16, 2012, at 6:32 PM, Keith Turner wrote:
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

As requested I have run the new test you provided.

Here is the output from ./run.py -t large row -v 10 -d
console:



Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
I added a little program to the git repo that seek the rfiles directly
for a row.  I want to see if its possible to reproduce the problem
outside of the tablet server.   The program is called
LargeRowDirectQuery.  Looking at what you sent, the scan failed on the
61st generated row.  In the tablet server logs, I could see that it
was trying to read from
/user/mapr/accumulo-SE-test-04-15370/tables/2/t-0000007/F000000w.rf.
So you could run the following command after that test failed.

  accumulo org.apache.accumulo.server.test.functional.LargeRowDirectQuery
61 /user/mapr/accumulo-SE-test-04-15370/tables/2/t-0000007/F000000w.rf

Let me know how running this goes.

Keith

On Fri, Apr 13, 2012 at 8:22 PM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> Once again, thank you for your help. I appreciate your taking the time to create a debug version with more trace.
>
> Attached is everything I think you wanted:
>
> output from running ./run.py -t large row -v 10 -d
>
>
>
>
> contents of temporary log directory:
>
>
>
>
> If needed, I can provide whatever else you might want. As I'm using your build, compression is back on. If for some reason that makes it harder for you to debug this, let me know and I can run it again with compression off.
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 13, 2012, at 7:04 PM, Keith Turner wrote:
>
>> Keys
>>
>> I created a version of Accumulo 1.4.0 with more debugging for this
>> problem on github.  If you have changes, you can send me a pull
>> request.
>>
>>    https://github.com/keith-turner/accumulo-1.4.0-MapR
>>
>>
>> If you pull this down and run it should print info in the tablet
>> server and test.  I would really like to the see the Verify call count
>> that the test prints, because verify is called multiple times in the
>> test.  So far I do not know which one of these verify calls is failing
>> for you.
>>
>>    ./run.py -t largerow -v 10 -d
>>      .
>>      .
>>      .
>>    Verify Call Count 6
>>      .
>>      .
>>      .
>>    Creating Range at row 23 initial bytes are: YlX58$iWq'57eW:[cd@?@?OF.<GHgN )
>>    key = YlX58$iWq'57eW:[cd@?@?OF.<GHgN
>> )vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : [] 1334170440123
>> false
>>
>>
>> The above is the last scan for row YIX58..., with the added debugging
>> I can go to the tserver log and see the following info about this
>> read.  I can find when a scan was started for this range and see
>> everything that rfile did (except for index reads).
>>
>>    13 18:44:55,438 [tabletserver.TabletServer] DEBUG: Starting scan,
>> range= [YlX58$iWq'57eW:[cd@?@?OF.<GHgN
>> )vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : []
>> 9223372036854775807 false,YlX58$iWq'57eW:[cd@?@?OF.<GHgN
>> )vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : []
>> 9223372036854775807 false)
>>    13 18:44:55,465 [rfile.RFile] DEBUG: Getting block offset=6480397
>> csize=107994 rsize=131093 entries=1 key=YlX58$iWq'57eW:[cd@?@?OF.<GHgN
>> )vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : [] 1334170440123
>> false
>>    13 18:44:55,466 [rfile.RelativeKey] DEBUG: entering fastSkip()
>>    13 18:44:55,467 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>>    13 18:44:55,467 [rfile.RelativeKey] DEBUG: len = 131072
>>    13 18:44:55,469 [rfile.RelativeKey] DEBUG: data =
>> YlX58$iWq'57eW:[cd@?@?OF.<GHgN )
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: Read ts 1334170440123
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 2
>>    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data = 23
>>    13 18:44:55,472 [rfile.RFile] DEBUG: Getting block offset=6588391
>> csize=107991 rsize=131093 entries=1
>> key=Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s
>> :OKl:2"cp>]yT(ZePrtEh... TRUNCATED : [] 1334170440149 false
>>    13 18:44:55,476 [rfile.RelativeKey] DEBUG: fieldsSame = 0
>>    13 18:44:55,477 [rfile.RelativeKey] DEBUG: len = 131072
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
>> Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%T
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
>>    13 18:44:55,478 [rfile.RelativeKey] DEBUG: Read ts 1334170440149
>>    13 18:44:55,479 [data.Value] DEBUG: len = 2
>>    13 18:44:55,479 [data.Value] DEBUG: val = 49
>>    13 18:44:55,479 [tabletserver.TabletServer] DEBUG: ScanSess tid
>> 144.51.26.32:63594 2 1 entries in 0.04 secs, nbTimes = [40 40 40.00 1]
>>
>> So maybe you can run this and we can see it what it looks like for the
>> failed scan.
>>
>> Keith
>
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

Once again, thank you for your help. I appreciate your taking the time to create a debug version with more trace.

Attached is everything I think you wanted:

output from running ./run.py -t large row -v 10 -d


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Keys

I created a version of Accumulo 1.4.0 with more debugging for this
problem on github.  If you have changes, you can send me a pull
request.

    https://github.com/keith-turner/accumulo-1.4.0-MapR


If you pull this down and run it should print info in the tablet
server and test.  I would really like to the see the Verify call count
that the test prints, because verify is called multiple times in the
test.  So far I do not know which one of these verify calls is failing
for you.

    ./run.py -t largerow -v 10 -d
      .
      .
      .
    Verify Call Count 6
      .
      .
      .
    Creating Range at row 23 initial bytes are: YlX58$iWq'57eW:[cd@?@?OF.<GHgN )
    key = YlX58$iWq'57eW:[cd@?@?OF.<GHgN
)vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : [] 1334170440123
false


The above is the last scan for row YIX58..., with the added debugging
I can go to the tserver log and see the following info about this
read.  I can find when a scan was started for this range and see
everything that rfile did (except for index reads).

    13 18:44:55,438 [tabletserver.TabletServer] DEBUG: Starting scan,
range= [YlX58$iWq'57eW:[cd@?@?OF.<GHgN
)vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : []
9223372036854775807 false,YlX58$iWq'57eW:[cd@?@?OF.<GHgN
)vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : []
9223372036854775807 false)
    13 18:44:55,465 [rfile.RFile] DEBUG: Getting block offset=6480397
csize=107994 rsize=131093 entries=1 key=YlX58$iWq'57eW:[cd@?@?OF.<GHgN
)vF2;h$?Ja%aO&]LNeFdTQQP/o1#)%t1W... TRUNCATED : [] 1334170440123
false
    13 18:44:55,466 [rfile.RelativeKey] DEBUG: entering fastSkip()
    13 18:44:55,467 [rfile.RelativeKey] DEBUG: fieldsSame = 0
    13 18:44:55,467 [rfile.RelativeKey] DEBUG: len = 131072
    13 18:44:55,469 [rfile.RelativeKey] DEBUG: data =
YlX58$iWq'57eW:[cd@?@?OF.<GHgN )
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: Read ts 1334170440123
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: len = 2
    13 18:44:55,470 [rfile.RelativeKey] DEBUG: data = 23
    13 18:44:55,472 [rfile.RFile] DEBUG: Getting block offset=6588391
csize=107991 rsize=131093 entries=1
key=Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s
:OKl:2"cp>]yT(ZePrtEh... TRUNCATED : [] 1334170440149 false
    13 18:44:55,476 [rfile.RelativeKey] DEBUG: fieldsSame = 0
    13 18:44:55,477 [rfile.RelativeKey] DEBUG: len = 131072
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%T
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: len = 0
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: data =
    13 18:44:55,478 [rfile.RelativeKey] DEBUG: Read ts 1334170440149
    13 18:44:55,479 [data.Value] DEBUG: len = 2
    13 18:44:55,479 [data.Value] DEBUG: val = 49
    13 18:44:55,479 [tabletserver.TabletServer] DEBUG: ScanSess tid
144.51.26.32:63594 2 1 entries in 0.04 secs, nbTimes = [40 40 40.00 1]

So maybe you can run this and we can see it what it looks like for the
failed scan.

Keith

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,


Once again, thanks for you help. Here for reference is the output of this command:
for RF in /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort  > /tmp/sorted

As I happen to have a Mac, I used scp to copy that file to my Mac and ran md5 on it. The value was the same ( 91fae26d3b1d0ccc8b7d860a6bdb8385 ) as on the linux box.

I don't know if this is some very subtle difference in the way IO is redirected, sort works, or whatever. but hopefully the file I'm including looks like what you'd expect. Maybe diff will tell us something interesting.



Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
The md5sums of the individual files match.  I put mine inline below.
Odd that the overall md5sum did not match.  Wonder if sort is behaving
different on linux, seems unlikely.

I think a next might be to instrument the test w/ some debug to see
exactly which scan is failing.  Maybe print its range before it does
each scan.  I am curious if it always fails on the same scan.

Keith


On Thu, Apr 12, 2012 at 8:26 AM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> I've run the commands you requested. I hope this is helpful to you. By the way, the reason my path looks a little different is because I'm using MapR's NFS access which makes it a lot easier to get to files.
>
> $ ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/1/default_tablet/F0000009.rf |grep 'false ->' |cut -c 1-64 | md5sum
> Setting continuous mode
> 2012-04-12 05:08:44,2798 Program: fileclient on Host: NULL IP: 0.0.0.0, Port: 0, PID: 0
> addcfa8442914899a998d38bbf917d67  -
>
> Looks like that's the result you expect.
>
> Now for the next command:
>
> mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum
> 91fae26d3b1d0ccc8b7d860a6bdb8385  -
>
>
> As you can see this result in very different. Just to make sure I also ran this command with the same result:
> $ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum
>
> In case it might be useful I checksummed each file separately:
> $ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|cut -c 1-64 | md5sum; done
> /user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
> b0dda4e93f4fcc04a784dec8f8e9841d  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
> e0736ed51112529836253e8d0afb3253  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
> 008d03a0643cfdf83198c60ac9d45807  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
> 244f3bb7e61a30b4daed3aceb4efa7a1  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
> 7c8f4ff718b75051c4e6cb684689ca69  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
> e006faaee8f8a57c3285f7b77779d63b  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
> e914b210587161e71f398f325472e2bc  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
> 112f7d8dbe5b4cc9c971f7a7dfb56d9d  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
> df292cfffd3775a24b91fa56bd1f3d00  -
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
> a4a89e5cafbed9e89aca773f8f904b8f  -

/user/kturner/accumulo-mac1-5520/tables/2/default_tablet/F000000y.rf
b0dda4e93f4fcc04a784dec8f8e9841d
/user/kturner/accumulo-mac1-5520/tables/2/t-0000000/F000000p.rf
e0736ed51112529836253e8d0afb3253
/user/kturner/accumulo-mac1-5520/tables/2/t-0000001/F000000q.rf
008d03a0643cfdf83198c60ac9d45807
/user/kturner/accumulo-mac1-5520/tables/2/t-0000002/F000000s.rf
244f3bb7e61a30b4daed3aceb4efa7a1
/user/kturner/accumulo-mac1-5520/tables/2/t-0000003/F000000r.rf
7c8f4ff718b75051c4e6cb684689ca69
/user/kturner/accumulo-mac1-5520/tables/2/t-0000004/F000000t.rf
e006faaee8f8a57c3285f7b77779d63b
/user/kturner/accumulo-mac1-5520/tables/2/t-0000005/F000000u.rf
e914b210587161e71f398f325472e2bc
/user/kturner/accumulo-mac1-5520/tables/2/t-0000006/F000000v.rf
112f7d8dbe5b4cc9c971f7a7dfb56d9d
/user/kturner/accumulo-mac1-5520/tables/2/t-0000007/F000000w.rf
df292cfffd3775a24b91fa56bd1f3d00
/user/kturner/accumulo-mac1-5520/tables/2/t-0000008/F000000x.rf
a4a89e5cafbed9e89aca773f8f904b8f

>
> I also collected the first few lines of each file just to make sure the md5sum is on the right stuff. Does this look right to you?
> mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|head; done
> /user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
> l0\w{QJV@&S.<B8bf`#^W9aa%1"1J*$8!f2GSrAA$oGVIWqVHO<j$OM&F.54TE77... TRUNCATED : [] 1334232102926 false -> 21
> lp*C*eLap:q6/'Ut/.+t.Wt;Jk%_H=^ yiBw-1q\,]RnC('d B3W'm..WP+8[9r)... TRUNCATED : [] 1334232103163 false -> 78
> n=b\`{B!Bdv:=E:GQAhb`U_d< 6>RX$p_pJ'Gh>%>/,uI":r&g60=`]U-MSr]$i ... TRUNCATED : [] 1334232102935 false -> 18
> o;z=U'`^gr{6Z]7z"hnc-U qiT= #/7.\!"6\jNqK EC*Y#(OCO!*$eWBv+[N$_L... TRUNCATED : [] 1334232103084 false -> 44
> rHwA9Q[MVg^Jnap-hIj(5q5/#Cd{@^^%S,!mQ`.;f(\Ws#K6.[`sB5lI(MuVB^(F... TRUNCATED : [] 1334232103068 false -> 41
> sF!!vX'8{uyE/1p 9pvA^];kP/*?m5gTP9_VfbPD+v7TTnSbC6SH/Uz2=v'^3ryX... TRUNCATED : [] 1334232103090 false -> 67
> uS]C[&m^!kwa04(m$Q=$S0T?gf/F-dEeFDE,ZX[;E"%q@$5c!N{&HfdB8:?r(PIP... TRUNCATED : [] 1334232103084 false -> 64
> uoa=PnI$MC6j=O!H['C- nKGB,Fawd:=JA#>$=%b=H]gFS!+]pSHEHM)'`"-!HW%... TRUNCATED : [] 1334232102935 false -> 7
> vQeKP./>FeKGML)TQxO-4yD:8RkHF:0`CQE8[YGrfp2B!o[aJ* M5*d9Mc'QuP;p... TRUNCATED : [] 1334232103168 false -> 90
> w AY5<o*;8 JRSY"FPI').'KYwPM774v@LU:t34%WPGUFzL];,OuJX3Bj7C'*;h)... TRUNCATED : [] 1334232102940 false -> 4
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
>  )zUKu6Q,6Xv0>]F.7N '6=&Y)R?>oAJ1qlv#, jX<$ZQj+pXW2DPNn\z`Q;\-:A... TRUNCATED : [] 1334232102998 false -> 53
> !8`u;;$^Q1X.(*NT48T4OesPWA#"-W1q*[W"^ Y,QfPw\Ebci7GMh>6+LV7Y`Si?... TRUNCATED : [] 1334232102883 false -> 24
> "6 =0CNVv+xzEB7+a_2U0f\m(ub:ZBao' $is"=,rXMD=4ps6op.U^BWa#k{Qg0S... TRUNCATED : [] 1334232102994 false -> 50
> #4xO%KdP?9aENYtH2>yKYR\jA`N_+ul4$-<I,$1l7JJez#>w_K%!V#B6.MoNBgfA... TRUNCATED : [] 1334232102998 false -> 76
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
> %C?ypYJCe e.EY+/K@0x$%cH>dsfc\ENf*arht;/1`.\*FmCq+C0moW=[W_gEECW... TRUNCATED : [] 1334232103096 false -> 47
> %] zG`VVmc:!p0^LT=5IwO{o3]V]54n#f4DxF[VnIdS_$e ce(M XqIp`JGG0QQG... TRUNCATED : [] 1334232102956 false -> 16
> &A]UQ`"1B/@ebqrVxgpUM%6$kPqL4GC_c7#6!v1\RR]Eg5=%>czOZ3Ucp$YK6EcQ... TRUNCATED : [] 1334232103096 false -> 73
> '?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334232103172 false -> 99
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
> (N_S6.V@0$H!wu,OOG9&VAhs&?M#PbNYYB[;qlZ0lZIdSG=/dzd%sDuUWC/Q?8jf... TRUNCATED : [] 1334232103049 false -> 70
> (ju{+.=^\XML(47D>f8l##aX]M5D>baB]?w/;QZ*d$c$m/TAD@.Bp&X6F!%)80zb... TRUNCATED : [] 1334232102883 false -> 13
> )L#W+688U29C81-#4okb#-liSsd[!MB7VO;"*nv/1LN546++1Vu(`deul$`h08,Z... TRUNCATED : [] 1334232103147 false -> 96
> )hc' 6-39g4%1K>2kEQ7LkS>v8[ak9NMZLAePSDF)rfdNfwzmx-z]FRr[J#.)0RM... TRUNCATED : [] 1334232103042 false -> 39
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
> *c.`lDdN^N-')KrZ)GUU,?W{t<=^Z5ucS6.Y/G+&",_]YAT!"XHg-7`D-@7'-jd'... TRUNCATED : [] 1334232102804 false -> 10
> +u?FaLr_'\blFc;SV&TbU?d?E(a;+hT*PWotDI+0CzLz:0f7K4m'vWXkBi+=z"'f... TRUNCATED : [] 1334232102804 false -> 36
> ,Yd_k`'$D'\w8H{ZfOGd+]Z+ibg%> MJMZF!gc&1Lh&J5]C<ln./xu >RCa+$r%l... TRUNCATED : [] 1334232103121 false -> 93
> ,s/bBSX)LV^0c{DI'M*2"+@@^[XOX?#IMdC(EJ'edl9Gw{LB`k2.cwD1W6_ak"EJ... TRUNCATED : [] 1334232102984 false -> 62
> .&8IEv-*rQ]OZgPK,N;!IZq1[s4\H;{eGNi=9?'Y^: `'B!z*LV!2hW<<A!Un\`(... TRUNCATED : [] 1334232102817 false -> 33
> .@;bx!k<z8$Lq=E1IL&Z@(3'Pl).bnNU3kgZ_%luv>wy!aftzI>\yjG0A4[0YTWz... TRUNCATED : [] 1334232102817 false -> 2
> /$J/&!EE;KK#c"cKY-@2*GSk,K)0amPADoXz:@94#,?Qd1=M?'A {,AIQjK._\h`... TRUNCATED : [] 1334232102984 false -> 59
> 0"<sw)5a`Y^.$::K>UxbSGb*E6;n2D],A JQOB['DfIME A*h_(f M3ff7Y)P\<?... TRUNCATED : [] 1334232103115 false -> 85
> 11&PgKK3=Aa"x:T9DV:j{vxtC:kC!@.::fM?.6t/=41?PWX&y?A^8=798-W^T:L4... TRUNCATED : [] 1334232102978 false -> 56
> 2/:^\SmDbOf=9R!Bq5E#HbDm\%3BNsA>7+FVC8P$^&&`1FJ(FwN,%]'AMVokENSf... TRUNCATED : [] 1334232103106 false -> 82
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
> BDF+!ccFzK!=:iq;p*?zAA5[#@<'DI,&72Ps93/U.a$e?,r)61I_N>k\(g9VF.F1... TRUNCATED : [] 1334232103090 false -> 57
> C?WNm)i5C2,V2U"Kv+1`!)0F!X5x3EU%0xC$t'H<;08AJOD2G%o{f./ V]#,Jh.;... TRUNCATED : [] 1334232102946 false -> 28
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
> DQ@oN0?/h@YcO%TcGR[<JqXZ:/H+Lw8L-=$@-)HDH!U]+>>@\H?QgOggk*?#;h`1... TRUNCATED : [] 1334232103068 false -> 54
> EOd"C8I>1NNKX<!e,2J"s]<NSvx\yNg;*Jd>B+Dcio^(h-TB)$@$T[6a$S+;, _r... TRUNCATED : [] 1334232103163 false -> 80
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
> GJj<2FG[W5^%O(ON13]6>0cIPziuiJy-$44$"{v^c)r9,e $;`&3#`\^Q]gg/F;#... TRUNCATED : [] 1334232103024 false -> 51
> Gx@SyNCp_x7xz[JbN1N\5ZO?E+5P'!:3lQXn\b]N{Ax=n'OsC^TyjbF.VP[[vRu[... TRUNCATED : [] 1334232102846 false -> 20
> H\U)'N'_4DYfl@:o^Z1^g0bw!fOg:!> !U3c7!,#(we=US8`d<aGl$>Df*I4 Z$p... TRUNCATED : [] 1334232103133 false -> 77
> J)EU]dIpNmvV2_s,$m.T=.9<oc#UDP-TcH^\QXI6:I{k[9zR"vI3&s5a<'%N"1fd... TRUNCATED : [] 1334232102846 false -> 17
> JUNohx3g"9yD%DX;I;2tpL<68U'[VP6wsL*V+s=#B7FiUfx6BT_%)5*FMIS6)9N_... TRUNCATED : [] 1334232103036 false -> 74
> K'[F>#BG+ iL;.{\Q8J#fvk-,N1"q'*,`ieiRY;E[;Bj<(.A7Q8_o7[\QPg<[DcF... TRUNCATED : [] 1334232103018 false -> 43
> L"R:B2!WPc&T3vn*k::'FJ3+*R'0`#-vYSFaEN^^TQkiG__ZH1w`+'e#7Gk4s#-t... TRUNCATED : [] 1334232102858 false -> 14
> M4Cf#97"u]fFP2#d<u=8oJ$BW>Gv1V0gV`RRFOf[uCd0(N]nqiD],HAfLpG3d#[M... TRUNCATED : [] 1334232103004 false -> 40
> N2[#tA9K>koomJ,Ti@_S<6,9p)o%J,#hS%c%[Q"(:5]_e=/w>EUXuTX[a=c@U#Ap... TRUNCATED : [] 1334232103036 false -> 66
> N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXLgd%J!jUC"1"b#gMAJH-+.R(Z\JE=j+fK... TRUNCATED : [] 1334232103147 false -> 97
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
> \wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334232103049 false -> 46
> ]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334232103063 false -> 72
> ]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334232102899 false -> 15
> ^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334232103158 false -> 98
> _B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334232102899 false -> 12
> `&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334232103063 false -> 69
> `T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334232102912 false -> 38
> a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334232103158 false -> 95
> bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334232102912 false -> 9
> c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334232103153 false -> 92
> /user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
> iennd4L\IJ{GYSC^s&b\X;kDb4oco8!N*QPL8:^0MO"+v'GSuN>4!<TbGc#<u[td... TRUNCATED : [] 1334232103004 false -> 55
> jc$DE;4v&E%IvkvdDa;,9<:h3 =M@kfk'^va9;x0Z@!!W^JyB*!=j\bc\0INf[$L... TRUNCATED : [] 1334232103126 false -> 81
>
>
>
> This doesn't mean much to me, but based on your earlier point I think this is an issue. What else can I gather for you?
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 11, 2012, at 1:04 PM, Keith Turner wrote:
>
>> I generated the hash for the second table, and its the same as the
>> first.  Makes sense, its the same data just split differently.  The
>> reason I did the sort is because there are multiple files.  Depending
>> on where the test fails, the second table may or may not have data.
>>
>> $ for RF in `hadoop fs -ls
>> /user/kturner/accumulo-mac1-5520/tables/2/*/*.rf | awk '{print $NF}'`;
>> do ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
>> -d $RF; done | grep 'false ->' | cut -c 1-64 | sort | md5
>> addcfa8442914899a998d38bbf917d67
>>
>> Keith
>>
>> On Wed, Apr 11, 2012 at 12:35 PM, Keith Turner <ke...@deenlo.com> wrote:
>>> Keys
>>>
>>> This test uses a random number generator w/ a seed, so the test should
>>> always generate the same data.  I ran the test twice in dirty mode and
>>> then generated an md5 hash of the data.  Both times the hash was the
>>> same.  Can you try to do this and see if you get the same hash.
>>>
>>> $ ./run.py -d -t largerow
>>> $ ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
>>> -d /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>>> | grep 'false ->' | cut -c 1-64 | md5
>>> addcfa8442914899a998d38bbf917d67
>>>
>>> I did the grep to only get key values, the print info command prints
>>> some summary info.  I did the cut inorder to just get row data,
>>> including the timestamp would make the md5sum always change. I ran
>>> this on a mac, I think on linux you will need to run md5sum.
>>>
>>> The test creates two tables.  The md5 is for first table which seems
>>> to have just one file.  I am seeing multiple files in the second
>>> table.  I will put together a command to md5sum the second table and
>>> send that shortly.
>>>
>>> $ hadoop fs -lsr /user/kturner/accumulo-mac1-5520/tables/1
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet
>>> -rw-r--r--   3 kturner supergroup   32617935 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000a
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000b
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000c
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000d
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000e
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000f
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000g
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000h
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000i
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000j
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000k
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000l
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000m
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000n
>>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000o
>>>
>>> Keith
>>>
>>> On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kb...@maprtech.com> wrote:
>>>> Keith,
>>>>
>>>> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>>>>
>>>> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>>>>
>>>>
>>>> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
>>>> 3.89547MB (14570456),Memory=0.0MB (0)
>>>> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>>> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
>>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
>>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
>>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
>>>> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
>>>> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
>>>> RUNCATED<
>>>> java.io.EOFException
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>
>>>>
>>>> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>>>>
>>>>
>>>> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>>>>
>>>> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
>>>> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
>>>>  secs, nbTimes = [23 23 23.00 1]
>>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
>>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
>>>> P"RsUOI
>>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
>>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
>>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
>>>> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
>>>> SE-test-04-32318/tables/2/t-0000000/F000000q.rf
>>>> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
>>>> ;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>>> java.io.EOFException
>>>>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>>>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>>>>       …..
>>>>
>>>> So it looks like we are missing quite a bit of data.
>>>>
>>>> Any help or ideas appreciated.
>>>>
>>>> Thanks,
>>>> Keys
>>>> ________________________________
>>>> Keys Botzum
>>>> Senior Principal Technologist
>>>> WW Systems Engineering
>>>> kbotzum@maprtech.com
>>>> 443-718-0098
>>>> MapR Technologies
>>>> http://www.mapr.com
>>>>
>>>>
>>>>
>>>> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>>>>
>>>>> Keys,
>>>>>
>>>>> Looking at the test, it writes out random rows that 128k in len.  The
>>>>> column family and column qualifier it writes out are 0 bytes long.
>>>>> When the non compression test failed, it was trying to read a column
>>>>> qualifier.  If we assume that it was reading a column qualifier from
>>>>> the test table then it should be calling readFully() with a zero
>>>>> length array.
>>>>>
>>>>> Trying to think how to debug this.  One way may be to change the code
>>>>> in RelativeKey to the following and run the test.  This will show us
>>>>> what its trying to do right before it hits the eof, but it will also
>>>>> generate a lot of noise as things scan the metadata table.
>>>>>
>>>>>  private byte[] read(DataInput in) throws IOException {
>>>>>    int len = WritableUtils.readVInt(in);
>>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>>>>    byte[] data = new byte[len];
>>>>>    in.readFully(data);
>>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>>>>> new String(data).substring(0, Math.min(data.length, 60)));
>>>>>    return data;
>>>>>  }
>>>>>
>>>>> Keith
>>>>>
>>>>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>>>>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>>>>
>>>>>> When I run it, this is the output I see:
>>>>>> ./run.py -t largerowtest -d -v10
>>>>>> ….
>>>>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>>>>> DEBUG:test.auto:{
>>>>>> 'tserver.compaction.major.delay':'1',
>>>>>> }
>>>>>>
>>>>>> DEBUG:test.auto:
>>>>>> INFO:test.auto:killing accumulo processes everywhere
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>>>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>>>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>>>>> DEBUG:test.auto:Exit code: 255
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>>>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>>>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>>>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>>>>> Instance name : SE-test-04-22187
>>>>>> Enter initial password for root: ******
>>>>>> Confirm initial password for root: ******
>>>>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>>>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>>>>> DEBUG:test.auto:Exit code: 0
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>>>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>>>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>>>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>>>>> DEBUG:test.auto:Exit code: 0
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>>>>> DEBUG:test.auto:
>>>>>> DEBUG:test.auto:
>>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>>>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>>>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>> DEBUG:test.auto:err:
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>>        ... 6 more
>>>>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>>        ... 11 more
>>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>> DEBUG:test.auto:err:
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>>        ... 13 more
>>>>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>>> java.lang.reflect.InvocationTargetException
>>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>>        ... 6 more
>>>>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>>        ... 11 more
>>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>>        ... 13 more
>>>>>>
>>>>>> FAIL
>>>>>> ======================================================================
>>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>>> ----------------------------------------------------------------------
>>>>>> Traceback (most recent call last):
>>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>>> AssertionError: False is not true
>>>>>>
>>>>>>
>>>>>> ======================================================================
>>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>>> ----------------------------------------------------------------------
>>>>>> Traceback (most recent call last):
>>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>>> AssertionError: False is not true
>>>>>>
>>>>>> ----------------------------------------------------------------------
>>>>>> Ran 1 test in 43.014s
>>>>>>
>>>>>> FAILED (failures=1)
>>>>>>
>>>>>>
>>>>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>>>>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>>>>>>
>>>>>>
>>>>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>>>>> java.io.IOException: invalid distance too far back
>>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>>        ... 15 more
>>>>>> Caused by: java.io.IOException: invalid distance too far back
>>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        ... 1 more
>>>>>>
>>>>>>
>>>>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>>>>>>
>>>>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>>>>
>>>>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>>>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>>>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>>>>> java.io.EOFException
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>>        ... 15 more
>>>>>> Caused by: java.io.EOFException
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>>        ... 15 more
>>>>>> Caused by: java.io.EOFException
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>        ... 1 more
>>>>>>
>>>>>>
>>>>>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>>>>> 1) the file was originally written incorrectly by the writer,
>>>>>> 2) the reader is reading too far
>>>>>>
>>>>>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>>>>>>
>>>>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>>>>
>>>>>> Thanks,
>>>>>> Keys
>>>>>> ________________________________
>>>>>> Keys Botzum
>>>>>> Senior Principal Technologist
>>>>>> WW Systems Engineering
>>>>>> kbotzum@maprtech.com
>>>>>> 443-718-0098
>>>>>> MapR Technologies
>>>>>> http://www.mapr.com
>>>>
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

I've run the commands you requested. I hope this is helpful to you. By the way, the reason my path looks a little different is because I'm using MapR's NFS access which makes it a lot easier to get to files.

$ ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/1/default_tablet/F0000009.rf |grep 'false ->' |cut -c 1-64 | md5sum
Setting continuous mode
2012-04-12 05:08:44,2798 Program: fileclient on Host: NULL IP: 0.0.0.0, Port: 0, PID: 0
addcfa8442914899a998d38bbf917d67  -

Looks like that's the result you expect.

Now for the next command:

mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum
91fae26d3b1d0ccc8b7d860a6bdb8385  -


As you can see this result in very different. Just to make sure I also ran this command with the same result:
$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum

In case it might be useful I checksummed each file separately:
$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|cut -c 1-64 | md5sum; done
/user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
b0dda4e93f4fcc04a784dec8f8e9841d  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
e0736ed51112529836253e8d0afb3253  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
008d03a0643cfdf83198c60ac9d45807  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
244f3bb7e61a30b4daed3aceb4efa7a1  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
7c8f4ff718b75051c4e6cb684689ca69  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
e006faaee8f8a57c3285f7b77779d63b  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
e914b210587161e71f398f325472e2bc  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
112f7d8dbe5b4cc9c971f7a7dfb56d9d  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
df292cfffd3775a24b91fa56bd1f3d00  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
a4a89e5cafbed9e89aca773f8f904b8f  -

I also collected the first few lines of each file just to make sure the md5sum is on the right stuff. Does this look right to you?
mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|head; done
/user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
l0\w{QJV@&S.<B8bf`#^W9aa%1"1J*$8!f2GSrAA$oGVIWqVHO<j$OM&F.54TE77... TRUNCATED : [] 1334232102926 false -> 21
lp*C*eLap:q6/'Ut/.+t.Wt;Jk%_H=^ yiBw-1q\,]RnC('d B3W'm..WP+8[9r)... TRUNCATED : [] 1334232103163 false -> 78
n=b\`{B!Bdv:=E:GQAhb`U_d< 6>RX$p_pJ'Gh>%>/,uI":r&g60=`]U-MSr]$i ... TRUNCATED : [] 1334232102935 false -> 18
o;z=U'`^gr{6Z]7z"hnc-U qiT= #/7.\!"6\jNqK EC*Y#(OCO!*$eWBv+[N$_L... TRUNCATED : [] 1334232103084 false -> 44
rHwA9Q[MVg^Jnap-hIj(5q5/#Cd{@^^%S,!mQ`.;f(\Ws#K6.[`sB5lI(MuVB^(F... TRUNCATED : [] 1334232103068 false -> 41
sF!!vX'8{uyE/1p 9pvA^];kP/*?m5gTP9_VfbPD+v7TTnSbC6SH/Uz2=v'^3ryX... TRUNCATED : [] 1334232103090 false -> 67
uS]C[&m^!kwa04(m$Q=$S0T?gf/F-dEeFDE,ZX[;E"%q@$5c!N{&HfdB8:?r(PIP... TRUNCATED : [] 1334232103084 false -> 64
uoa=PnI$MC6j=O!H['C- nKGB,Fawd:=JA#>$=%b=H]gFS!+]pSHEHM)'`"-!HW%... TRUNCATED : [] 1334232102935 false -> 7
vQeKP./>FeKGML)TQxO-4yD:8RkHF:0`CQE8[YGrfp2B!o[aJ* M5*d9Mc'QuP;p... TRUNCATED : [] 1334232103168 false -> 90
w AY5<o*;8 JRSY"FPI').'KYwPM774v@LU:t34%WPGUFzL];,OuJX3Bj7C'*;h)... TRUNCATED : [] 1334232102940 false -> 4
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
 )zUKu6Q,6Xv0>]F.7N '6=&Y)R?>oAJ1qlv#, jX<$ZQj+pXW2DPNn\z`Q;\-:A... TRUNCATED : [] 1334232102998 false -> 53
!8`u;;$^Q1X.(*NT48T4OesPWA#"-W1q*[W"^ Y,QfPw\Ebci7GMh>6+LV7Y`Si?... TRUNCATED : [] 1334232102883 false -> 24
"6 =0CNVv+xzEB7+a_2U0f\m(ub:ZBao' $is"=,rXMD=4ps6op.U^BWa#k{Qg0S... TRUNCATED : [] 1334232102994 false -> 50
#4xO%KdP?9aENYtH2>yKYR\jA`N_+ul4$-<I,$1l7JJez#>w_K%!V#B6.MoNBgfA... TRUNCATED : [] 1334232102998 false -> 76
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
%C?ypYJCe e.EY+/K@0x$%cH>dsfc\ENf*arht;/1`.\*FmCq+C0moW=[W_gEECW... TRUNCATED : [] 1334232103096 false -> 47
%] zG`VVmc:!p0^LT=5IwO{o3]V]54n#f4DxF[VnIdS_$e ce(M XqIp`JGG0QQG... TRUNCATED : [] 1334232102956 false -> 16
&A]UQ`"1B/@ebqrVxgpUM%6$kPqL4GC_c7#6!v1\RR]Eg5=%>czOZ3Ucp$YK6EcQ... TRUNCATED : [] 1334232103096 false -> 73
'?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334232103172 false -> 99
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
(N_S6.V@0$H!wu,OOG9&VAhs&?M#PbNYYB[;qlZ0lZIdSG=/dzd%sDuUWC/Q?8jf... TRUNCATED : [] 1334232103049 false -> 70
(ju{+.=^\XML(47D>f8l##aX]M5D>baB]?w/;QZ*d$c$m/TAD@.Bp&X6F!%)80zb... TRUNCATED : [] 1334232102883 false -> 13
)L#W+688U29C81-#4okb#-liSsd[!MB7VO;"*nv/1LN546++1Vu(`deul$`h08,Z... TRUNCATED : [] 1334232103147 false -> 96
)hc' 6-39g4%1K>2kEQ7LkS>v8[ak9NMZLAePSDF)rfdNfwzmx-z]FRr[J#.)0RM... TRUNCATED : [] 1334232103042 false -> 39
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
*c.`lDdN^N-')KrZ)GUU,?W{t<=^Z5ucS6.Y/G+&",_]YAT!"XHg-7`D-@7'-jd'... TRUNCATED : [] 1334232102804 false -> 10
+u?FaLr_'\blFc;SV&TbU?d?E(a;+hT*PWotDI+0CzLz:0f7K4m'vWXkBi+=z"'f... TRUNCATED : [] 1334232102804 false -> 36
,Yd_k`'$D'\w8H{ZfOGd+]Z+ibg%> MJMZF!gc&1Lh&J5]C<ln./xu >RCa+$r%l... TRUNCATED : [] 1334232103121 false -> 93
,s/bBSX)LV^0c{DI'M*2"+@@^[XOX?#IMdC(EJ'edl9Gw{LB`k2.cwD1W6_ak"EJ... TRUNCATED : [] 1334232102984 false -> 62
.&8IEv-*rQ]OZgPK,N;!IZq1[s4\H;{eGNi=9?'Y^: `'B!z*LV!2hW<<A!Un\`(... TRUNCATED : [] 1334232102817 false -> 33
.@;bx!k<z8$Lq=E1IL&Z@(3'Pl).bnNU3kgZ_%luv>wy!aftzI>\yjG0A4[0YTWz... TRUNCATED : [] 1334232102817 false -> 2
/$J/&!EE;KK#c"cKY-@2*GSk,K)0amPADoXz:@94#,?Qd1=M?'A {,AIQjK._\h`... TRUNCATED : [] 1334232102984 false -> 59
0"<sw)5a`Y^.$::K>UxbSGb*E6;n2D],A JQOB['DfIME A*h_(f M3ff7Y)P\<?... TRUNCATED : [] 1334232103115 false -> 85
11&PgKK3=Aa"x:T9DV:j{vxtC:kC!@.::fM?.6t/=41?PWX&y?A^8=798-W^T:L4... TRUNCATED : [] 1334232102978 false -> 56
2/:^\SmDbOf=9R!Bq5E#HbDm\%3BNsA>7+FVC8P$^&&`1FJ(FwN,%]'AMVokENSf... TRUNCATED : [] 1334232103106 false -> 82
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
BDF+!ccFzK!=:iq;p*?zAA5[#@<'DI,&72Ps93/U.a$e?,r)61I_N>k\(g9VF.F1... TRUNCATED : [] 1334232103090 false -> 57
C?WNm)i5C2,V2U"Kv+1`!)0F!X5x3EU%0xC$t'H<;08AJOD2G%o{f./ V]#,Jh.;... TRUNCATED : [] 1334232102946 false -> 28
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
DQ@oN0?/h@YcO%TcGR[<JqXZ:/H+Lw8L-=$@-)HDH!U]+>>@\H?QgOggk*?#;h`1... TRUNCATED : [] 1334232103068 false -> 54
EOd"C8I>1NNKX<!e,2J"s]<NSvx\yNg;*Jd>B+Dcio^(h-TB)$@$T[6a$S+;, _r... TRUNCATED : [] 1334232103163 false -> 80
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
GJj<2FG[W5^%O(ON13]6>0cIPziuiJy-$44$"{v^c)r9,e $;`&3#`\^Q]gg/F;#... TRUNCATED : [] 1334232103024 false -> 51
Gx@SyNCp_x7xz[JbN1N\5ZO?E+5P'!:3lQXn\b]N{Ax=n'OsC^TyjbF.VP[[vRu[... TRUNCATED : [] 1334232102846 false -> 20
H\U)'N'_4DYfl@:o^Z1^g0bw!fOg:!> !U3c7!,#(we=US8`d<aGl$>Df*I4 Z$p... TRUNCATED : [] 1334232103133 false -> 77
J)EU]dIpNmvV2_s,$m.T=.9<oc#UDP-TcH^\QXI6:I{k[9zR"vI3&s5a<'%N"1fd... TRUNCATED : [] 1334232102846 false -> 17
JUNohx3g"9yD%DX;I;2tpL<68U'[VP6wsL*V+s=#B7FiUfx6BT_%)5*FMIS6)9N_... TRUNCATED : [] 1334232103036 false -> 74
K'[F>#BG+ iL;.{\Q8J#fvk-,N1"q'*,`ieiRY;E[;Bj<(.A7Q8_o7[\QPg<[DcF... TRUNCATED : [] 1334232103018 false -> 43
L"R:B2!WPc&T3vn*k::'FJ3+*R'0`#-vYSFaEN^^TQkiG__ZH1w`+'e#7Gk4s#-t... TRUNCATED : [] 1334232102858 false -> 14
M4Cf#97"u]fFP2#d<u=8oJ$BW>Gv1V0gV`RRFOf[uCd0(N]nqiD],HAfLpG3d#[M... TRUNCATED : [] 1334232103004 false -> 40
N2[#tA9K>koomJ,Ti@_S<6,9p)o%J,#hS%c%[Q"(:5]_e=/w>EUXuTX[a=c@U#Ap... TRUNCATED : [] 1334232103036 false -> 66
N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXLgd%J!jUC"1"b#gMAJH-+.R(Z\JE=j+fK... TRUNCATED : [] 1334232103147 false -> 97
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334232103049 false -> 46
]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334232103063 false -> 72
]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334232102899 false -> 15
^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334232103158 false -> 98
_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334232102899 false -> 12
`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334232103063 false -> 69
`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334232102912 false -> 38
a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334232103158 false -> 95
bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334232102912 false -> 9
c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334232103153 false -> 92
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
iennd4L\IJ{GYSC^s&b\X;kDb4oco8!N*QPL8:^0MO"+v'GSuN>4!<TbGc#<u[td... TRUNCATED : [] 1334232103004 false -> 55
jc$DE;4v&E%IvkvdDa;,9<:h3 =M@kfk'^va9;x0Z@!!W^JyB*!=j\bc\0INf[$L... TRUNCATED : [] 1334232103126 false -> 81



This doesn't mean much to me, but based on your earlier point I think this is an issue. What else can I gather for you?

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 11, 2012, at 1:04 PM, Keith Turner wrote:

> I generated the hash for the second table, and its the same as the
> first.  Makes sense, its the same data just split differently.  The
> reason I did the sort is because there are multiple files.  Depending
> on where the test fails, the second table may or may not have data.
> 
> $ for RF in `hadoop fs -ls
> /user/kturner/accumulo-mac1-5520/tables/2/*/*.rf | awk '{print $NF}'`;
> do ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
> -d $RF; done | grep 'false ->' | cut -c 1-64 | sort | md5
> addcfa8442914899a998d38bbf917d67
> 
> Keith
> 
> On Wed, Apr 11, 2012 at 12:35 PM, Keith Turner <ke...@deenlo.com> wrote:
>> Keys
>> 
>> This test uses a random number generator w/ a seed, so the test should
>> always generate the same data.  I ran the test twice in dirty mode and
>> then generated an md5 hash of the data.  Both times the hash was the
>> same.  Can you try to do this and see if you get the same hash.
>> 
>> $ ./run.py -d -t largerow
>> $ ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
>> -d /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>> | grep 'false ->' | cut -c 1-64 | md5
>> addcfa8442914899a998d38bbf917d67
>> 
>> I did the grep to only get key values, the print info command prints
>> some summary info.  I did the cut inorder to just get row data,
>> including the timestamp would make the md5sum always change. I ran
>> this on a mac, I think on linux you will need to run md5sum.
>> 
>> The test creates two tables.  The md5 is for first table which seems
>> to have just one file.  I am seeing multiple files in the second
>> table.  I will put together a command to md5sum the second table and
>> send that shortly.
>> 
>> $ hadoop fs -lsr /user/kturner/accumulo-mac1-5520/tables/1
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet
>> -rw-r--r--   3 kturner supergroup   32617935 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000a
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000b
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000c
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000d
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000e
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000f
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000g
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000h
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000i
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000j
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000k
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000l
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000m
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000n
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000o
>> 
>> Keith
>> 
>> On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kb...@maprtech.com> wrote:
>>> Keith,
>>> 
>>> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>>> 
>>> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>>> 
>>> 
>>> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
>>> 3.89547MB (14570456),Memory=0.0MB (0)
>>> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
>>> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
>>> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
>>> RUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 
>>> 
>>> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>>> 
>>> 
>>> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>>> 
>>> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
>>> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
>>>  secs, nbTimes = [23 23 23.00 1]
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
>>> P"RsUOI
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
>>> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
>>> SE-test-04-32318/tables/2/t-0000000/F000000q.rf
>>> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
>>> ;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>>>       …..
>>> 
>>> So it looks like we are missing quite a bit of data.
>>> 
>>> Any help or ideas appreciated.
>>> 
>>> Thanks,
>>> Keys
>>> ________________________________
>>> Keys Botzum
>>> Senior Principal Technologist
>>> WW Systems Engineering
>>> kbotzum@maprtech.com
>>> 443-718-0098
>>> MapR Technologies
>>> http://www.mapr.com
>>> 
>>> 
>>> 
>>> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>>> 
>>>> Keys,
>>>> 
>>>> Looking at the test, it writes out random rows that 128k in len.  The
>>>> column family and column qualifier it writes out are 0 bytes long.
>>>> When the non compression test failed, it was trying to read a column
>>>> qualifier.  If we assume that it was reading a column qualifier from
>>>> the test table then it should be calling readFully() with a zero
>>>> length array.
>>>> 
>>>> Trying to think how to debug this.  One way may be to change the code
>>>> in RelativeKey to the following and run the test.  This will show us
>>>> what its trying to do right before it hits the eof, but it will also
>>>> generate a lot of noise as things scan the metadata table.
>>>> 
>>>>  private byte[] read(DataInput in) throws IOException {
>>>>    int len = WritableUtils.readVInt(in);
>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>>>    byte[] data = new byte[len];
>>>>    in.readFully(data);
>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>>>> new String(data).substring(0, Math.min(data.length, 60)));
>>>>    return data;
>>>>  }
>>>> 
>>>> Keith
>>>> 
>>>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>>>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>>> 
>>>>> When I run it, this is the output I see:
>>>>> ./run.py -t largerowtest -d -v10
>>>>> ….
>>>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>>>> DEBUG:test.auto:{
>>>>> 'tserver.compaction.major.delay':'1',
>>>>> }
>>>>> 
>>>>> DEBUG:test.auto:
>>>>> INFO:test.auto:killing accumulo processes everywhere
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>>>> DEBUG:test.auto:Exit code: 255
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>>>> Instance name : SE-test-04-22187
>>>>> Enter initial password for root: ******
>>>>> Confirm initial password for root: ******
>>>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>>>> DEBUG:test.auto:Exit code: 0
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>>>> DEBUG:test.auto:Exit code: 0
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>>>> DEBUG:test.auto:
>>>>> DEBUG:test.auto:
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>> DEBUG:test.auto:err:
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>        ... 6 more
>>>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>        ... 11 more
>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>> DEBUG:test.auto:err:
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>        ... 13 more
>>>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>> java.lang.reflect.InvocationTargetException
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>        ... 6 more
>>>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>        ... 11 more
>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>        ... 13 more
>>>>> 
>>>>> FAIL
>>>>> ======================================================================
>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>> ----------------------------------------------------------------------
>>>>> Traceback (most recent call last):
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>> AssertionError: False is not true
>>>>> 
>>>>> 
>>>>> ======================================================================
>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>> ----------------------------------------------------------------------
>>>>> Traceback (most recent call last):
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>> AssertionError: False is not true
>>>>> 
>>>>> ----------------------------------------------------------------------
>>>>> Ran 1 test in 43.014s
>>>>> 
>>>>> FAILED (failures=1)
>>>>> 
>>>>> 
>>>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>>>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>>>>> 
>>>>> 
>>>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>>>> java.io.IOException: invalid distance too far back
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        ... 1 more
>>>>> 
>>>>> 
>>>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>>>>> 
>>>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>>> 
>>>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>>>> java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        ... 1 more
>>>>> 
>>>>> 
>>>>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>>>> 1) the file was originally written incorrectly by the writer,
>>>>> 2) the reader is reading too far
>>>>> 
>>>>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>>>>> 
>>>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>>> 
>>>>> Thanks,
>>>>> Keys
>>>>> ________________________________
>>>>> Keys Botzum
>>>>> Senior Principal Technologist
>>>>> WW Systems Engineering
>>>>> kbotzum@maprtech.com
>>>>> 443-718-0098
>>>>> MapR Technologies
>>>>> http://www.mapr.com
>>> 


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
I generated the hash for the second table, and its the same as the
first.  Makes sense, its the same data just split differently.  The
reason I did the sort is because there are multiple files.  Depending
on where the test fails, the second table may or may not have data.

$ for RF in `hadoop fs -ls
/user/kturner/accumulo-mac1-5520/tables/2/*/*.rf | awk '{print $NF}'`;
do ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
-d $RF; done | grep 'false ->' | cut -c 1-64 | sort | md5
addcfa8442914899a998d38bbf917d67

Keith

On Wed, Apr 11, 2012 at 12:35 PM, Keith Turner <ke...@deenlo.com> wrote:
> Keys
>
> This test uses a random number generator w/ a seed, so the test should
> always generate the same data.  I ran the test twice in dirty mode and
> then generated an md5 hash of the data.  Both times the hash was the
> same.  Can you try to do this and see if you get the same hash.
>
> $ ./run.py -d -t largerow
> $ ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
> -d /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
> | grep 'false ->' | cut -c 1-64 | md5
> addcfa8442914899a998d38bbf917d67
>
> I did the grep to only get key values, the print info command prints
> some summary info.  I did the cut inorder to just get row data,
> including the timestamp would make the md5sum always change. I ran
> this on a mac, I think on linux you will need to run md5sum.
>
> The test creates two tables.  The md5 is for first table which seems
> to have just one file.  I am seeing multiple files in the second
> table.  I will put together a command to md5sum the second table and
> send that shortly.
>
> $ hadoop fs -lsr /user/kturner/accumulo-mac1-5520/tables/1
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet
> -rw-r--r--   3 kturner supergroup   32617935 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000a
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000b
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000c
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000d
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000e
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000f
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000g
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000h
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000i
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000j
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000k
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000l
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000m
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000n
> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
> /user/kturner/accumulo-mac1-5520/tables/1/t-000000o
>
> Keith
>
> On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kb...@maprtech.com> wrote:
>> Keith,
>>
>> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>>
>> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>>
>>
>> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
>> 3.89547MB (14570456),Memory=0.0MB (0)
>> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
>> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
>> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
>> RUNCATED<
>> java.io.EOFException
>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)
>>
>>
>> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>>
>>
>> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>>
>> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
>> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
>>  secs, nbTimes = [23 23 23.00 1]
>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
>> P"RsUOI
>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
>> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
>> SE-test-04-32318/tables/2/t-0000000/F000000q.rf
>> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
>> ;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>> java.io.EOFException
>>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>>       …..
>>
>> So it looks like we are missing quite a bit of data.
>>
>> Any help or ideas appreciated.
>>
>> Thanks,
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com
>>
>>
>>
>> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>>
>>> Keys,
>>>
>>> Looking at the test, it writes out random rows that 128k in len.  The
>>> column family and column qualifier it writes out are 0 bytes long.
>>> When the non compression test failed, it was trying to read a column
>>> qualifier.  If we assume that it was reading a column qualifier from
>>> the test table then it should be calling readFully() with a zero
>>> length array.
>>>
>>> Trying to think how to debug this.  One way may be to change the code
>>> in RelativeKey to the following and run the test.  This will show us
>>> what its trying to do right before it hits the eof, but it will also
>>> generate a lot of noise as things scan the metadata table.
>>>
>>>  private byte[] read(DataInput in) throws IOException {
>>>    int len = WritableUtils.readVInt(in);
>>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>>    byte[] data = new byte[len];
>>>    in.readFully(data);
>>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>>> new String(data).substring(0, Math.min(data.length, 60)));
>>>    return data;
>>>  }
>>>
>>> Keith
>>>
>>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>>
>>>> When I run it, this is the output I see:
>>>> ./run.py -t largerowtest -d -v10
>>>> ….
>>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>>> DEBUG:test.auto:{
>>>> 'tserver.compaction.major.delay':'1',
>>>> }
>>>>
>>>> DEBUG:test.auto:
>>>> INFO:test.auto:killing accumulo processes everywhere
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>>> DEBUG:test.auto:Exit code: 255
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>>> Instance name : SE-test-04-22187
>>>> Enter initial password for root: ******
>>>> Confirm initial password for root: ******
>>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>>> DEBUG:test.auto:Exit code: 0
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>>> DEBUG:test.auto:Exit code: 0
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>>> DEBUG:test.auto:
>>>> DEBUG:test.auto:
>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>> DEBUG:test.auto:err:
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>        ... 6 more
>>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>        ... 11 more
>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>> DEBUG:test.auto:err:
>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>        at $Proxy1.startScan(Unknown Source)
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>        ... 13 more
>>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>> java.lang.reflect.InvocationTargetException
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>        ... 6 more
>>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>        ... 11 more
>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>        at $Proxy1.startScan(Unknown Source)
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>        ... 13 more
>>>>
>>>> FAIL
>>>> ======================================================================
>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>> ----------------------------------------------------------------------
>>>> Traceback (most recent call last):
>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>> AssertionError: False is not true
>>>>
>>>>
>>>> ======================================================================
>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>> ----------------------------------------------------------------------
>>>> Traceback (most recent call last):
>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>> AssertionError: False is not true
>>>>
>>>> ----------------------------------------------------------------------
>>>> Ran 1 test in 43.014s
>>>>
>>>> FAILED (failures=1)
>>>>
>>>>
>>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>>>>
>>>>
>>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>>> java.io.IOException: invalid distance too far back
>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>        at $Proxy0.startScan(Unknown Source)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>        ... 15 more
>>>> Caused by: java.io.IOException: invalid distance too far back
>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        ... 1 more
>>>>
>>>>
>>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>>>>
>>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>>
>>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>>> java.io.EOFException
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>        at $Proxy0.startScan(Unknown Source)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>        ... 15 more
>>>> Caused by: java.io.EOFException
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>        at $Proxy0.startScan(Unknown Source)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>        at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>        ... 15 more
>>>> Caused by: java.io.EOFException
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>        ... 1 more
>>>>
>>>>
>>>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>>> 1) the file was originally written incorrectly by the writer,
>>>> 2) the reader is reading too far
>>>>
>>>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>>>>
>>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>>
>>>> Thanks,
>>>> Keys
>>>> ________________________________
>>>> Keys Botzum
>>>> Senior Principal Technologist
>>>> WW Systems Engineering
>>>> kbotzum@maprtech.com
>>>> 443-718-0098
>>>> MapR Technologies
>>>> http://www.mapr.com
>>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Keys

This test uses a random number generator w/ a seed, so the test should
always generate the same data.  I ran the test twice in dirty mode and
then generated an md5 hash of the data.  Both times the hash was the
same.  Can you try to do this and see if you get the same hash.

$ ./run.py -d -t largerow
$ ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
-d /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
| grep 'false ->' | cut -c 1-64 | md5
addcfa8442914899a998d38bbf917d67

I did the grep to only get key values, the print info command prints
some summary info.  I did the cut inorder to just get row data,
including the timestamp would make the md5sum always change. I ran
this on a mac, I think on linux you will need to run md5sum.

The test creates two tables.  The md5 is for first table which seems
to have just one file.  I am seeing multiple files in the second
table.  I will put together a command to md5sum the second table and
send that shortly.

$ hadoop fs -lsr /user/kturner/accumulo-mac1-5520/tables/1
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/default_tablet
-rw-r--r--   3 kturner supergroup   32617935 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000a
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000b
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000c
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000d
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000e
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000f
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000g
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000h
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000i
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000j
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000k
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000l
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
/user/kturner/accumulo-mac1-5520/tables/1/t-000000m
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
/user/kturner/accumulo-mac1-5520/tables/1/t-000000n
drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
/user/kturner/accumulo-mac1-5520/tables/1/t-000000o

Keith

On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kb...@maprtech.com> wrote:
> Keith,
>
> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>
> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>
>
> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
> 3.89547MB (14570456),Memory=0.0MB (0)
> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
> RUNCATED<
> java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
>
>
> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>
>
> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>
> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
>  secs, nbTimes = [23 23 23.00 1]
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
> P"RsUOI
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
> SE-test-04-32318/tables/2/t-0000000/F000000q.rf
> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
> ;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
> java.io.EOFException
>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>       …..
>
> So it looks like we are missing quite a bit of data.
>
> Any help or ideas appreciated.
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com
>
>
>
> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>
>> Keys,
>>
>> Looking at the test, it writes out random rows that 128k in len.  The
>> column family and column qualifier it writes out are 0 bytes long.
>> When the non compression test failed, it was trying to read a column
>> qualifier.  If we assume that it was reading a column qualifier from
>> the test table then it should be calling readFully() with a zero
>> length array.
>>
>> Trying to think how to debug this.  One way may be to change the code
>> in RelativeKey to the following and run the test.  This will show us
>> what its trying to do right before it hits the eof, but it will also
>> generate a lot of noise as things scan the metadata table.
>>
>>  private byte[] read(DataInput in) throws IOException {
>>    int len = WritableUtils.readVInt(in);
>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>    byte[] data = new byte[len];
>>    in.readFully(data);
>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>> new String(data).substring(0, Math.min(data.length, 60)));
>>    return data;
>>  }
>>
>> Keith
>>
>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>
>>> When I run it, this is the output I see:
>>> ./run.py -t largerowtest -d -v10
>>> ….
>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>> DEBUG:test.auto:{
>>> 'tserver.compaction.major.delay':'1',
>>> }
>>>
>>> DEBUG:test.auto:
>>> INFO:test.auto:killing accumulo processes everywhere
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>> DEBUG:test.auto:Exit code: 255
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>> Instance name : SE-test-04-22187
>>> Enter initial password for root: ******
>>> Confirm initial password for root: ******
>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>> DEBUG:test.auto:Exit code: 0
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>> DEBUG:test.auto:Exit code: 0
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>> DEBUG:test.auto:
>>> DEBUG:test.auto:
>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>> DEBUG:test.auto:err:
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>        ... 6 more
>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>        ... 11 more
>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>> DEBUG:test.auto:err:
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>        at $Proxy1.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>        ... 13 more
>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>> java.lang.reflect.InvocationTargetException
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>        ... 6 more
>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>        ... 11 more
>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>        at $Proxy1.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>        ... 13 more
>>>
>>> FAIL
>>> ======================================================================
>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>> ----------------------------------------------------------------------
>>> Traceback (most recent call last):
>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>    self.waitForStop(handle, self.maxRuntime)
>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>> AssertionError: False is not true
>>>
>>>
>>> ======================================================================
>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>> ----------------------------------------------------------------------
>>> Traceback (most recent call last):
>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>    self.waitForStop(handle, self.maxRuntime)
>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>> AssertionError: False is not true
>>>
>>> ----------------------------------------------------------------------
>>> Ran 1 test in 43.014s
>>>
>>> FAILED (failures=1)
>>>
>>>
>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>>>
>>>
>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>> java.io.IOException: invalid distance too far back
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.IOException: invalid distance too far back
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        ... 1 more
>>>
>>>
>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>>>
>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>
>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>        at $Proxy0.startScan(Unknown Source)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>        ... 15 more
>>> Caused by: java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        ... 1 more
>>>
>>>
>>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>> 1) the file was originally written incorrectly by the writer,
>>> 2) the reader is reading too far
>>>
>>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>>>
>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>
>>> Thanks,
>>> Keys
>>> ________________________________
>>> Keys Botzum
>>> Senior Principal Technologist
>>> WW Systems Engineering
>>> kbotzum@maprtech.com
>>> 443-718-0098
>>> MapR Technologies
>>> http://www.mapr.com
>

Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keys Botzum <kb...@maprtech.com>.
Keith,

Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).

Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.


10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=1
3.89547MB (14570456),Memory=0.0MB (0)
10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : 
10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : 
10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : 
10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1] 
10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... T
RUNCATED<
java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:


And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.

11 06:42:32,254 [rfile.RelativeKey] DEBUG: data : 
11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02
 secs, nbTimes = [23 23 23.00 1] 
11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.t
P"RsUOI
11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-
SE-test-04-32318/tables/2/t-0000000/F000000q.rf
11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\
;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
       …..

So it looks like we are missing quite a bit of data.

Any help or ideas appreciated.

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:

> Keys,
> 
> Looking at the test, it writes out random rows that 128k in len.  The
> column family and column qualifier it writes out are 0 bytes long.
> When the non compression test failed, it was trying to read a column
> qualifier.  If we assume that it was reading a column qualifier from
> the test table then it should be calling readFully() with a zero
> length array.
> 
> Trying to think how to debug this.  One way may be to change the code
> in RelativeKey to the following and run the test.  This will show us
> what its trying to do right before it hits the eof, but it will also
> generate a lot of noise as things scan the metadata table.
> 
>  private byte[] read(DataInput in) throws IOException {
>    int len = WritableUtils.readVInt(in);
>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>    byte[] data = new byte[len];
>    in.readFully(data);
>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
> new String(data).substring(0, Math.min(data.length, 60)));
>    return data;
>  }
> 
> Keith
> 
> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>> 
>> When I run it, this is the output I see:
>> ./run.py -t largerowtest -d -v10
>> ….
>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>> DEBUG:test.auto:{
>> 'tserver.compaction.major.delay':'1',
>> }
>> 
>> DEBUG:test.auto:
>> INFO:test.auto:killing accumulo processes everywhere
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>> DEBUG:test.auto:Exit code: 255
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>> Instance name : SE-test-04-22187
>> Enter initial password for root: ******
>> Confirm initial password for root: ******
>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>> DEBUG:test.auto:Exit code: 0
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>> DEBUG:test.auto:Exit code: 0
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>> DEBUG:test.auto:
>> DEBUG:test.auto:
>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>> DEBUG:test.auto:err:
>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>        ... 6 more
>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>        ... 11 more
>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>> DEBUG:test.auto:err:
>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>        at $Proxy1.startScan(Unknown Source)
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>        ... 13 more
>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>> java.lang.reflect.InvocationTargetException
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>        ... 6 more
>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>        ... 11 more
>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>        at $Proxy1.startScan(Unknown Source)
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>        ... 13 more
>> 
>> FAIL
>> ======================================================================
>> FAIL: runTest (simple.largeRow.LargeRowTest)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>    self.waitForStop(handle, self.maxRuntime)
>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>    self.assert_(self.processResult(out, err, handle.returncode))
>> AssertionError: False is not true
>> 
>> 
>> ======================================================================
>> FAIL: runTest (simple.largeRow.LargeRowTest)
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>    self.waitForStop(handle, self.maxRuntime)
>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>    self.assert_(self.processResult(out, err, handle.returncode))
>> AssertionError: False is not true
>> 
>> ----------------------------------------------------------------------
>> Ran 1 test in 43.014s
>> 
>> FAILED (failures=1)
>> 
>> 
>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>> 
>> 
>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>> java.io.IOException: invalid distance too far back
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)
>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>        at $Proxy0.startScan(Unknown Source)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>        ... 15 more
>> Caused by: java.io.IOException: invalid distance too far back
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        ... 1 more
>> 
>> 
>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>> 
>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>> 
>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>> java.io.EOFException
>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)
>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>        at $Proxy0.startScan(Unknown Source)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>        ... 15 more
>> Caused by: java.io.EOFException
>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)
>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>        at $Proxy0.startScan(Unknown Source)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>        at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>        ... 15 more
>> Caused by: java.io.EOFException
>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        ... 1 more
>> 
>> 
>> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>> 1) the file was originally written incorrectly by the writer,
>> 2) the reader is reading too far
>> 
>> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>> 
>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>> 
>> Thanks,
>> Keys
>> ________________________________
>> Keys Botzum
>> Senior Principal Technologist
>> WW Systems Engineering
>> kbotzum@maprtech.com
>> 443-718-0098
>> MapR Technologies
>> http://www.mapr.com


Re: Accumulo on MapR Continued - LargeRowTest

Posted by Keith Turner <ke...@deenlo.com>.
Keys,

Looking at the test, it writes out random rows that 128k in len.  The
column family and column qualifier it writes out are 0 bytes long.
When the non compression test failed, it was trying to read a column
qualifier.  If we assume that it was reading a column qualifier from
the test table then it should be calling readFully() with a zero
length array.

Trying to think how to debug this.  One way may be to change the code
in RelativeKey to the following and run the test.  This will show us
what its trying to do right before it hits the eof, but it will also
generate a lot of noise as things scan the metadata table.

  private byte[] read(DataInput in) throws IOException {
    int len = WritableUtils.readVInt(in);
    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
    byte[] data = new byte[len];
    in.readFully(data);
    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
new String(data).substring(0, Math.min(data.length, 60)));
    return data;
  }

Keith

On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kb...@maprtech.com> wrote:
> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>
> When I run it, this is the output I see:
> ./run.py -t largerowtest -d -v10
> ….
> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
> DEBUG:test.auto:{
> 'tserver.compaction.major.delay':'1',
> }
>
> DEBUG:test.auto:
> INFO:test.auto:killing accumulo processes everywhere
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
> DEBUG:test.auto:Exit code: 255
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
> Instance name : SE-test-04-22187
> Enter initial password for root: ******
> Confirm initial password for root: ******
> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
> DEBUG:test.auto:Exit code: 0
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
> DEBUG:test.auto:Exit code: 0
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
> DEBUG:test.auto:
> DEBUG:test.auto:
> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
> DEBUG:test.auto:err:
>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>        ... 6 more
> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>        ... 11 more
> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
> DEBUG:test.auto:err:
>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>        at $Proxy1.startScan(Unknown Source)
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>        ... 13 more
> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
> java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>        ... 6 more
> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>        ... 11 more
> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>        at $Proxy1.startScan(Unknown Source)
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>        ... 13 more
>
> FAIL
> ======================================================================
> FAIL: runTest (simple.largeRow.LargeRowTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>    self.waitForStop(handle, self.maxRuntime)
>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>    self.assert_(self.processResult(out, err, handle.returncode))
> AssertionError: False is not true
>
>
> ======================================================================
> FAIL: runTest (simple.largeRow.LargeRowTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>    self.waitForStop(handle, self.maxRuntime)
>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>    self.assert_(self.processResult(out, err, handle.returncode))
> AssertionError: False is not true
>
> ----------------------------------------------------------------------
> Ran 1 test in 43.014s
>
> FAILED (failures=1)
>
>
> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
> Note that the timestamps here do not match the previous timestamps. This is just because I forgot to capture the data from the run that corresponds exactly to this run.
>
>
> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
> java.io.IOException: invalid distance too far back
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>        at $Proxy0.startScan(Unknown Source)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>        ... 15 more
> Caused by: java.io.IOException: invalid distance too far back
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        ... 1 more
>
>
> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file but that doesn't work for the tests as they generate they own site files automatically. I hand edited TestUtil.py to generate a site file with that property set.
>
> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>
> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
> java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>        at $Proxy0.startScan(Unknown Source)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>        ... 15 more
> Caused by: java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>        at $Proxy0.startScan(Unknown Source)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>        at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>        ... 15 more
> Caused by: java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        ... 1 more
>
>
> So the error would seem to be related to reading past the end of the file. What I can't determine is what is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
> 1) the file was originally written incorrectly by the writer,
> 2) the reader is reading too far
>
> This could be caused by a issue in Accumulo or in MapR. It might be that MapR more strongly enforces end of file reads than stock Hadoop.
>
> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>
> Thanks,
> Keys
> ________________________________
> Keys Botzum
> Senior Principal Technologist
> WW Systems Engineering
> kbotzum@maprtech.com
> 443-718-0098
> MapR Technologies
> http://www.mapr.com