Posted to user@kylin.apache.org by "jianhui.yi" <ji...@zhiyoubao.com> on 2017/05/27 05:37:48 UTC

Re: Re: table_snapshot file does not exist

Thanks, I fixed it.

 

From: Li Yang [mailto:liyang@apache.org]
Sent: May 27, 2017 10:29
To: user@kylin.apache.org
Subject: Re: Re: table_snapshot file does not exist

 

It seems your Kylin metadata is somewhat corrupted: the metadata contains a snapshot entry for table DIM_PRODUCT, but the corresponding physical file does not exist on HDFS.

You can fix the metadata manually, or, if rebuilding the data is easy, delete all the metadata and start over.
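For the manual route, a minimal sketch, assuming the metastore.sh tool shipped under $KYLIN_HOME/bin in Kylin 1.x/2.x (subcommand names and resource-path layout should be verified against your release; the path and UUID below come from the exception in this thread):

    # Confirm the physical snapshot file really is gone on HDFS
    hadoop fs -ls /kylin/kylin_metadata/resources/table_snapshot/DW.DIM_PRODUCT/

    # Always back up the metadata store before editing it
    $KYLIN_HOME/bin/metastore.sh backup

    # Inspect the snapshot entries Kylin still holds, then drop the stale one
    $KYLIN_HOME/bin/metastore.sh list /table_snapshot/DW.DIM_PRODUCT
    $KYLIN_HOME/bin/metastore.sh remove /table_snapshot/DW.DIM_PRODUCT/1394db19-c200-46f8-833c-d28878629246.snapshot

The next cube build should then rebuild the snapshot from the Hive table.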

 

On Fri, May 19, 2017 at 11:03 AM, jianhui.yi <jianhui.yi@zhiyoubao.com> wrote:

It is a build error.

 

From: Billy Liu [mailto:billyliu@apache.org]
Sent: May 19, 2017 11:00
To: user <user@kylin.apache.org>
Subject: Re: table_snapshot file does not exist

 

Is it a build error or a query error? You mentioned two scenarios, but only one exception.

 

2017-05-18 14:25 GMT+08:00 jianhui.yi <jianhui.yi@zhiyoubao.com>:

Hi all,

When my cube build reaches step 4 (Build Dimension Dictionary), the following error occurs. How can I solve it?

The error appears whenever I use dimensions from this table.

 

java.io.FileNotFoundException: File does not exist: /kylin/kylin_metadata/resources/table_snapshot/DW.DIM_PRODUCT/1394db19-c200-46f8-833c-d28878629246.snapshot
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2007)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1977)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1890)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1281)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1266)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1254)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:305)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:271)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1585)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:309)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:305)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:305)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:779)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.getInputStream(HBaseResourceStore.java:207)
    at org.apache.kylin.storage.hbase.HBaseResourceStore.getResourceImpl(HBaseResourceStore.java:227)
    at org.apache.kylin.common.persistence.ResourceStore.getResource(ResourceStore.java:148)
    at org.apache.kylin.dict.lookup.SnapshotManager.load(SnapshotManager.java:217)
    at org.apache.kylin.dict.lookup.SnapshotManager.checkDupByInfo(SnapshotManager.java:182)
    at org.apache.kylin.dict.lookup.SnapshotManager.buildSnapshot(SnapshotManager.java:128)
    at org.apache.kylin.cube.CubeManager.buildSnapshotTable(CubeManager.java:285)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:92)
    at org.apache.kylin.cube.cli.DictionaryGeneratorCLI.processSegment(DictionaryGeneratorCLI.java:54)
    at org.apache.kylin.engine.mr.steps.CreateDictionaryJob.run(CreateDictionaryJob.java:66)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /kylin/kylin_metadata/resources/table_snapshot/DW.DIM_PRODUCT/1394db19-c200-46f8-833c-d28878629246.snapshot
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2007)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1977)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1890)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)

    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1409)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy30.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
    at sun.reflect.GeneratedMethodAccessor174.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy31.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1279)
    ... 31 more

result code: 2


Re: Re: Re: Re: table_snapshot file does not exist

Posted by Li Yang <li...@apache.org>.
Sounds good.  :-)

On Sat, May 27, 2017 at 3:03 PM, jianhui.yi <ji...@zhiyoubao.com>
wrote:

> Aha, stupid way:
>
> 1. Back up metadata
> 2. Drop all cubes and models
> 3. Unload that table
> 4. Load that table
> 5. Restore metadata
>
> :-)

Re: Re: Re: table_snapshot file does not exist

Posted by "jianhui.yi" <ji...@zhiyoubao.com>.
Aha, stupid way:

1. Back up metadata

2. Drop all cubes and models

3. Unload that table

4. Load that table

5. Restore metadata

 

:-)
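Steps 1 and 5 map onto the bundled metadata tool; a rough sketch, assuming the metastore.sh of Kylin 1.x/2.x (steps 2-4 are done in the Kylin web UI or via the REST API, and the backup directory below is only an example of the kind of path the tool prints):

    # Step 1: dump the current metadata to a local backup directory
    $KYLIN_HOME/bin/metastore.sh backup

    # Steps 2-4: drop cubes/models, unload and reload the table in the web UI

    # Step 5: restore the dump taken above, using the path printed by the backup command
    $KYLIN_HOME/bin/metastore.sh restore $KYLIN_HOME/meta_backups/meta_2017_05_27_15_00_00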

 


Re: Re: Re: table_snapshot file does not exist

Posted by Li Yang <li...@apache.org>.
What has been done to fix this issue? Curious to know.

On Sat, May 27, 2017 at 1:37 PM, jianhui.yi <ji...@zhiyoubao.com>
wrote:

> Thanks, I fixed it.