Posted to mapreduce-user@hadoop.apache.org by Lemon Cheng <le...@gmail.com> on 2011/06/17 14:11:36 UTC

Query about "hadoop dfs -cat" in hadoop-0.20.2

Hi,

I am using hadoop-0.20.2. After calling ./start-all.sh, I can run
"hadoop dfs -ls".
However, when I run "hadoop dfs -cat /usr/lemon/wordcount/input/file01",
I get the error shown below.
I have searched the web for this problem, but I can't find a solution.
Can anyone give a suggestion?
Many thanks.



11/06/17 19:27:12 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:12 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
11/06/17 19:27:15 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:15 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
11/06/17 19:27:18 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:18 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
11/06/17 19:27:21 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
        at java.io.DataInputStream.read(DataInputStream.java:83)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
        at org.apache.hadoop.fs.FsShell.printToStdout(FsShell.java:114)
        at org.apache.hadoop.fs.FsShell.access$100(FsShell.java:49)
        at org.apache.hadoop.fs.FsShell$1.process(FsShell.java:352)
        at org.apache.hadoop.fs.FsShell$DelayedExceptionThrowing.globAndProcess(FsShell.java:1898)
        at org.apache.hadoop.fs.FsShell.cat(FsShell.java:346)
        at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1543)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1761)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)


Regards,
Lemon
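
A useful first check for "Could not obtain block ... No live nodes contain
current block" is HDFS fsck, which asks the NameNode which blocks make up a
file and where their replicas are supposed to live. A minimal sketch, using
the path from the error above (fsck ships with 0.20.2's shell):

    bin/hadoop fsck /usr/lemon/wordcount/input/file01 -files -blocks -locations

If fsck marks the file's blocks MISSING with no replica locations, the
NameNode metadata is intact but no DataNode holds the actual block data.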

Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Lemon Cheng <le...@gmail.com>.
The slaves.sh command prints nothing. Am I missing something?
Background: when I first installed Hadoop last month, I followed the
instructions for the MapReduce wordcount example, and it worked.
This is the second time I am using it: the computer was restarted, I ran
bin/start-all.sh, and now the commands above fail.

[appuser@localhost hadoop-0.20.2]$ ./bin/slaves.sh jps | grep Datanode |sort
appuser@localhost's password:
[appuser@localhost hadoop-0.20.2]$ ./bin/hadoop dfsadmin -report
Safe mode is ON
Configured Capacity: 470117756928 (437.83 GB)
Present Capacity: 98024734720 (91.29 GB)
DFS Remaining: 98024710144 (91.29 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 470117756928 (437.83 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 372093022208 (346.54 GB)
DFS Remaining: 98024710144 (91.29 GB)
DFS Used%: 0%
DFS Remaining%: 20.85%
Last contact: Fri Jun 17 23:50:27 HKT 2011

-------------------------------------
NameNode 'localhost.localdomain:9000'

NameNode Storage:
Storage Directory             Type             State
/tmp/hadoop-appuser/dfs/name  IMAGE_AND_EDITS  Active
--------------------------------------



Regards,
Lemon
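
Two details in the report above stand out. "Safe mode is ON" together with
"DFS Used: 24576 (24 KB)" means the NameNode is still waiting for blocks that
no DataNode has reported, and the NameNode storage directory sits under /tmp,
which many Linux distributions clean on reboot. A hedged sketch of the usual
fix is to point dfs.name.dir and dfs.data.dir at persistent directories in
conf/hdfs-site.xml (the paths below are illustrative, not from the thread;
note that changing dfs.name.dir on 0.20.2 normally means re-formatting the
NameNode, which erases everything already in HDFS):

    <property>
      <name>dfs.name.dir</name>
      <value>/home/appuser/hdfs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/home/appuser/hdfs/data</value>
    </property>

As for the empty slaves.sh output: jps reports the process as "DataNode"
(capital N), so a case-sensitive grep for "Datanode" finds nothing. A running
DataNode should show up with:

    ./bin/slaves.sh jps | grep DataNode | sort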

On Sat, Jun 18, 2011 at 12:09 AM, Marcos Ortiz <ml...@uci.cu> wrote:

> What are your dfs.tmp.dir and dfs.data.dir values?
>
> You can check the DataNodes' health with bin/slaves.sh jps | grep Datanode | sort
>
> What is the output of bin/hadoop dfsadmin -report?
>
> [rest of quoted message trimmed]
>

Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Marcos Ortiz <ml...@uci.cu>.
On 06/17/2011 09:51 AM, Lemon Cheng wrote:
> Hi,
>
> Thanks for your reply.
> I am not sure that. How can I prove that?
What are your dfs.tmp.dir and dfs.data.dir values?

You can check the DataNodes' health with bin/slaves.sh jps | grep Datanode | sort

What is the output of bin/hadoop dfsadmin -report?

One recommendation I can give you is to have at least one NameNode
and two DataNodes.

regards
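
To make the dfs.tmp.dir / dfs.data.dir question concrete: in 0.20.2 these
properties are read from conf/hdfs-site.xml, and anything not set there falls
back to the built-in defaults, where dfs.data.dir is ${hadoop.tmp.dir}/dfs/data
and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}. A quick, hedged way
to check, assuming the stock conf/ layout:

    grep -A 1 'dfs.data.dir' conf/hdfs-site.xml
    grep -A 1 'hadoop.tmp.dir' conf/core-site.xml

If neither grep prints a <value>, the cluster is running on the /tmp
defaults, which matches the /tmp/hadoop-appuser paths shown elsewhere in
this thread.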
>
> I checked localhost:50070; it shows 1 live node and 0 dead nodes.
> [datanode startup log and earlier quoted messages trimmed]
>


-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://twitter.com/marcosluis2186



Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Lemon Cheng <le...@gmail.com>.
[appuser@localhost bin]$ ./hadoop dfs -ls
Found 2 items
drwxr-xr-x   - appuser supergroup          0 2011-04-26 09:41 /user/appuser/input
drwxr-xr-x   - appuser supergroup          0 2011-04-26 09:42 /user/appuser/output
[appuser@localhost bin]$ ./hadoop dfs -ls /usr/lemon/wordcount/input
Found 2 items
-rw-r--r--   1 appuser supergroup         22 2011-04-26 12:16 /usr/lemon/wordcount/input/file01
-rw-r--r--   1 appuser supergroup         28 2011-04-26 12:16 /usr/lemon/wordcount/input/file02
[appuser@localhost bin]$
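
This pattern, listing succeeds while cat fails, fits the error exactly:
"hadoop dfs -ls" is answered entirely from NameNode metadata, while
"hadoop dfs -cat" must stream block data from a DataNode. A hedged way to
confirm the block files themselves are gone is to look inside the DataNode's
storage directory, whose path the DataNode prints at startup
(FSDataset{dirpath='/tmp/hadoop-appuser/dfs/data/current'} in the log quoted
in this thread):

    ls /tmp/hadoop-appuser/dfs/data/current/ | grep blk_

No blk_* files means the DataNode has nothing to serve, even though the
NameNode still remembers both files.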



On Fri, Jun 17, 2011 at 10:50 PM, Mostafa Gaber <mo...@gmail.com> wrote:

> Can you send us the output of "hadoop dfs -ls"?
>
> [earlier quoted messages trimmed]

Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Mostafa Gaber <mo...@gmail.com>.
Can you send us the output of "hadoop dfs -ls"?



On Jun 17, 2011, at 10:21, Lemon Cheng <le...@gmail.com> wrote:

> [quoted message trimmed]

Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Lemon Cheng <le...@gmail.com>.
Hi,

Thanks for your reply.
I am not sure about that. How can I verify it?

I checked localhost:50070; it shows 1 live node and 0 dead nodes.
And the log "hadoop-appuser-datanode-localhost.localdomain.log" shows:

************************************************************/
2011-06-17 19:59:38,658 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-06-17 19:59:46,738 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2011-06-17 19:59:46,749 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2011-06-17 19:59:46,752 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2011-06-17 19:59:46,812 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-06-17 19:59:46,870 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2011-06-17 19:59:46,875 INFO org.mortbay.log: jetty-6.1.14
2011-06-17 20:01:45,702 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2011-06-17 20:01:45,709 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2011-06-17 20:01:45,743 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2011-06-17 20:01:45,751 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(localhost.localdomain:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)
2011-06-17 20:01:45,751 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2011-06-17 20:01:45,753 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2011-06-17 20:01:45,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-appuser/dfs/data/current'}
2011-06-17 20:01:45,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2011-06-17 20:01:45,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 11 msecs
2011-06-17 20:01:45,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2011-06-17 20:56:02,945 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
2011-06-17 21:56:02,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
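
The telling line here is "BlockReport of 0 blocks": the DataNode started
cleanly but reported no block replicas at all. The NameNode stays in safe
mode until a configured fraction of the blocks it knows about has been
reported (dfs.safemode.threshold.pct, 0.999 by default), and with the
wordcount blocks gone that threshold is never reached. A hedged sketch for
inspecting safe mode and, if you accept that those files are lost, leaving
it manually:

    bin/hadoop dfsadmin -safemode get
    bin/hadoop dfsadmin -safemode leave

Leaving safe mode does not bring the blocks back; it only lets you delete
the broken entries and re-upload the input files.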


On Fri, Jun 17, 2011 at 9:42 PM, Marcos Ortiz <ml...@uci.cu> wrote:

> [quoted original message trimmed]
>
> Are you sure that all your DataNodes are online?

Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

Posted by Marcos Ortiz <ml...@uci.cu>.
On 06/17/2011 07:41 AM, Lemon Cheng wrote:
> [quoted original message trimmed]
Are you sure that all your DataNodes are online?
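
Two quick ways to check that in a setup like this one: the NameNode web UI at
http://localhost:50070 shows live and dead node counts, and the shell gives
the same information plus per-node capacity:

    bin/hadoop dfsadmin -report

Keep in mind that a node can be "live" (its daemon is up and heartbeating)
while holding zero blocks, which is exactly what the dfsadmin -report output
elsewhere in this thread shows.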


-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://twitter.com/marcosluis2186