Posted to user@hadoop.apache.org by Peng Yu <pe...@gmail.com> on 2013/06/26 16:53:23 UTC
Can not follow Single Node Setup example.
Hi,
http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
I followed the above instructions. But I get the following errors.
Does anybody know what is wrong? Thanks.
~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
Warning: $HADOOP_HOME is deprecated.
13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
13/06/26 09:49:14 ERROR security.UserGroupInformation:
PriviledgedActionException as:py cause:java.io.IOException: Not a
file: hdfs://localhost:9000/user/py/input/conf
java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
at org.apache.hadoop.examples.Grep.run(Grep.java:69)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.Grep.main(Grep.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
--
Regards,
Peng
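For readers hitting the same "Not a file: .../input/conf" error, the usual fix is a sketch like the following (assuming a hadoop-1.x install dir as the working directory, as in the transcript above): the grep example expects input/ to contain plain files, so a nested input/conf directory makes getSplits fail.

```shell
# Sketch of the usual fix for "Not a file: .../input/conf".
# Assumes a hadoop-1.x layout with the install dir as $PWD.
if [ -x bin/hadoop ]; then
  bin/hadoop fs -rmr input/conf          # drop the nested directory
  bin/hadoop fs -put conf/*.xml input    # upload the files themselves
  bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
  status=ok
else
  # Degrade gracefully where hadoop is not installed.
  echo "bin/hadoop not found; commands shown for illustration only"
  status=skipped
fi
```

This mirrors the `cp conf/*.xml input` step from the Single Node Setup page, done on HDFS instead of the local FS.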
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Use this:
bin/hadoop fs -ls /
You can replace ls with whatever command you like (cat, put, etc.).
That link shows all the HDFS shell commands. I'm sorry, my blog was about
setup, not shell commands.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Fri, Jun 28, 2013 at 1:28 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi Tariq,
>
> > Once you are comfortable with the configuration you could proceed with
> some
> > shell exercises and then move on to MapReduce. See this link for shell
> > commands : http://hadoop.apache.org/docs/stable/file_system_shell.html
>
> http://hadoop.apache.org/docs/stable/file_system_shell.html#ls
>
> I want to start with the simplest command ls. But I don't find where
> the command hdfs is. Would you please show me where it is using the
> configurations in your blog that I followed? Thanks.
>
> --
> Regards,
> Peng
>
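To make the above concrete, here is a short sketch of common HDFS shell invocations (paths are illustrative, and a hadoop-1.x install dir is assumed as the working directory):

```shell
# A few common HDFS shell invocations; /demo is an illustrative path.
if [ -x bin/hadoop ]; then
  bin/hadoop fs -ls /                             # list the HDFS root
  bin/hadoop fs -mkdir /demo                      # create a directory
  bin/hadoop fs -put conf/core-site.xml /demo/    # upload a local file
  bin/hadoop fs -cat /demo/core-site.xml          # print it back
  status=ok
else
  # Degrade gracefully where hadoop is not installed.
  echo "bin/hadoop not found; commands shown for illustration only"
  status=skipped
fi
```

The general form is always `bin/hadoop fs -<subcommand> <args>`; the File System Shell page linked below in the thread lists every subcommand.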
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi Tariq,
> Once you are comfortable with the configuration you could proceed with some
> shell exercises and then move on to MapReduce. See this link for shell
> commands : http://hadoop.apache.org/docs/stable/file_system_shell.html
http://hadoop.apache.org/docs/stable/file_system_shell.html#ls
I want to start with the simplest command, ls, but I can't find where
the hdfs command is. Would you please show me where it is, given the
configuration from your blog that I followed? Thanks.
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Cool. Happy for you.
It's advisable to use bin/start-dfs.sh and bin/start-mapred.sh, as
bin/start-all.sh has been deprecated.
Once you are comfortable with the configuration you could proceed with some
shell exercises and then move on to MapReduce. See this link for shell
commands : http://hadoop.apache.org/docs/stable/file_system_shell.html
Warm Regards,
Tariq
cloudfront.blogspot.com
On Fri, Jun 28, 2013 at 12:04 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi Tariq,
>
> >
> http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s
>
> Thanks. I have followed everything on the above link. And they worked.
>
> Are bin/start-dfs.sh and bin/start-mapred.sh equivalent to
> bin/start-all.sh? After I setting up hadoop, would you please point me
> some working tutorials on how to use it? Thanks.
>
> --
> Regards,
> Peng
>
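The advice above can be sketched as follows (assuming the hadoop-1.x script layout): start the HDFS and MapReduce daemons separately instead of the deprecated start-all.sh, then list running JVMs with jps to confirm the daemons came up.

```shell
# Start daemons separately (hadoop-1.x layout assumed).
if [ -x bin/start-dfs.sh ] && [ -x bin/start-mapred.sh ]; then
  bin/start-dfs.sh      # NameNode, DataNode, SecondaryNameNode
  bin/start-mapred.sh   # JobTracker, TaskTracker
  jps                   # each daemon should appear in this listing
  status=ok
else
  # Degrade gracefully where hadoop is not installed.
  echo "hadoop start scripts not found; commands shown for illustration only"
  status=skipped
fi
```

If DataNode is missing from the jps output, HDFS writes will fail with "could only be replicated to 0 nodes", as seen later in this thread.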
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi Tariq,
> http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s
Thanks. I followed everything at the above link, and it worked.
Are bin/start-dfs.sh and bin/start-mapred.sh equivalent to
bin/start-all.sh? Now that I have set up hadoop, would you please point
me to some working tutorials on how to use it? Thanks.
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Not an issue. There are two modes covered by "single node
setup": standalone (runs on your local FS) and pseudo-distributed (runs on
HDFS). You are probably working with a standalone setup. If you need some
help with the pseudo-distributed setup, you might find this link helpful:
http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s
I have tried to explain the procedure there.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:41 PM, Peng Yu <pe...@gmail.com> wrote:
> I just started learning hadoop. And I followed
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
> DataNode mentioned in this document? Do you have a list of working
> step by step instructions so that I run hadoop without anything
> previously installed? Thanks.
>
> On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> > Is your DataNode running?
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Here is what I got. Is there anything wrong?
> >>
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
> >> bad datanode[0] nodes == null
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
> >> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> >> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
> >> could only be replicated to 0 nodes, instead of 1
> >> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
> >> /input/conf/capacity-scheduler.xml
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> >> wrote:
> >> > No. This means that you are trying to copy an entire directory instead
> >> > of a
> >> > file. Do this :
> >> > bin/hadoop fs -put conf/ /input/
> >> >
> >> > Warm Regards,
> >> > Tariq
> >> > cloudfront.blogspot.com
> >> >
> >> >
> >> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> >> put: Target input/conf is a directory
> >> >>
> >> >> I get the above output. Is it the correct output? Thanks.
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > It is looking for a file within your login folder
> >> >> > /user/py/input/conf
> >> >> >
> >> >> > You are running your job form
> >> >> > hadoop/bin
> >> >> > and I think the hadoop job is looking for files in the current
> >> >> > folder.
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> Here are what I have.
> >> >> >>
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml
> logs
> >> >> >> src
> >> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> >> hadoop-minicluster-1.1.2.jar input lib
> sbin
> >> >> >> webapps
> >> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> >> hadoop-test-1.1.2.jar ivy libexec
> share
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> >> mapred-site.xml
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
> >> >> >> <sh...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Basically whether this step worked or not:
> >> >> >> >
> >> >> >> > $ cp conf/*.xml input
> >> >> >> >
> >> >> >> > Regards,
> >> >> >> > Shahab
> >> >> >> >
> >> >> >> >
> >> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> >> > <sh...@gmail.com>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Have you verified that the 'input' folder exists on the hdfs
> >> >> >> >> (single
> >> >> >> >> node
> >> >> >> >> setup) that your job needs?
> >> >> >> >>
> >> >> >> >> Regards,
> >> >> >> >> Shahab
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pengyu.ut@gmail.com
> >
> >> >> >> >> wrote:
> >> >> >> >>>
> >> >> >> >>> Hi,
> >> >> >> >>>
> >> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >> >>>
> >> >> >> >>> I followed the above instructions. But I get the following
> >> >> >> >>> errors.
> >> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >> >>>
> >> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >> >>>
> >> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >> >>> classes
> >> >> >> >>> where applicable
> >> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native
> library
> >> >> >> >>> not
> >> >> >> >>> loaded
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input
> paths
> >> >> >> >>> to
> >> >> >> >>> process : 2
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the
> staging
> >> >> >> >>> area
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException:
> Not
> >> >> >> >>> a
> >> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> java.io.IOException: Not a file:
> >> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >> >>> Method)
> >> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >> >>> at
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >> >>>
> >> >> >> >>> --
> >> >> >> >>> Regards,
> >> >> >> >>> Peng
> >> >> >> >>
> >> >> >> >>
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Regards,
> >> >> >> Peng
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
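For reference, a pseudo-distributed hadoop-1.x setup needs roughly the following properties in conf/ (a minimal sketch; the hdfs://localhost:9000 address matches the staging URL in the logs above, and localhost:9001 is the conventional JobTracker default):

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml : single node, so one replica -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

With these in place, format the NameNode once (bin/hadoop namenode -format) before starting the daemons.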
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> >> wrote:
> >> > No. This means that you are trying to copy an entire directory instead
> >> > of a
> >> > file. Do this :
> >> > bin/hadoop fs -put conf/ /input/
> >> >
> >> > Warm Regards,
> >> > Tariq
> >> > cloudfront.blogspot.com
> >> >
> >> >
> >> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> >> put: Target input/conf is a directory
> >> >>
> >> >> I get the above output. Is it the correct output? Thanks.
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > It is looking for a file within your login folder
> >> >> > /user/py/input/conf
> >> >> >
> >> >> > You are running your job from
> >> >> > hadoop/bin
> >> >> > and I think the hadoop job is looking for files in the current
> >> >> > folder.
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> Here are what I have.
> >> >> >>
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml
> logs
> >> >> >> src
> >> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> >> hadoop-minicluster-1.1.2.jar input lib
> sbin
> >> >> >> webapps
> >> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> >> hadoop-test-1.1.2.jar ivy libexec
> share
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> >> mapred-site.xml
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
> >> >> >> <sh...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Basically whether this step worked or not:
> >> >> >> >
> >> >> >> > $ cp conf/*.xml input
> >> >> >> >
> >> >> >> > Regards,
> >> >> >> > Shahab
> >> >> >> >
> >> >> >> >
> >> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> >> > <sh...@gmail.com>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Have you verified that the 'input' folder exists on the hdfs
> >> >> >> >> (single
> >> >> >> >> node
> >> >> >> >> setup) that your job needs?
> >> >> >> >>
> >> >> >> >> Regards,
> >> >> >> >> Shahab
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pengyu.ut@gmail.com
> >
> >> >> >> >> wrote:
> >> >> >> >>>
> >> >> >> >>> Hi,
> >> >> >> >>>
> >> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >> >>>
> >> >> >> >>> I followed the above instructions. But I get the following
> >> >> >> >>> errors.
> >> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >> >>>
> >> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >> >>>
> >> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >> >>> classes
> >> >> >> >>> where applicable
> >> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native
> library
> >> >> >> >>> not
> >> >> >> >>> loaded
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input
> paths
> >> >> >> >>> to
> >> >> >> >>> process : 2
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the
> staging
> >> >> >> >>> area
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException:
> Not
> >> >> >> >>> a
> >> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> java.io.IOException: Not a file:
> >> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >> >>> Method)
> >> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >> >>> at
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >> >>>
> >> >> >> >>> --
> >> >> >> >>> Regards,
> >> >> >> >>> Peng
> >> >> >> >>
> >> >> >> >>
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Regards,
> >> >> >> Peng
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
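The `put: Target input/conf is a directory` and `Not a file: .../input/conf` errors above both come from copying the `conf` directory itself instead of its files, so `input` ends up containing a subdirectory that the old `FileInputFormat` refuses to read. A minimal local-filesystem analogue of the mistake and the fix (the `demo/` paths here are hypothetical, purely for illustration):

```shell
#!/bin/sh
# Reproduce the mistake on the local FS (analogue of `hadoop fs -put conf input`):
mkdir -p demo/conf demo/input
echo '<configuration/>' > demo/conf/core-site.xml

cp -r demo/conf demo/input/      # wrong: creates demo/input/conf (a directory)
cp demo/conf/*.xml demo/input/   # right: the grep example expects plain files

ls demo/input                    # shows both entries: conf and core-site.xml
```

On HDFS the equivalent cleanup would be `bin/hadoop fs -rmr input` followed by `bin/hadoop fs -put conf/*.xml input` (assuming Hadoop 1.x FS shell syntax).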
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Not an issue. See, there are two modes when you say "single node
setup": standalone (runs on your local FS) and pseudo-distributed (runs on
HDFS). You are probably working on a standalone setup. If you need some help
with the pseudo-distributed setup, you might find this link helpful:
http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s
I have tried to explain the procedure there.
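For the `could only be replicated to 0 nodes, instead of 1` error earlier in the thread, the usual cause on a pseudo-distributed setup is a DataNode that never started. A rough way to check, sketched as a small shell helper (the daemon names assume Hadoop 1.x; `jps` ships with the JDK):

```shell
# List required Hadoop 1.x daemons that are missing from `jps` output.
required="NameNode DataNode SecondaryNameNode JobTracker TaskTracker"

missing_daemons() {
  # $1 = output of `jps`; prints each required daemon that is absent
  jps_out=$1
  for d in $required; do
    echo "$jps_out" | grep -q "$d" || echo "$d"
  done
}

# Typical use on a live node:
#   missing_daemons "$(jps)"
# If DataNode shows up as missing, restart with bin/stop-all.sh and
# bin/start-all.sh, and check logs/hadoop-*-datanode-*.log for why it died.
```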
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:41 PM, Peng Yu <pe...@gmail.com> wrote:
> I just started learning hadoop. And I followed
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
> DataNode mentioned in this document? Do you have a list of working
> step by step instructions so that I run hadoop without anything
> previously installed? Thanks.
>
> On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> > Is your DataNode running?
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Here is what I got. Is there anything wrong?
> >>
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
> >> bad datanode[0] nodes == null
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
> >> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> >> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
> >> could only be replicated to 0 nodes, instead of 1
> >> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
> >> /input/conf/capacity-scheduler.xml
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> >> wrote:
> >> > No. This means that you are trying to copy an entire directory instead
> >> > of a
> >> > file. Do this :
> >> > bin/hadoop fs -put conf/ /input/
> >> >
> >> > Warm Regards,
> >> > Tariq
> >> > cloudfront.blogspot.com
> >> >
> >> >
> >> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> >> put: Target input/conf is a directory
> >> >>
> >> >> I get the above output. Is it the correct output? Thanks.
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > It is looking for a file within your login folder
> >> >> > /user/py/input/conf
> >> >> >
> >> >> > You are running your job from
> >> >> > hadoop/bin
> >> >> > and I think the hadoop job is looking for files in the current
> >> >> > folder.
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> Here are what I have.
> >> >> >>
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml
> logs
> >> >> >> src
> >> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> >> hadoop-minicluster-1.1.2.jar input lib
> sbin
> >> >> >> webapps
> >> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> >> hadoop-test-1.1.2.jar ivy libexec
> share
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> >> mapred-site.xml
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
> >> >> >> <sh...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Basically whether this step worked or not:
> >> >> >> >
> >> >> >> > $ cp conf/*.xml input
> >> >> >> >
> >> >> >> > Regards,
> >> >> >> > Shahab
> >> >> >> >
> >> >> >> >
> >> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> >> > <sh...@gmail.com>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Have you verified that the 'input' folder exists on the hdfs
> >> >> >> >> (single
> >> >> >> >> node
> >> >> >> >> setup) that your job needs?
> >> >> >> >>
> >> >> >> >> Regards,
> >> >> >> >> Shahab
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pengyu.ut@gmail.com
> >
> >> >> >> >> wrote:
> >> >> >> >>>
> >> >> >> >>> Hi,
> >> >> >> >>>
> >> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >> >>>
> >> >> >> >>> I followed the above instructions. But I get the following
> >> >> >> >>> errors.
> >> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >> >>>
> >> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >> >>>
> >> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >> >>> classes
> >> >> >> >>> where applicable
> >> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native
> library
> >> >> >> >>> not
> >> >> >> >>> loaded
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input
> paths
> >> >> >> >>> to
> >> >> >> >>> process : 2
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the
> staging
> >> >> >> >>> area
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException:
> Not
> >> >> >> >>> a
> >> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> java.io.IOException: Not a file:
> >> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >> >>> Method)
> >> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >> >>> at
> >> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >> >>> at
> >> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >> >>> Method)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >> >>>
> >> >> >> >>> --
> >> >> >> >>> Regards,
> >> >> >> >>> Peng
> >> >> >> >>
> >> >> >> >>
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Regards,
> >> >> >> Peng
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Not an issue. See, there are 2 types of modes when you say "single node
setup" : standalone(runs on your local FS) and pseudo distributed(runs on
HDFS). You are probably working on standalone setup. If you need some help
on pseudo setup you might this link helpful :
http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UcyBE0AW38s
I have tried to explain the procedure.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:41 PM, Peng Yu <pe...@gmail.com> wrote:
> I just started learning hadoop. And I followed
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
> DataNode mentioned in this document? Do you have a list of working
> step by step instructions so that I run hadoop without anything
> previously installed? Thanks.
>
> On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> > Is your DataNode running?
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Here is what I got. Is there anything wrong?
> >>
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
> >> bad datanode[0] nodes == null
> >> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
> >> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> >> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
> >> could only be replicated to 0 nodes, instead of 1
> >> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
> >> /input/conf/capacity-scheduler.xml
> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /input/conf/capacity-scheduler.xml could only be replicated to 0
> >> nodes, instead of 1
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> >> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> >> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
> >>
> >> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> >> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> at java.lang.reflect.Method.invoke(Method.java:597)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >> at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> >> at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
> >>
> >> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> >> wrote:
> >> > No. This means that you are trying to copy an entire directory instead
> >> > of a
> >> > file. Do this :
> >> > bin/hadoop fs -put conf/ /input/
> >> >
> >> > Warm Regards,
> >> > Tariq
> >> > cloudfront.blogspot.com
> >> >
> >> >
> >> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> >> put: Target input/conf is a directory
> >> >>
> >> >> I get the above output. Is it the correct output? Thanks.
> >> >>
> >> >> > On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > It is looking for a file within your login folder
> >> >> > /user/py/input/conf
> >> >> >
> >> >> > You are running your job from
> >> >> > hadoop/bin
> >> >> > and I think the hadoop job is looking for files in the current
> >> >> > folder.
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> Here are what I have.
> >> >> >>
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> >> >> src
> >> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> >> >> webapps
> >> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> >> hadoop-test-1.1.2.jar ivy libexec share
> >> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> >> mapred-site.xml
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
> >> >> >> <sh...@gmail.com>
> >> >> >> wrote:
> >> >> >> > Basically whether this step worked or not:
> >> >> >> >
> >> >> >> > $ cp conf/*.xml input
> >> >> >> >
> >> >> >> > Regards,
> >> >> >> > Shahab
> >> >> >> >
> >> >> >> >
> >> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> >> > <sh...@gmail.com>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Have you verified that the 'input' folder exists on the hdfs
> >> >> >> >> (single node
> >> >> >> >> setup) that your job needs?
> >> >> >> >>
> >> >> >> >> Regards,
> >> >> >> >> Shahab
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pengyu.ut@gmail.com>
> >> >> >> >> wrote:
> >> >> >> >>>
> >> >> >> >>> Hi,
> >> >> >> >>>
> >> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >> >>>
> >> >> >> >>> I followed the above instructions. But I get the following
> >> >> >> >>> errors.
> >> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >> >>>
> >> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >> >>>
> >> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >> >>> classes where applicable
> >> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
> >> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
> >> >> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> >> >> >> >>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >> >>> at java.security.AccessController.doPrivileged(Native Method)
> >> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >> >>>
> >> >> >> >>> --
> >> >> >> >>> Regards,
> >> >> >> >>> Peng
> >> >> >> >>
> >> >> >> >>
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> --
> >> >> >> Regards,
> >> >> >> Peng
> >> >> >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
I just started learning Hadoop, and I followed
http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
DataNode mentioned in that document? Do you have a list of working
step-by-step instructions so that I can run Hadoop without anything
previously installed? Thanks.
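A quick way to answer the "Is your DataNode running?" question above is to look at `jps` output. A minimal sketch, assuming a JDK whose `jps` is on the PATH; the daemon names below are the Hadoop 1.x ones:

```shell
# Check `jps` output for the daemons a healthy Hadoop 1.x
# single-node setup should show.
check_daemons() {
  # $1: the output of `jps`
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    case "$1" in
      *"$d"*) echo "$d: running" ;;
      *)      echo "$d: MISSING" ;;
    esac
  done
}

# If `jps` is unavailable, every daemon is reported as MISSING.
check_daemons "$(jps 2>/dev/null || true)"
```

If DataNode shows up as missing, its log under logs/hadoop-*-datanode-*.log usually says why; after re-formatting the NameNode, a namespaceID mismatch in the DataNode's data directory is a common cause on Hadoop 1.x.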
On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Is your DataNode running?
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> Here is what I got. Is there anything wrong?
>>
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
>> bad datanode[0] nodes == null
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
>> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
>> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
>> could only be replicated to 0 nodes, instead of 1
>> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
>> /input/conf/capacity-scheduler.xml
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> > No. This means that you are trying to copy an entire directory instead
>> > of a
>> > file. Do this :
>> > bin/hadoop fs -put conf/ /input/
>> >
>> > Warm Regards,
>> > Tariq
>> > cloudfront.blogspot.com
>> >
>> >
>> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
>> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
>> >> put: Target input/conf is a directory
>> >>
>> >> I get the above output. Is it the correct output? Thanks.
>> >>
>> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
>> >> wrote:
>> >> > It is looking for a file within your login folder
>> >> > /user/py/input/conf
>> >> >
>> >> > You are running your job from
>> >> > hadoop/bin
>> >> > and I think the hadoop job is looking for files in the current
>> >> > folder.
>> >> >
>> >> > Regards,
>> >> > Shahab
>> >> >
>> >> >
>> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> Here are what I have.
>> >> >>
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls
>> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> >> >> src
>> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
>> >> >> webapps
>> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> >> >> hadoop-test-1.1.2.jar ivy libexec share
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
>> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> >> >> mapred-site.xml
>> >> >>
>> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
>> >> >> <sh...@gmail.com>
>> >> >> wrote:
>> >> >> > Basically whether this step worked or not:
>> >> >> >
>> >> >> > $ cp conf/*.xml input
>> >> >> >
>> >> >> > Regards,
>> >> >> > Shahab
>> >> >> >
>> >> >> >
>> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
>> >> >> > <sh...@gmail.com>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Have you verified that the 'input' folder exists on the hdfs
>> >> >> >> (single node
>> >> >> >> setup) that your job needs?
>> >> >> >>
>> >> >> >> Regards,
>> >> >> >> Shahab
>> >> >> >>
>> >> >> >>
>> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>> Hi,
>> >> >> >>>
>> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >> >> >>>
>> >> >> >>> I followed the above instructions. But I get the following
>> >> >> >>> errors.
>> >> >> >>> Does anybody know what is wrong? Thanks.
>> >> >> >>>
>> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >> >> >>> Warning: $HADOOP_HOME is deprecated.
>> >> >> >>>
>> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> >> >> >>> native-hadoop library for your platform... using builtin-java
>> >> >> >>> classes
>> >> >> >>> where applicable
>> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library
>> >> >> >>> not
>> >> >> >>> loaded
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths
>> >> >> >>> to
>> >> >> >>> process : 2
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
>> >> >> >>> area
>> >> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not
>> >> >> >>> a
>> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> java.io.IOException: Not a file:
>> >> >> >>> hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >> >> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >> >>>
>> >> >> >>> --
>> >> >> >>> Regards,
>> >> >> >>> Peng
>> >> >> >>
>> >> >> >>
>> >> >> >
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Regards,
>> >> >> Peng
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Peng
>> >
>> >
>>
>>
>>
>> --
>> Regards,
>> Peng
>
>
--
Regards,
Peng
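A side note on the `put: Target input/conf is a directory` exchange above: `hadoop fs -put` follows the same convention as `cp -r`. Copying a directory into an existing directory nests it under that directory, while copying the files themselves keeps them at the top level. A local-filesystem sketch of the same behaviour (the `demo` paths are made up for illustration):

```shell
# Simulate the two copy styles on the local filesystem
# (hypothetical demo/ paths, for illustration only).
rm -rf demo && mkdir -p demo/conf demo/input
touch demo/conf/core-site.xml demo/conf/hdfs-site.xml

# Copying the directory itself nests it under input/ ...
cp -r demo/conf demo/input/
ls demo/input                      # conf

# ... while copying the contents puts the files at the top level.
rm -rf demo/input && mkdir demo/input
cp demo/conf/*.xml demo/input/
ls demo/input                      # core-site.xml  hdfs-site.xml
```

The r1.1.2 grep example expects the second layout: in Hadoop 1.x, FileInputFormat does not recurse into subdirectories, which is exactly why the job failed earlier with "Not a file: hdfs://localhost:9000/user/py/input/conf".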
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
I just started learning hadoop. And I followed
http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
DataNode mentioned in this document? Do you have a list of working
step by step instructions so that I run hadoop without anything
previously installed? Thanks.
On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Is your DataNode running?
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> Here is what I got. Is there anything wrong?
>>
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
>> bad datanode[0] nodes == null
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
>> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
>> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
>> could only be replicated to 0 nodes, instead of 1
>> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
>> /input/conf/capacity-scheduler.xml
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> > No. This means that you are trying to copy an entire directory instead
>> > of a
>> > file. Do this :
>> > bin/hadoop fs -put conf/ /input/
>> >
>> > Warm Regards,
>> > Tariq
>> > cloudfront.blogspot.com
>> >
>> >
>> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
>> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
>> >> put: Target input/conf is a directory
>> >>
>> >> I get the above output. Is it the correct output? Thanks.
>> >>
>> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
>> >> wrote:
>> >> > It is looking for a file within your login folder
>> >> > /user/py/input/conf
>> >> >
>> >> > You are running your job form
>> >> > hadoop/bin
>> >> > and I think the hadoop job will is looking for files in the current
>> >> > folder.
>> >> >
>> >> > Regards,
>> >> > Shahab
>> >> >
>> >> >
>> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> Here are what I have.
>> >> >>
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls
>> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> >> >> src
>> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
>> >> >> webapps
>> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> >> >> hadoop-test-1.1.2.jar ivy libexec share
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
>> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> >> >> mapred-site.xml
>> >> >>
>> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
>> >> >> <sh...@gmail.com>
>> >> >> wrote:
>> >> >> > Basically whether this step worked or not:
>> >> >> >
>> >> >> > $ cp conf/*.xml input
>> >> >> >
>> >> >> > Regards,
>> >> >> > Shahab
>> >> >> >
>> >> >> >
>> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
>> >> >> > <sh...@gmail.com>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Have you verified that the 'input' folder exists on the hdfs
>> >> >> >> (singel
>> >> >> >> node
>> >> >> >> setup) that you are job needs?
>> >> >> >>
>> >> >> >> Regards,
>> >> >> >> Shahab
>> >> >> >>
>> >> >> >>
>> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>> Hi,
>> >> >> >>>
>> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >> >> >>>
>> >> >> >>> I followed the above instructions. But I get the following
>> >> >> >>> errors.
>> >> >> >>> Does anybody know what is wrong? Thanks.
>> >> >> >>>
>> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >> >> >>> Warning: $HADOOP_HOME is deprecated.
>> >> >> >>>
>> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> >> >> >>> native-hadoop library for your platform... using builtin-java
>> >> >> >>> classes
>> >> >> >>> where applicable
>> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library
>> >> >> >>> not
>> >> >> >>> loaded
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths
>> >> >> >>> to
>> >> >> >>> process : 2
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
>> >> >> >>> area
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not
>> >> >> >>> a
>> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> java.io.IOException: Not a file:
>> >> >> >>> hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> at
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >> >> >>> at
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >> >> >>> at
>> >> >> >>>
>> >> >> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >> >> >>> at
>> >> >> >>>
>> >> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >> >> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >> >>>
>> >> >> >>> --
>> >> >> >>> Regards,
>> >> >> >>> Peng
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
I just started learning Hadoop, and I followed
http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html. Is
DataNode mentioned in this document? Do you have a working set of
step-by-step instructions so that I can run Hadoop on a machine with
nothing previously installed? Thanks.
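For reference, the steps from the r1.1.2 guide can be collected into a single script. This is only a sketch under the guide's assumptions (stock tarball layout with bin/hadoop, conf/*-site.xml already edited for pseudo-distributed mode, passphraseless ssh configured); the script is written out here for review rather than executed:

```shell
# Sketch of the r1.1.2 pseudo-distributed walkthrough as one script.
# The commands follow the linked guide; nothing is run against a
# cluster here -- the file is only generated so the order is explicit.
cat > quickstart.sh <<'EOF'
#!/bin/sh
set -e
bin/hadoop namenode -format        # one-time NameNode format
bin/start-all.sh                   # NameNode, DataNode, JobTracker, TaskTracker
jps                                # all daemons should appear before continuing
bin/hadoop fs -put conf input      # stage the config files as job input
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
bin/hadoop fs -cat 'output/part-*'
EOF
chmod +x quickstart.sh
echo "wrote quickstart.sh"
```

The ordering matters: the -format and start-all.sh steps must succeed (and jps must show a DataNode) before the -put can work at all.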
On Thu, Jun 27, 2013 at 1:00 PM, Mohammad Tariq <do...@gmail.com> wrote:
> Is your DataNode running?
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> Here is what I got. Is there anything wrong?
>>
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
>> bad datanode[0] nodes == null
>> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
>> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
>> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
>> could only be replicated to 0 nodes, instead of 1
>> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
>> /input/conf/capacity-scheduler.xml
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /input/conf/capacity-scheduler.xml could only be replicated to 0
>> nodes, instead of 1
>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>>
>> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
>> wrote:
>> > No. This means that you are trying to copy an entire directory
>> > instead of a file. Do this:
>> > bin/hadoop fs -put conf/ /input/
>> >
>> > Warm Regards,
>> > Tariq
>> > cloudfront.blogspot.com
>> >
>> >
>> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
>> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
>> >> put: Target input/conf is a directory
>> >>
>> >> I get the above output. Is it the correct output? Thanks.
>> >>
>> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
>> >> wrote:
>> >> > It is looking for a file within your login folder
>> >> > /user/py/input/conf
>> >> >
>> >> > You are running your job from
>> >> > hadoop/bin
>> >> > and I think the hadoop job is looking for files in the current
>> >> > folder.
>> >> >
>> >> > Regards,
>> >> > Shahab
>> >> >
>> >> >
>> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> Here is what I have.
>> >> >>
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls
>> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> >> >> src
>> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
>> >> >> webapps
>> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> >> >> hadoop-test-1.1.2.jar ivy libexec share
>> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
>> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> >> >> mapred-site.xml
>> >> >>
>> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus
>> >> >> <sh...@gmail.com>
>> >> >> wrote:
>> >> >> > Basically whether this step worked or not:
>> >> >> >
>> >> >> > $ cp conf/*.xml input
>> >> >> >
>> >> >> > Regards,
>> >> >> > Shahab
>> >> >> >
>> >> >> >
>> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
>> >> >> > <sh...@gmail.com>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Have you verified that the 'input' folder exists on the hdfs
>> >> >> >> (single node setup) that your job needs?
>> >> >> >>
>> >> >> >> Regards,
>> >> >> >> Shahab
>> >> >> >>
>> >> >> >>
>> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>> Hi,
>> >> >> >>>
>> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >> >> >>>
>> >> >> >>> I followed the above instructions. But I get the following
>> >> >> >>> errors.
>> >> >> >>> Does anybody know what is wrong? Thanks.
>> >> >> >>>
>> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >> >> >>> Warning: $HADOOP_HOME is deprecated.
>> >> >> >>>
>> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> >> >> >>> native-hadoop library for your platform... using builtin-java
>> >> >> >>> classes
>> >> >> >>> where applicable
>> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
>> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>> >> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>> >> >> >>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >> >> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >> >>>
>> >> >> >>> --
>> >> >> >>> Regards,
>> >> >> >>> Peng
--
Regards,
Peng
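For what it's worth, the "Not a file: hdfs://localhost:9000/user/py/input/conf" error earlier in the thread arises because Hadoop 1.x FileInputFormat requires every direct child of the input path to be a regular file, and `bin/hadoop fs -put conf input` had created a nested input/conf directory. A rough local illustration of that failure mode follows; the demo paths are hypothetical, and the HDFS commands in the comments are the assumed fix, not executed here:

```shell
# Local illustration (hypothetical demo-input/ tree): FileInputFormat
# lists the input directory and aborts in getSplits() when an entry is
# itself a directory. On HDFS the assumed fix would be:
#   bin/hadoop fs -rmr input/conf
#   bin/hadoop fs -put conf/*.xml input
mkdir -p demo-input/conf
touch demo-input/core-site.xml demo-input/conf/extra.xml
for p in demo-input/*; do
  if [ -d "$p" ]; then
    echo "would fail: $p is a directory, not a file"
  else
    echo "ok: $p"
  fi
done
rm -r demo-input
```

The same check explains why copying conf/*.xml (files only) into input works while copying the whole conf directory does not.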
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Is your DataNode running?
Warm Regards,
Tariq
cloudfront.blogspot.com
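One quick way to answer that question is to look for a DataNode entry in the jps output. A minimal sketch, assuming a JDK's jps on the PATH (the log path in the comment is the usual 1.x default and may differ on your install):

```shell
# Hedged check: is the DataNode daemon up? "jps" ships with the JDK;
# the guard keeps this runnable even where it is absent, in which case
# the check simply reports the daemon as not found.
if command -v jps >/dev/null 2>&1; then
  RUNNING=$(jps | awk '{print $2}')
else
  RUNNING=""
fi
case "$RUNNING" in
  *DataNode*) echo "DataNode is up" ;;
  *)          echo "DataNode not running; see logs/hadoop-*-datanode-*.log" ;;
esac
```

If the DataNode is missing, its log usually names the cause (a common one on single-node setups is a dfs.data.dir left over from an earlier namenode -format with a mismatched namespace ID).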
On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> Here is what I got. Is there anything wrong?
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /input/conf/capacity-scheduler.xml could only be replicated to 0
> nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>
> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
> bad datanode[0] nodes == null
> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
> could only be replicated to 0 nodes, instead of 1
> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
> /input/conf/capacity-scheduler.xml
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /input/conf/capacity-scheduler.xml could only be replicated to 0
> nodes, instead of 1
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>
> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> > No. This means that you are trying to copy an entire directory
> > instead of a file. Do this:
> > bin/hadoop fs -put conf/ /input/
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> put: Target input/conf is a directory
> >>
> >> I get the above output. Is it the correct output? Thanks.
> >>
> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
> >> wrote:
> >> > It is looking for a file within your login folder
> >> > /user/py/input/conf
> >> >
> >> > You are running your job from
> >> > hadoop/bin
> >> > and I think the hadoop job is looking for files in the current
> >> > folder.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> Here are what I have.
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> >> src
> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> >> webapps
> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> hadoop-test-1.1.2.jar ivy libexec share
> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> mapred-site.xml
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > Basically whether this step worked or not:
> >> >> >
> >> >> > $ cp conf/*.xml input
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> > <sh...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Have you verified that the 'input' folder exists on HDFS
> >> >> >> (single node setup) that your job needs?
> >> >> >>
> >> >> >> Regards,
> >> >> >> Shahab
> >> >> >>
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
> >> >> >> wrote:
> >> >> >>>
> >> >> >>> Hi,
> >> >> >>>
> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >>>
> >> >> >>> I followed the above instructions. But I get the following
> errors.
> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >>>
> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >>>
> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >>> classes
> >> >> >>> where applicable
> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library
> not
> >> >> >>> loaded
> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths
> to
> >> >> >>> process : 2
> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
> >> >> >>> area
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >>> java.io.IOException: Not a file:
> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >>> Method)
> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >>>
> >> >> >>> --
> >> >> >>> Regards,
> >> >> >>> Peng
> >> >> >>
> >> >> >>
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
Is your DataNode running?
Warm Regards,
Tariq
cloudfront.blogspot.com
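A quick way to check is to look for a DataNode process with the JDK's `jps` tool. The sketch below filters a captured sample of `jps` output so the check itself is reproducible; on a live single-node box you would pipe `jps` directly, and the PIDs and process list shown here are made up for illustration.

```shell
# Sample of what `jps` prints on a healthy Hadoop 1.x single-node setup.
# On a real machine, replace this variable with: jps_output="$(jps)"
jps_output='12001 NameNode
12002 DataNode
12003 SecondaryNameNode
12004 JobTracker
12005 TaskTracker'

# Grep the process list for a DataNode entry.
if printf '%s\n' "$jps_output" | grep -q 'DataNode'; then
  echo "DataNode is running"
else
  echo "DataNode is NOT running - check the datanode log under logs/"
fi
```

If the DataNode is missing, its log (typically logs/hadoop-&lt;user&gt;-datanode-&lt;host&gt;.log) usually names the cause; a namespace-ID mismatch after reformatting the NameNode is a common one, and "could only be replicated to 0 nodes" is the classic symptom of no live DataNode.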
On Thu, Jun 27, 2013 at 11:24 PM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> Here is what I got. Is there anything wrong?
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
> 13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /input/conf/capacity-scheduler.xml could only be replicated to 0
> nodes, instead of 1
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>
> 13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
> bad datanode[0] nodes == null
> 13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/input/conf/capacity-scheduler.xml" - Aborting...
> put: java.io.IOException: File /input/conf/capacity-scheduler.xml
> could only be replicated to 0 nodes, instead of 1
> 13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
> /input/conf/capacity-scheduler.xml
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /input/conf/capacity-scheduler.xml could only be replicated to 0
> nodes, instead of 1
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1107)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>
> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com>
> wrote:
> > No. This means that you are trying to copy an entire directory instead
> > of a file. Do this:
> > bin/hadoop fs -put conf/ /input/
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> put: Target input/conf is a directory
> >>
> >> I get the above output. Is it the correct output? Thanks.
> >>
> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
> >> wrote:
> >> > It is looking for a file within your login folder
> >> > /user/py/input/conf
> >> >
> >> > You are running your job from
> >> > hadoop/bin
> >> > and I think the hadoop job is looking for files in the current
> >> > folder.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> Here are what I have.
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> >> src
> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> >> webapps
> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> hadoop-test-1.1.2.jar ivy libexec share
> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> mapred-site.xml
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> >> wrote:
> >> >> > Basically whether this step worked or not:
> >> >> >
> >> >> > $ cp conf/*.xml input
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> > <sh...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Have you verified that the 'input' folder exists on HDFS
> >> >> >> (single node setup) that your job needs?
> >> >> >>
> >> >> >> Regards,
> >> >> >> Shahab
> >> >> >>
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
> >> >> >> wrote:
> >> >> >>>
> >> >> >>> Hi,
> >> >> >>>
> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >>>
> >> >> >>> I followed the above instructions. But I get the following
> errors.
> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >>>
> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >>>
> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >>> classes
> >> >> >>> where applicable
> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library
> not
> >> >> >>> loaded
> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths
> to
> >> >> >>> process : 2
> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
> >> >> >>> area
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >>> java.io.IOException: Not a file:
> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >>> Method)
> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >>>
> >> >> >>> --
> >> >> >>> Regards,
> >> >> >>> Peng
> >> >> >>
> >> >> >>
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
> at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
>
> On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com> wrote:
> > No. This means that you are trying to copy an entire directory instead of a
> > file. Do this :
> > bin/hadoop fs -put conf/ /input/
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> >> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> >> put: Target input/conf is a directory
> >>
> >> I get the above output. Is it the correct output? Thanks.
> >>
> >> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
> >> wrote:
> >> > It is looking for a file within your login folder
> >> > /user/py/input/conf
> >> >
> >> > You are running your job from
> >> > hadoop/bin
> >> > and I think the hadoop job is looking for files in the current
> >> > folder.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> Here are what I have.
> >> >>
> >> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> >> src
> >> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> >> webapps
> >> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> >> hadoop-test-1.1.2.jar ivy libexec share
> >> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> >> mapred-site.xml
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
> >> >> wrote:
> >> >> > Basically whether this step worked or not:
> >> >> >
> >> >> > $ cp conf/*.xml input
> >> >> >
> >> >> > Regards,
> >> >> > Shahab
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
> >> >> > <sh...@gmail.com>
> >> >> > wrote:
> >> >> >>
> >> >> Have you verified that the 'input' folder exists on the hdfs (single
> >> >> node
> >> >> setup) that your job needs?
> >> >> >>
> >> >> >> Regards,
> >> >> >> Shahab
> >> >> >>
> >> >> >>
> >> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
> >> >> >> wrote:
> >> >> >>>
> >> >> >>> Hi,
> >> >> >>>
> >> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >> >>>
> >> >> >>> I followed the above instructions. But I get the following
> errors.
> >> >> >>> Does anybody know what is wrong? Thanks.
> >> >> >>>
> >> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >> >>>
> >> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >> >>> native-hadoop library for your platform... using builtin-java
> >> >> >>> classes
> >> >> >>> where applicable
> >> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library
> not
> >> >> >>> loaded
> >> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths
> to
> >> >> >>> process : 2
> >> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
> >> >> >>> area
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >> >>> java.io.IOException: Not a file:
> >> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >> >>> at java.security.AccessController.doPrivileged(Native
> >> >> >>> Method)
> >> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >> >>> at
> >> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >> >>> at
> >> >> >>>
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> >> >>> Method)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >> >>> at
> >> >> >>>
> >> >> >>>
> >> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >> >>>
> >> >> >>> --
> >> >> >>> Regards,
> >> >> >>> Peng
> >> >> >>
> >> >> >>
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Peng
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
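For the earlier "Not a file: hdfs://localhost:9000/user/py/input/conf" error, a hedged sketch of a fix (assuming the r1.1.2 single-node defaults): `-put conf input` created `input/conf` as a subdirectory, and the 1.x FileInputFormat does not recurse into subdirectories, so copy the files themselves:

```shell
# Sketch: put the conf *files* into input/, not the conf directory itself,
# so the 1.x FileInputFormat (which does not recurse) sees only plain files.
bin/hadoop fs -rmr input              # remove the previous attempt
bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input   # glob expands locally; copies each file
bin/hadoop fs -ls input               # should list only .xml files now
```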
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi,
Here is what I got. Is there anything wrong?
~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf/ /input/
13/06/27 12:53:39 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/input/conf/capacity-scheduler.xml could only be replicated to 0
nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
bad datanode[0] nodes == null
13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
Source file "/input/conf/capacity-scheduler.xml" - Aborting...
put: java.io.IOException: File /input/conf/capacity-scheduler.xml
could only be replicated to 0 nodes, instead of 1
13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
/input/conf/capacity-scheduler.xml
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/input/conf/capacity-scheduler.xml could only be replicated to 0
nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com> wrote:
> No. This means that you are trying to copy an entire directory instead of a
> file. Do this :
> bin/hadoop fs -put conf/ /input/
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
>> put: Target input/conf is a directory
>>
>> I get the above output. Is it the correct output? Thanks.
>>
>> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
>> wrote:
>> > It is looking for a file within your login folder
>> > /user/py/input/conf
>> >
>> > You are running your job from
>> > hadoop/bin
>> > and I think the hadoop job is looking for files in the current
>> > folder.
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> Here are what I have.
>> >>
>> >> ~/Downloads/hadoop-install/hadoop$ ls
>> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> >> src
>> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> >> hadoop-minicluster-1.1.2.jar input lib sbin
>> >> webapps
>> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> >> hadoop-test-1.1.2.jar ivy libexec share
>> >> ~/Downloads/hadoop-install/hadoop$ ls input/
>> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> >> mapred-site.xml
>> >>
>> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
>> >> wrote:
>> >> > Basically whether this step worked or not:
>> >> >
>> >> > $ cp conf/*.xml input
>> >> >
>> >> > Regards,
>> >> > Shahab
>> >> >
>> >> >
>> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
>> >> > <sh...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Have you verified that the 'input' folder exists on the hdfs (single
>> >> >> node
>> >> >> setup) that your job needs?
>> >> >>
>> >> >> Regards,
>> >> >> Shahab
>> >> >>
>> >> >>
>> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi,
>> >> >>>
>> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >> >>>
>> >> >>> I followed the above instructions. But I get the following errors.
>> >> >>> Does anybody know what is wrong? Thanks.
>> >> >>>
>> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >> >>> Warning: $HADOOP_HOME is deprecated.
>> >> >>>
>> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> >> >>> native-hadoop library for your platform... using builtin-java
>> >> >>> classes
>> >> >>> where applicable
>> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
>> >> >>> loaded
>> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>> >> >>> process : 2
>> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
>> >> >>> area
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>> >> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >> >>> java.io.IOException: Not a file:
>> >> >>> hdfs://localhost:9000/user/py/input/conf
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >> >>> at
>> >> >>>
>> >> >>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >> >>> at java.security.AccessController.doPrivileged(Native
>> >> >>> Method)
>> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >> >>> at
>> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >> >>> at
>> >> >>> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> >> >>> Method)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >> >>> at
>> >> >>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >> >>> at
>> >> >>>
>> >> >>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> >> >>> Method)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >>> at
>> >> >>>
>> >> >>>
>> >> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >>>
>> >> >>> --
>> >> >>> Regards,
>> >> >>> Peng
>> >> >>
>> >> >>
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Peng
>> >
>> >
>>
>>
>>
>> --
>> Regards,
>> Peng
>
>
--
Regards,
Peng
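The "could only be replicated to 0 nodes, instead of 1" error above means the NameNode sees no live DataNode. A hedged recovery sketch for a throwaway single-node install (the data-directory path below is the 1.x default and is an assumption — this setup appears to override it, so adjust accordingly; reformatting destroys any data already in HDFS):

```shell
# Sketch: recover a single-node HDFS whose DataNode never registered.
# WARNING: reformatting wipes everything already stored in HDFS.

bin/stop-all.sh

# A DataNode log line "Incompatible namespaceIDs" means its storage dir
# is stale (typically after re-running "hadoop namenode -format"). On a
# fresh install the simplest fix is to clear it and reformat:
rm -rf /tmp/hadoop-$USER/dfs/data   # default dfs.data.dir; adjust if overridden
bin/hadoop namenode -format

bin/start-all.sh
jps   # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
```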
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
13/06/27 12:53:39 WARN hdfs.DFSClient: Error Recovery for block null
bad datanode[0] nodes == null
13/06/27 12:53:39 WARN hdfs.DFSClient: Could not get block locations.
Source file "/input/conf/capacity-scheduler.xml" - Aborting...
put: java.io.IOException: File /input/conf/capacity-scheduler.xml
could only be replicated to 0 nodes, instead of 1
13/06/27 12:53:39 ERROR hdfs.DFSClient: Failed to close file
/input/conf/capacity-scheduler.xml
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/input/conf/capacity-scheduler.xml could only be replicated to 0
nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
On Thu, Jun 27, 2013 at 12:40 PM, Mohammad Tariq <do...@gmail.com> wrote:
> No. This means that you are trying to copy an entire directory instead of a
> file. Do this :
> bin/hadoop fs -put conf/ /input/
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
>> put: Target input/conf is a directory
>>
>> I get the above output. Is it the correct output? Thanks.
>>
>> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
>> wrote:
>> > It is looking for a file within your login folder
>> > /user/py/input/conf
>> >
>> > You are running your job from
>> > hadoop/bin
>> > and I think the hadoop job is looking for files in the current
>> > folder.
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> Here are what I have.
>> >>
>> >> ~/Downloads/hadoop-install/hadoop$ ls
>> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> >> src
>> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> >> hadoop-minicluster-1.1.2.jar input lib sbin
>> >> webapps
>> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> >> hadoop-test-1.1.2.jar ivy libexec share
>> >> ~/Downloads/hadoop-install/hadoop$ ls input/
>> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> >> mapred-site.xml
>> >>
>> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
>> >> wrote:
>> >> > Basically whether this step worked or not:
>> >> >
>> >> > $ cp conf/*.xml input
>> >> >
>> >> > Regards,
>> >> > Shahab
>> >> >
>> >> >
>> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus
>> >> > <sh...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Have you verified that the 'input' folder exists on the hdfs (single
>> >> >> node setup) that your job needs?
>> >> >>
>> >> >> Regards,
>> >> >> Shahab
>> >> >>
>> >> >>
>> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi,
>> >> >>>
>> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >> >>>
>> >> >>> I followed the above instructions. But I get the following errors.
>> >> >>> Does anybody know what is wrong? Thanks.
>> >> >>>
>> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >> >>> Warning: $HADOOP_HOME is deprecated.
>> >> >>>
>> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
>> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
>> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>> >> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >> >>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>> >> >>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >> >>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >> >>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >> >>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> >>>
>> >> >>> --
>> >> >>> Regards,
>> >> >>> Peng
>> >> >>
>> >> >>
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Peng
>> >
>> >
>>
>>
>>
>> --
>> Regards,
>> Peng
>
>
--
Regards,
Peng
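The "could only be replicated to 0 nodes, instead of 1" error in the post above almost always means the NameNode has no live DataNodes to place the block on. Some quick diagnostic checks for a single-node setup — a sketch that assumes a running Hadoop 1.x install and its default log file naming:

```shell
# Is a DataNode JVM running at all?
jps | grep DataNode

# Ask the NameNode directly; "Datanodes available: 0" confirms
# the diagnosis.
bin/hadoop dfsadmin -report

# If the DataNode exited, its log usually says why. A common cause
# after re-running 'hadoop namenode -format' is a namespaceID mismatch
# between the NameNode and the DataNode storage directory.
tail -n 50 logs/hadoop-*-datanode-*.log
```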
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
No. This means that you are trying to copy an entire directory instead of a
file. Do this :
bin/hadoop fs -put conf/ /input/
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> put: Target input/conf is a directory
>
> I get the above output. Is it the correct output? Thanks.
>
> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > It is looking for a file within your login folder
> > /user/py/input/conf
> >
> > You are running your job from
> > hadoop/bin
> > and I think the hadoop job is looking for files in the current
> folder.
> >
> > Regards,
> > Shahab
> >
> >
> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Here are what I have.
> >>
> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> src
> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> webapps
> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> hadoop-test-1.1.2.jar ivy libexec share
> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> mapred-site.xml
> >>
> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
> >> wrote:
> >> > Basically whether this step worked or not:
> >> >
> >> > $ cp conf/*.xml input
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> > wrote:
> >> >>
> >> >> Have you verified that the 'input' folder exists on the hdfs (single
> >> >> node setup) that your job needs?
> >> >>
> >> >> Regards,
> >> >> Shahab
> >> >>
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>>
> >> >>> Hi,
> >> >>>
> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >>>
> >> >>> I followed the above instructions. But I get the following errors.
> >> >>> Does anybody know what is wrong? Thanks.
> >> >>>
> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >>>
> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> >> >>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >>> at java.security.AccessController.doPrivileged(Native Method)
> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >>>
> >> >>> --
> >> >>> Regards,
> >> >>> Peng
> >> >>
> >> >>
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
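For context, the job at the center of this thread (`grep input output 'dfs[a-z.]+'`) is just a distributed regex-match counter. What it computes can be approximated locally with standard tools — an illustrative sketch, not the Hadoop example's actual code:

```shell
# Count each match of the regex across the input, most frequent first,
# mirroring what the grep example writes to its output directory.
printf '%s\n' \
  "dfs.replication is set in hdfs-site.xml" \
  "dfs.name.dir and dfs.data.dir control where HDFS stores data" \
| grep -oE 'dfs[a-z.]+' \
| sort | uniq -c | sort -rn
```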
Re: Can not follow Single Node Setup example.
Posted by Mohammad Tariq <do...@gmail.com>.
No. This means that you are trying to copy an entire directory instead of a
file. Do this:
bin/hadoop fs -put conf/ /input/
Warm Regards,
Tariq
cloudfront.blogspot.com
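Spelled out as a sketch (assuming Hadoop 1.x `fs` shell syntax and that the job expects the XML files at the top level of `input`, as in the Single Node Setup guide):

```shell
# The grep example reads plain files under input/; a nested input/conf
# directory triggers the "Not a file" IOException seen below.
bin/hadoop fs -rmr input              # remove the stale input dir in HDFS (1.x syntax)
bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input   # the local shell expands the glob, so only files are uploaded
bin/hadoop fs -ls input               # verify: every entry should be a file, not a directory
```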
On Thu, Jun 27, 2013 at 10:37 PM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> ~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
> put: Target input/conf is a directory
>
> I get the above output. Is it the correct output? Thanks.
>
> On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > It is looking for a file within your login folder
> > /user/py/input/conf
> >
> > You are running your job from
> > hadoop/bin
> > and I think the hadoop job is looking for files in the current
> folder.
> >
> > Regards,
> > Shahab
> >
> >
> > On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> Here are what I have.
> >>
> >> ~/Downloads/hadoop-install/hadoop$ ls
> >> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> >> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> >> src
> >> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> >> hadoop-minicluster-1.1.2.jar input lib sbin
> >> webapps
> >> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> >> hadoop-test-1.1.2.jar ivy libexec share
> >> ~/Downloads/hadoop-install/hadoop$ ls input/
> >> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> >> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> >> mapred-site.xml
> >>
> >> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
> >> wrote:
> >> > Basically whether this step worked or not:
> >> >
> >> > $ cp conf/*.xml input
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <
> shahab.yunus@gmail.com>
> >> > wrote:
> >> >>
> >> >> Have you verified that the 'input' folder exists on the hdfs (single
> >> >> node
> >> >> setup) that your job needs?
> >> >>
> >> >> Regards,
> >> >> Shahab
> >> >>
> >> >>
> >> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com>
> wrote:
> >> >>>
> >> >>> Hi,
> >> >>>
> >> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >> >>>
> >> >>> I followed the above instructions. But I get the following errors.
> >> >>> Does anybody know what is wrong? Thanks.
> >> >>>
> >> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >> >>> Warning: $HADOOP_HOME is deprecated.
> >> >>>
> >> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >> >>> native-hadoop library for your platform... using builtin-java
> classes
> >> >>> where applicable
> >> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
> >> >>> loaded
> >> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> >> >>> process : 2
> >> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging
> area
> >> >>>
> >> >>>
> >> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >> >>> file: hdfs://localhost:9000/user/py/input/conf
> >> >>> java.io.IOException: Not a file:
> >> >>> hdfs://localhost:9000/user/py/input/conf
> >> >>> at
> >> >>>
> >> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >> >>> at
> >> >>>
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >> >>> at java.security.AccessController.doPrivileged(Native
> Method)
> >> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >>> at
> >> >>>
> >> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >> >>> at
> >> >>>
> >> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >> >>> at
> >> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> >> >>> at
> >> >>>
> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >>> at
> >> >>>
> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >>> at
> >> >>>
> >> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >> >>> at
> >> >>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >> >>> at
> >> >>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> >> >>> at
> >> >>>
> >> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >> >>> at
> >> >>>
> >> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> >>>
> >> >>> --
> >> >>> Regards,
> >> >>> Peng
> >> >>
> >> >>
> >> >
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Peng
> >
> >
>
>
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi,
~/Downloads/hadoop-install/hadoop$ rm -rf ~/input/conf/
~/Downloads/hadoop-install/hadoop$ bin/hadoop fs -put conf input
put: Target input/conf is a directory
I get the above output. Is it the correct output? Thanks.
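A likely cause, noted here as an aside (an inference from standard Hadoop 1.x `fs` shell behavior, not stated in the thread): `rm -rf ~/input/conf/` removed a *local* directory, while the conflicting `input/conf` lives in HDFS, so the second `-put` still collides with it:

```shell
# The leftover 'conf' entry is in HDFS, not on the local filesystem:
bin/hadoop fs -ls input          # shows input/conf as a directory in HDFS
bin/hadoop fs -rmr input/conf    # recursive delete of the HDFS copy (1.x syntax)
```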
On Wed, Jun 26, 2013 at 10:51 AM, Shahab Yunus <sh...@gmail.com> wrote:
> It is looking for a file within your login folder
> /user/py/input/conf
>
> You are running your job from
> hadoop/bin
> and I think the hadoop job is looking for files in the current folder.
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
>>
>> Hi,
>>
>> Here are what I have.
>>
>> ~/Downloads/hadoop-install/hadoop$ ls
>> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
>> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
>> src
>> LICENSE.txt bin conf hadoop-client-1.1.2.jar
>> hadoop-minicluster-1.1.2.jar input lib sbin
>> webapps
>> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
>> hadoop-test-1.1.2.jar ivy libexec share
>> ~/Downloads/hadoop-install/hadoop$ ls input/
>> capacity-scheduler.xml core-site.xml fair-scheduler.xml
>> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
>> mapred-site.xml
>>
>> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
>> wrote:
>> > Basically whether this step worked or not:
>> >
>> > $ cp conf/*.xml input
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com>
>> > wrote:
>> >>
>> >> Have you verified that the 'input' folder exists on the hdfs (single
>> >> node
>> >> setup) that your job needs?
>> >>
>> >> Regards,
>> >> Shahab
>> >>
>> >>
>> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>> >>>
>> >>> I followed the above instructions. But I get the following errors.
>> >>> Does anybody know what is wrong? Thanks.
>> >>>
>> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> >>> Warning: $HADOOP_HOME is deprecated.
>> >>>
>> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> >>> native-hadoop library for your platform... using builtin-java classes
>> >>> where applicable
>> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
>> >>> loaded
>> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>> >>> process : 2
>> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>> >>>
>> >>>
>> >>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>> >>> file: hdfs://localhost:9000/user/py/input/conf
>> >>> java.io.IOException: Not a file:
>> >>> hdfs://localhost:9000/user/py/input/conf
>> >>> at
>> >>>
>> >>> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>> at
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> >>> at
>> >>>
>> >>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> >>> at
>> >>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> at
>> >>>
>> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>> at
>> >>>
>> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >>> at
>> >>>
>> >>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >>> at
>> >>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >>> at
>> >>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> at
>> >>>
>> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>> at
>> >>>
>> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >>>
>> >>> --
>> >>> Regards,
>> >>> Peng
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Regards,
>> Peng
>
>
--
Regards,
Peng
>> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> at
>> >>>
>> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>> at
>> >>>
>> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >>> at
>> >>>
>> >>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >>> at
>> >>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >>> at
>> >>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>> at
>> >>>
>> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>> at
>> >>>
>> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>> at java.lang.reflect.Method.invoke(Method.java:597)
>> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >>>
>> >>> --
>> >>> Regards,
>> >>> Peng
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Regards,
>> Peng
>
>
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
It is looking for a file within your login folder
/user/*py*/input/conf
You are running your job from
hadoop/bin
and I think the hadoop job is looking for files in the current folder.
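The failing check can be sketched in plain shell (an illustration only, not Hadoop source — the /tmp/grep-demo paths are made up): Hadoop 1.x FileInputFormat lists the input path one level deep, without recursing, and raises IOException for any entry that is a directory.

```shell
# Hedged local sketch of FileInputFormat.getSplits(): list the input path
# one level deep and reject any directory entry, mirroring the
# java.io.IOException("Not a file: ...") seen in the job output.
mkdir -p /tmp/grep-demo/input/conf
touch /tmp/grep-demo/input/core-site.xml
err=""
for entry in /tmp/grep-demo/input/*; do
  [ -d "$entry" ] && err="Not a file: $entry"   # directory entry -> error
done
echo "$err"
```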
Regards,
Shahab
On Wed, Jun 26, 2013 at 11:02 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> Here is what I have.
>
> ~/Downloads/hadoop-install/hadoop$ ls
> CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
> hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
> src
> LICENSE.txt bin conf hadoop-client-1.1.2.jar
> hadoop-minicluster-1.1.2.jar input lib sbin
> webapps
> NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
> hadoop-test-1.1.2.jar ivy libexec share
> ~/Downloads/hadoop-install/hadoop$ ls input/
> capacity-scheduler.xml core-site.xml fair-scheduler.xml
> hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
> mapred-site.xml
>
> On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > Basically whether this step worked or not:
> >
> > $ cp conf/*.xml input
> >
> > Regards,
> > Shahab
> >
> >
> > On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com>
> > wrote:
> >>
> >> Have you verified that the 'input' folder exists on the hdfs (single
> node
> >> setup) that your job needs?
> >>
> >> Regards,
> >> Shahab
> >>
> >>
> >> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
> >>>
> >>> I followed the above instructions. But I get the following errors.
> >>> Does anybody know what is wrong? Thanks.
> >>>
> >>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> >>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> >>> Warning: $HADOOP_HOME is deprecated.
> >>>
> >>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> >>> native-hadoop library for your platform... using builtin-java classes
> >>> where applicable
> >>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
> >>> loaded
> >>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> >>> process : 2
> >>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
> >>>
> >>>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> >>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> >>> PriviledgedActionException as:py cause:java.io.IOException: Not a
> >>> file: hdfs://localhost:9000/user/py/input/conf
> >>> java.io.IOException: Not a file:
> hdfs://localhost:9000/user/py/input/conf
> >>> at
> >>>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> >>> at
> >>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> >>> at
> >>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> >>> at
> >>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> >>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> >>> at java.security.AccessController.doPrivileged(Native Method)
> >>> at javax.security.auth.Subject.doAs(Subject.java:396)
> >>> at
> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> >>> at
> >>>
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> >>> at
> >>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> >>> at
> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> >>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> >>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> at
> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>> at
> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >>> at
> >>>
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> >>> at
> >>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> >>> at
> >>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> at
> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>> at
> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>> at java.lang.reflect.Method.invoke(Method.java:597)
> >>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >>>
> >>> --
> >>> Regards,
> >>> Peng
> >>
> >>
> >
>
>
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi,
Here is what I have.
~/Downloads/hadoop-install/hadoop$ ls
CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
src
LICENSE.txt bin conf hadoop-client-1.1.2.jar
hadoop-minicluster-1.1.2.jar input lib sbin
webapps
NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
hadoop-test-1.1.2.jar ivy libexec share
~/Downloads/hadoop-install/hadoop$ ls input/
capacity-scheduler.xml core-site.xml fair-scheduler.xml
hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
mapred-site.xml
On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com> wrote:
> Basically whether this step worked or not:
>
> $ cp conf/*.xml input
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
>>
>> Have you verified that the 'input' folder exists on the hdfs (single node
>> setup) that your job needs?
>>
>> Regards,
>> Shahab
>>
>>
>> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>>>
>>> I followed the above instructions. But I get the following errors.
>>> Does anybody know what is wrong? Thanks.
>>>
>>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
>>> loaded
>>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>>> process : 2
>>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>>>
>>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>>> file: hdfs://localhost:9000/user/py/input/conf
>>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>>> at
>>> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>>> at
>>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>>> at
>>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>>> at
>>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at
>>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>> at
>>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>> at
>>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>
>>> --
>>> Regards,
>>> Peng
>>
>>
>
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi,
Here are what I have.
~/Downloads/hadoop-install/hadoop$ ls
CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
src
LICENSE.txt bin conf hadoop-client-1.1.2.jar
hadoop-minicluster-1.1.2.jar input lib sbin
webapps
NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
hadoop-test-1.1.2.jar ivy libexec share
~/Downloads/hadoop-install/hadoop$ ls input/
capacity-scheduler.xml core-site.xml fair-scheduler.xml
hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
mapred-site.xml
On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com> wrote:
> Basically whether this step worked or not:
>
> $ cp conf/*.xml input
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
>>
>> Have you verified that the 'input' folder exists on the hdfs (singel node
>> setup) that you are job needs?
>>
>> Regards,
>> Shahab
>>
>>
>> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>>>
>>> I followed the above instructions. But I get the following errors.
>>> Does anybody know what is wrong? Thanks.
>>>
>>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
>>> loaded
>>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>>> process : 2
>>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>>>
>>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>>> file: hdfs://localhost:9000/user/py/input/conf
>>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>>> at
>>> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>>> at
>>> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>>> at
>>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>>> at
>>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at
>>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>> at
>>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>> at
>>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>
>>> --
>>> Regards,
>>> Peng
>>
>>
>
--
Regards,
Peng
Re: Can not follow Single Node Setup example.
Posted by Peng Yu <pe...@gmail.com>.
Hi,
Here are what I have.
~/Downloads/hadoop-install/hadoop$ ls
CHANGES.txt README.txt c++ hadoop-ant-1.1.2.jar
hadoop-examples-1.1.2.jar hadoop-tools-1.1.2.jar ivy.xml logs
src
LICENSE.txt bin conf hadoop-client-1.1.2.jar
hadoop-minicluster-1.1.2.jar input lib sbin
webapps
NOTICE.txt build.xml contrib hadoop-core-1.1.2.jar
hadoop-test-1.1.2.jar ivy libexec share
~/Downloads/hadoop-install/hadoop$ ls input/
capacity-scheduler.xml core-site.xml fair-scheduler.xml
hadoop-policy.xml hdfs-site.xml mapred-queue-acls.xml
mapred-site.xml
On Wed, Jun 26, 2013 at 10:00 AM, Shahab Yunus <sh...@gmail.com> wrote:
> Basically whether this step worked or not:
>
> $ cp conf/*.xml input
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com>
> wrote:
>>
>> Have you verified that the 'input' folder your job needs exists on the
>> HDFS (single node setup)?
>>
>> Regards,
>> Shahab
>>
>>
>> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>>>
>>> I followed the above instructions. But I get the following errors.
>>> Does anybody know what is wrong? Thanks.
>>>
>>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not
>>> loaded
>>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>>> process : 2
>>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>>>
>>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>>> file: hdfs://localhost:9000/user/py/input/conf
>>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:396)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>
>>> --
>>> Regards,
>>> Peng
>>
>>
>
--
Regards,
Peng
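The listing above shows only the local filesystem, but the exception names the HDFS path hdfs://localhost:9000/user/py/input/conf, so the two can disagree. A sketch of how to compare them (the paths are assumed from the error message, not confirmed by the thread; requires the pseudo-distributed cluster to be running):

```shell
# The job reads its input from HDFS, not from the local ./input
# directory, so list what is actually under the HDFS input path.
bin/hadoop fs -ls /user/py/input

# The local copy, by contrast, is what 'ls input/' showed above.
ls input/

# Any entry reported by 'fs -ls' with a 'd' permission flag is a
# directory, which the grep example's input format cannot read.
```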
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
Basically whether this step worked or not:
$ cp conf/*.xml input
Regards,
Shahab
On Wed, Jun 26, 2013 at 10:58 AM, Shahab Yunus <sh...@gmail.com> wrote:
> Have you verified that the 'input' folder your job needs exists on the
> HDFS (single node setup)?
>
> Regards,
> Shahab
>
>
> On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
>
>> Hi,
>>
>> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>>
>> I followed the above instructions. But I get the following errors.
>> Does anybody know what is wrong? Thanks.
>>
>> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
>> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
>> process : 2
>> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>>
>> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
>> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:py cause:java.io.IOException: Not a
>> file: hdfs://localhost:9000/user/py/input/conf
>> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
>> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
>> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:396)
>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
>> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
>> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>
>> --
>> Regards,
>> Peng
>>
>
>
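In the r1.1.2 walkthrough, `cp conf/*.xml input` stages the files on the local filesystem, which only feeds the standalone (local) mode; in pseudo-distributed mode the job reads `/user/<name>/input` on HDFS, so the equivalent staging has to go through the `fs` shell. A sketch of both variants (the pseudo-distributed paths are assumptions based on the thread, not verified output):

```shell
# Standalone mode: input is a plain local directory next to conf/.
cp conf/*.xml input

# Pseudo-distributed mode: input lives in HDFS under /user/<name>,
# so the XML files have to be uploaded there instead. Note the glob:
# 'fs -put conf input' would upload the directory itself and leave an
# input/conf subdirectory behind.
bin/hadoop fs -put conf/*.xml input
```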
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
Have you verified that the 'input' folder your job needs exists on the
HDFS (single node setup)?
Regards,
Shahab
On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>
> I followed the above instructions. But I get the following errors.
> Does anybody know what is wrong? Thanks.
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> Warning: $HADOOP_HOME is deprecated.
>
> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> process : 2
> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> PriviledgedActionException as:py cause:java.io.IOException: Not a
> file: hdfs://localhost:9000/user/py/input/conf
> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> --
> Regards,
> Peng
>
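The stack trace in the quoted message points at hdfs://localhost:9000/user/py/input/conf, i.e. a subdirectory inside the HDFS input path; the old mapred FileInputFormat does not recurse into subdirectories and rejects them with "Not a file". One way to restage the input so it contains only plain files (a sketch under the assumption that the HDFS layout matches the error message; `-rmr` deletes the existing /user/<user>/input):

```shell
# Remove the existing HDFS input directory, including the stray
# 'conf' subdirectory that triggers the "Not a file" error.
bin/hadoop fs -rmr input

# Recreate it and copy in the individual XML files rather than the
# conf directory itself ('fs -put conf input' would recreate the problem).
bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input

# Every entry listed here should now be a file, not a directory,
# and the grep example should get past input-split computation.
bin/hadoop fs -ls input
```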
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
Have you verified that the 'input' folder exists on the hdfs (singel node
setup) that you are job needs?
Regards,
Shahab
On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>
> I followed the above instructions. But I get the following errors.
> Does anybody know what is wrong? Thanks.
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> Warning: $HADOOP_HOME is deprecated.
>
> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> process : 2
> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> PriviledgedActionException as:py cause:java.io.IOException: Not a
> file: hdfs://localhost:9000/user/py/input/conf
> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> at
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> at
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> at
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> at
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
Have you verified that the 'input' folder exists on the hdfs (singel node
setup) that you are job needs?
Regards,
Shahab
On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>
> I followed the above instructions. But I get the following errors.
> Does anybody know what is wrong? Thanks.
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> Warning: $HADOOP_HOME is deprecated.
>
> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to
> process : 2
> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
>
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> 13/06/26 09:49:14 ERROR security.UserGroupInformation:
> PriviledgedActionException as:py cause:java.io.IOException: Not a
> file: hdfs://localhost:9000/user/py/input/conf
> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> at
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> at
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> at
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> at
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> --
> Regards,
> Peng
>
Re: Can not follow Single Node Setup example.
Posted by Shahab Yunus <sh...@gmail.com>.
Have you verified that the 'input' folder your job needs exists on HDFS (in
your single-node setup)?
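One way to check (a sketch using the Hadoop 1.x shell commands, with paths
taken from the stack trace in your mail; adjust to your setup) would be:

```shell
# List the job's input directory on HDFS. The "Not a file:
# .../input/conf" error suggests input/conf is a subdirectory,
# which the old mapred FileInputFormat cannot split.
bin/hadoop fs -ls /user/py/input

# If a subdirectory shows up, recreate the input directory with
# plain files only and re-run the job, for example:
bin/hadoop fs -rmr /user/py/input
bin/hadoop fs -mkdir /user/py/input
bin/hadoop fs -put conf/*.xml /user/py/input
```

(These commands need a running HDFS at the NameNode address from your
config; the exact paths above are guesses from the trace.)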
Regards,
Shahab
On Wed, Jun 26, 2013 at 10:53 AM, Peng Yu <pe...@gmail.com> wrote:
> Hi,
>
> http://hadoop.apache.org/docs/r1.1.2/single_node_setup.html
>
> I followed the above instructions. But I get the following errors.
> Does anybody know what is wrong? Thanks.
>
> ~/Downloads/hadoop-install/hadoop$ bin/hadoop jar
> hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> Warning: $HADOOP_HOME is deprecated.
>
> 13/06/26 09:49:14 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 13/06/26 09:49:14 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/06/26 09:49:14 INFO mapred.FileInputFormat: Total input paths to process : 2
> 13/06/26 09:49:14 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://localhost:9000/opt/local/var/hadoop/cache/mapred/staging/py/.staging/job_201306260838_0001
> 13/06/26 09:49:14 ERROR security.UserGroupInformation: PriviledgedActionException as:py cause:java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> java.io.IOException: Not a file: hdfs://localhost:9000/user/py/input/conf
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1051)
> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1043)
> at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:959)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1323)
> at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> --
> Regards,
> Peng
>