Posted to mapreduce-user@hadoop.apache.org by R J <rj...@yahoo.com> on 2014/07/29 09:43:11 UTC
Create HDFS directory fails
Hi All,
I am trying to programmatically create a directory in HDFS, but it fails with an error.
This is the relevant part of my code:
Path hdfsFile = new Path("/user/logger/dev2/tmp2");
try {
    FSDataOutputStream out = hdfs.create(hdfsFile);
}
And I get this error:
java.io.IOException: Mkdirs failed to create /user/logger/dev2/tmp2
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:379)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:365)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:584)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:565)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:472)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:464)
    at PutMerge.main(PutMerge.java:20)
I can create the same HDFS directory (and then remove it) via the hadoop command line, as the same user who runs the Java executable:
$hadoop fs -mkdir /user/logger/dev/tmp2
$hadoop fs -rmr /user/logger/dev/tmp2
(above works)
Here is my entire code:
------PutMerge.java------
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutMerge {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        FileSystem local = FileSystem.getLocal(conf);

        Path inputDir = new Path("/home/tmp/test");
        Path hdfsFile = new Path("/user/logger/dev/tmp2");

        try {
            FileStatus[] inputFiles = local.listStatus(inputDir);
            FSDataOutputStream out = hdfs.create(hdfsFile);

            for (int i = 0; i < inputFiles.length; i++) {
                System.out.println(inputFiles[i].getPath().getName());
                FSDataInputStream in = local.open(inputFiles[i].getPath());
                byte[] buffer = new byte[256];
                int bytesRead = 0;
                while ((bytesRead = in.read(buffer)) > 0) {
                    out.write(buffer, 0, bytesRead);
                }
                in.close();
            }
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
------
Re: Create HDFS directory fails
Posted by Arpit Agarwal <aa...@hortonworks.com>.
FileSystem.create creates regular files (the documentation could be clearer
about this). FileSystem.mkdirs creates directories.
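The same file-versus-directory distinction exists in plain JDK I/O, which makes it easy to see locally. The sketch below is a local-filesystem analogy only (not the Hadoop API); the class name and path segments are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateVsMkdirs {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("create-vs-mkdirs");

        // Analogue of FileSystem.create: produces a regular file, not a directory.
        Path asFile = Files.createFile(base.resolve("tmp2"));
        if (!Files.isRegularFile(asFile)) throw new AssertionError("expected a regular file");

        // Analogue of FileSystem.mkdirs: produces a directory, including missing parents.
        Path asDir = Files.createDirectories(base.resolve("dev2/tmp2"));
        if (!Files.isDirectory(asDir)) throw new AssertionError("expected a directory");

        System.out.println("file: " + Files.isRegularFile(asFile));
        System.out.println("dir:  " + Files.isDirectory(asDir));
    }
}
```

So a path made with a create-style call can never be a directory, no matter how the path string is spelled.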
--
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to
which it is addressed and may contain information that is confidential,
privileged and exempt from disclosure under applicable law. If the reader
of this message is not the intended recipient, you are hereby notified that
any printing, copying, dissemination, distribution, disclosure or
forwarding of this communication is strictly prohibited. If you have
received this communication in error, please contact the sender immediately
and delete it from your system. Thank You.
Re: Create HDFS directory fails
Posted by R J <rj...@yahoo.com>.
Thank you.
I tried all of the following, but none of them works:
FSDataOutputStream out = hdfs.create(new Path("/user/logger/dev2/"));
FSDataOutputStream out = hdfs.create(new Path("/user/logger/dev2"));

Path hdfsFile = new Path("/user/logger/dev2/one.dat");
FSDataOutputStream out = hdfs.create(hdfsFile);

Path hdfsFile = new Path("/user/logger/dev2");
FSDataOutputStream out = hdfs.create(hdfsFile);

Path hdfsFile = new Path("/user/logger/dev2/");
FSDataOutputStream out = hdfs.create(hdfsFile);
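All of the attempts above still go through a create-style call, which makes regular files. A hedged sketch of the corrected flow, shown here against the local filesystem with plain JDK I/O (the names and paths are illustrative; in Hadoop the corresponding calls would be FileSystem.mkdirs followed by FileSystem.create):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class PutMergeSketch {
    public static void main(String[] args) throws IOException {
        // Illustrative input: two small local files to merge.
        Path inputDir = Files.createTempDirectory("putmerge-in");
        Files.write(inputDir.resolve("a.txt"), "hello ".getBytes());
        Files.write(inputDir.resolve("b.txt"), "world".getBytes());

        // Step 1: make the output directory tree first (mkdirs-style call).
        Path outDir = Files.createTempDirectory("putmerge-out").resolve("dev2/tmp2");
        Files.createDirectories(outDir);

        // Step 2: create the merged output as a regular file *inside* that
        // directory (create-style call), then stream each input into it.
        Path merged = outDir.resolve("merged.dat");
        try (OutputStream out = Files.newOutputStream(merged);
             DirectoryStream<Path> inputs = Files.newDirectoryStream(inputDir)) {
            for (Path in : inputs) {
                Files.copy(in, out);
            }
        }
        if (Files.size(merged) != 11L) throw new AssertionError("unexpected merged size");
        System.out.println("merged bytes: " + Files.size(merged));
    }
}
```

The point is the separation of steps: the directory and the file inside it are created by two different calls, so no path string passed to create alone will yield a directory.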
Re: Create HDFS directory fails
Posted by Wellington Chevreuil <we...@gmail.com>.
Hmm, I'm not sure, but I think that through the API you have to create each folder level one at a time. For instance, if your current path is "/user/logger" and you want to create "/user/logger/dev2/tmp2", you first have to do hdfs.create(new Path("/user/logger/dev2")), then hdfs.create(new Path("/user/logger/dev2/tmp2")). Have you already tried that?
On 29 Jul 2014, at 08:43, R J <rj...@yahoo.com> wrote:
> Hi All,
>
> I am trying to programmatically create a directory in HDFS but it fails with error.
>
> This is the part of my code:
> Path hdfsFile = new Path("/user/logger/dev2/tmp2");
> try {
> FSDataOutputStream out = hdfs.create(hdfsFile);
> }
> And I get this error:
> java.io.IOException: Mkdirs failed to create /user/logger/dev2/tmp2
> at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:379)
> at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:365)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:584)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:565)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:472)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:464)
> at PutMerge.main(PutMerge.java:20)
>
> I can create the same HDFS directory (and then remove) via hadoop command as the same user who is running the java executable:
> $hadoop fs -mkdir /user/logger/dev/tmp2
> $hadoop fs -rmr /user/logger/dev/tmp2
> (above works)
>
> Here is my entire code:
> ------PutMerge.java------
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> public class PutMerge {
>
> public static void main(String[] args) throws IOException {
> Configuration conf = new Configuration();
> FileSystem hdfs = FileSystem.get(conf);
> FileSystem local = FileSystem.getLocal(conf);
>
> Path inputDir = new Path("/home/tmp/test");
> Path hdfsFile = new Path("/user/logger/dev/tmp2");
>
> try {
> FileStatus[] inputFiles = local.listStatus(inputDir);
> FSDataOutputStream out = hdfs.create(hdfsFile);
>
> for (int i=0; i<inputFiles.length; i++) {
> System.out.println(inputFiles[i].getPath().getName());
> FSDataInputStream in = local.open(inputFiles[i].getPath());
> byte buffer[] = new byte[256];
> int bytesRead = 0;
> while( (bytesRead = in.read(buffer)) > 0) {
> out.write(buffer, 0, bytesRead);
> }
> in.close();
> }
> out.close();
> } catch (IOException e) {
> e.printStackTrace();
> }
> }
> }
> ------
>
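The level-at-a-time approach suggested above boils down to making sure every parent directory exists before the file itself is created. Here is a minimal sketch of that pattern, using java.nio.file on the local filesystem as a stand-in for the Hadoop FileSystem API (on HDFS the equivalent would be hdfs.mkdirs(hdfsFile.getParent()) followed by hdfs.create(hdfsFile); that Hadoop-specific call sequence is an assumption about the poster's setup and is not shown here):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MkdirsFirst {
    public static void main(String[] args) throws IOException {
        // Target file, analogous to new Path("/user/logger/dev2/tmp2") in the post
        Path target = Paths.get(System.getProperty("java.io.tmpdir"),
                                "logger-demo", "dev2", "tmp2");

        // Ensure every missing parent level exists before creating the file --
        // the local analogue of calling mkdirs() on the parent before create()
        Files.createDirectories(target.getParent());
        Files.write(target, "hello".getBytes());

        System.out.println(Files.exists(target)); // prints true
    }
}
```

The same shape applies to the PutMerge code in the thread: create the parent directory first, then open the output stream for the file inside it.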
Re: Create HDFS directory fails
Posted by unmesha sreeveni <un...@gmail.com>.
On Tue, Jul 29, 2014 at 1:13 PM, R J <rj...@yahoo.com> wrote:
> java.io.IOException: Mkdirs failed to create
Check if you have permissions to mkdir this directory (try it from the command line)
--
*Thanks & Regards *
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Center for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/
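The permission check suggested above can also be done from code before attempting the create. A small sketch, again using java.nio.file on a local path as a stand-in (for HDFS itself you would typically inspect FileStatus.getPermission() on the parent directory, or simply attempt the operation and handle the failure; those Hadoop-specific calls are assumed and not shown):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckWritable {
    public static void main(String[] args) {
        // Parent directory whose permissions decide whether a mkdir/create
        // can succeed -- analogous to /user/logger/dev2 in the original post
        Path parent = Paths.get(System.getProperty("java.io.tmpdir"));

        // If the parent is not writable, a "Mkdirs failed to create" style
        // error is the expected outcome of any create attempt beneath it
        System.out.println(Files.isWritable(parent)); // prints true for a normal temp dir
    }
}
```

If the check succeeds on the command line (as it does for the poster with hadoop fs -mkdir) but fails from Java, it is also worth confirming that the Java program is picking up the same Hadoop configuration, since FileSystem.get(conf) with a default Configuration can silently resolve to the local filesystem.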