Posted to user@accumulo.apache.org by "Shrestha, Tejen [USA]" <Sh...@bah.com> on 2012/07/17 21:07:27 UTC

Re: [External] Re: Problem importing directory to Accumulo table

This is the error that was produced:

java.io.FileNotFoundException: File /tmp/files does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
    at org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:509)
    at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:644)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
    at com.bah.applefox.plugins.loader.NGramLoader.run(NGramLoader.java:302)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at com.bah.applefox.ingest.Ingest.main(Ingest.java:133)
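
The telling detail in this trace is where it fails: at job submission, while the JobClient checks timestamps on DistributedCache entries, and the path is resolved by RawLocalFileSystem, i.e. the local disk rather than HDFS. As a sketch of one way to remove that ambiguity, a cache file can be registered with a fully qualified URI; the namenode address and file name below are placeholders, not taken from this thread:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;

    public class CacheFileExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // An explicit hdfs:// scheme leaves no doubt which file system
            // the JobClient resolves the path against at submit time.
            DistributedCache.addCacheFile(
                    new URI("hdfs://namenode:9000/tmp/files/part-00000"), conf);
            // ... configure and submit the job with this conf ...
        }
    }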



On 7/17/12 12:50 PM, "Eric Newton" <er...@gmail.com> wrote:

>You will need to look in the master/tserver logs for the reason.
>
>-Eric
>
>On Tue, Jul 17, 2012 at 11:03 AM, Shrestha, Tejen [USA]
><Sh...@bah.com> wrote:
>> Below is the line I am using to do the Bulk Import:
>>
>>
>> conn.tableOperations().importDirectory(table, dir, failureDir, false);
>>
>>
>> Where conn is the connector to the ZooKeeper instance. The problem is
>> the error: "Internal error processing waitForTableOperation."
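
For reference, importDirectory on TableOperations is the bulk-import entry point. Below is a minimal sketch of the surrounding setup, assuming the 1.4-era Accumulo API visible in this thread; the instance name, ZooKeeper host, credentials, table, and paths are placeholders:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;

    public class BulkImportExample {
        public static void main(String[] args) throws Exception {
            // Connect through ZooKeeper (instance name and host are placeholders).
            ZooKeeperInstance instance =
                    new ZooKeeperInstance("myInstance", "zkhost1:2181");
            Connector conn = instance.getConnector("root", "secret".getBytes());

            // Both directories should live in HDFS where the tablet servers can
            // reach them; the failure directory must exist and be empty first.
            conn.tableOperations().importDirectory(
                    "mytable",       // table to load into
                    "/tmp/files",    // directory of RFiles to import
                    "/tmp/failures", // where files that fail to load are moved
                    false);          // setTime
        }
    }

If either directory is on the local disk instead of HDFS, the servers cannot see it, which is one common way to end up with an opaque failure like the one quoted above.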


Re: [External] Re: Problem importing directory to Accumulo table

Posted by William Slacum <ws...@gmail.com>.
Also it looks like your app is storing something in /tmp/files, so you
may want to make sure that you mean to be looking on your local FS or
in HDFS.
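
A quick way to check which file system a given path actually resolves to, assuming the Hadoop client configuration (core-site.xml) is on the classpath; the class name is only for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WhichFileSystem {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // reads core-site.xml
            Path p = new Path("/tmp/files");

            FileSystem defaultFs = FileSystem.get(conf);    // whatever fs.default.name points at
            FileSystem localFs = FileSystem.getLocal(conf); // always the local disk

            System.out.println(defaultFs.getUri() + " exists=" + defaultFs.exists(p));
            System.out.println(localFs.getUri() + " exists=" + localFs.exists(p));
        }
    }

If the default file system prints as file:/// rather than hdfs://, paths without an explicit scheme will hit the local disk, which matches the RawLocalFileSystem frame in the stack trace above.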


Re: [External] Re: Problem importing directory to Accumulo table

Posted by William Slacum <ws...@gmail.com>.
Did you configure Hadoop to store your HDFS instance/data somewhere
other than /tmp? Look up the single-node setup in the Hadoop docs.
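
The directories in question are controlled by the Hadoop config files. Here is a small sketch that prints the two most relevant properties, assuming the Hadoop 1.x-era names (fs.default.name, hadoop.tmp.dir); dfs.name.dir and dfs.data.dir default to subdirectories of hadoop.tmp.dir, which is why an out-of-the-box single-node install keeps its HDFS state under /tmp:

    import org.apache.hadoop.conf.Configuration;

    public class PrintHadoopDirs {
        public static void main(String[] args) {
            // Loads core-default.xml and core-site.xml from the classpath.
            Configuration conf = new Configuration();
            // file:/// here means the default FS is the local disk, not HDFS.
            System.out.println("fs.default.name = " + conf.get("fs.default.name"));
            // dfs.name.dir and dfs.data.dir derive from this by default, so
            // leaving it under /tmp risks losing HDFS state when /tmp is cleaned.
            System.out.println("hadoop.tmp.dir  = " + conf.get("hadoop.tmp.dir"));
        }
    }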
