Posted to mapreduce-issues@hadoop.apache.org by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org> on 2011/03/02 23:57:37 UTC
[jira] Resolved: (MAPREDUCE-1712) HAR sequence files throw errors in MR jobs
[ https://issues.apache.org/jira/browse/MAPREDUCE-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tsz Wo (Nicholas), SZE resolved MAPREDUCE-1712.
-----------------------------------------------
Resolution: Duplicate
It seems that MAPREDUCE-1752 fixed this. Please feel free to reopen this issue if it is still a problem.
> HAR sequence files throw errors in MR jobs
> ------------------------------------------
>
> Key: MAPREDUCE-1712
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1712
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: harchive
> Affects Versions: 0.20.1
> Reporter: Paul Yang
> Assignee: Mahadev konar
>
> When a HAR is specified as the input for a MapReduce job and the file format is SequenceFile, an error similar to the following is thrown (this one is from Hive).
> {code}
> java.lang.IllegalArgumentException: Offset 0 is outside of file (0..-1)
> at org.apache.hadoop.mapred.FileInputFormat.getBlockIndex(FileInputFormat.java:299)
> at org.apache.hadoop.mapred.FileInputFormat.getSplitHosts(FileInputFormat.java:455)
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:260)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:261)
> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:827)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:798)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:747)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:663)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:631)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:504)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:382)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:303)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> {code}
> This is caused by the dummy block location returned by HarFileSystem.getFileBlockLocations().
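The failure mode described above can be sketched as follows. This is a minimal, hypothetical illustration of the range check performed in FileInputFormat.getBlockIndex(), not the actual Hadoop source; the class name, the array-based block representation, and the method shape are illustrative assumptions. If HarFileSystem.getFileBlockLocations() returns a dummy block location of length 0, the last block ends at offset 0 + 0 - 1 = -1, so even offset 0 falls outside the file and the check throws the reported IllegalArgumentException.

```java
// Hypothetical sketch (NOT the actual Hadoop source) of the block-index
// range check that produces "Offset 0 is outside of file (0..-1)".
public class DummyBlockDemo {

    // Simplified stand-in for the dummy BlockLocation returned by
    // HarFileSystem.getFileBlockLocations(): one block at offset 0, length 0.
    static final long[] BLOCK_OFFSETS = {0L};
    static final long[] BLOCK_LENGTHS = {0L};

    static int getBlockIndex(long offset) {
        for (int i = 0; i < BLOCK_OFFSETS.length; i++) {
            // A block covers offsets in [start, start + length); a
            // zero-length block covers nothing, so no block ever matches.
            if (BLOCK_OFFSETS[i] <= offset
                    && offset < BLOCK_OFFSETS[i] + BLOCK_LENGTHS[i]) {
                return i;
            }
        }
        // With length 0, the file "end" computes to 0 + 0 - 1 = -1,
        // yielding the (0..-1) range seen in the stack trace.
        int last = BLOCK_OFFSETS.length - 1;
        long fileEnd = BLOCK_OFFSETS[last] + BLOCK_LENGTHS[last] - 1;
        throw new IllegalArgumentException(
                "Offset " + offset + " is outside of file (0.." + fileEnd + ")");
    }

    public static void main(String[] args) {
        try {
            getBlockIndex(0L);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Running this prints the same message as the trace, which is why returning a real file length (as MAPREDUCE-1752 does) makes the error disappear.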
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira