Posted to dev@avro.apache.org by "Harsh J Chouraria (JIRA)" <ji...@apache.org> on 2010/06/20 18:07:23 UTC

[jira] Commented: (AVRO-534) AvroRecordReader (org.apache.avro.mapred) should support a JobConf-given schema

    [ https://issues.apache.org/jira/browse/AVRO-534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880646#action_12880646 ] 

Harsh J Chouraria commented on AVRO-534:
----------------------------------------

Hello again,

I'm unable to understand a (probably minor) Java thing with regard to the JUnit tests: where is the WordCount class actually defined? I can only find its generated .java source in my build folder (under build/test/generated/etc).

Another quick question: should the testProjection method also perform a map-reduce operation, or would simple data reading do?
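
For reference, here is roughly what the "simple data reading" option would look like: a minimal sketch using the Generic API, where the file path, record name and projected field are assumptions and must match the writer schema by name for resolution to succeed.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;

    public class ProjectionReadSketch {
      public static void main(String[] args) throws Exception {
        // Reader (projection) schema: a subset of the writer's fields.
        // The record name and field name here are hypothetical; they
        // must match the writer schema for resolution to succeed.
        Schema projection = Schema.parse(
            "{\"type\":\"record\",\"name\":\"WordCount\",\"fields\":["
            + "{\"name\":\"word\",\"type\":\"string\"}]}");
        GenericDatumReader<GenericRecord> datumReader =
            new GenericDatumReader<GenericRecord>(projection);
        // DataFileReader picks the writer schema up from the file header;
        // the datum reader then resolves it against the projection.
        DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
            new File("build/test/words.avro"), datumReader);
        while (reader.hasNext()) {
          GenericRecord record = reader.next(); // only projected fields populated
          System.out.println(record);
        }
        reader.close();
      }
    }

A map-reduce pass would additionally exercise the AvroRecordReader path that this issue patches, which plain file reading as above does not touch.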

> AvroRecordReader (org.apache.avro.mapred) should support a JobConf-given schema
> -------------------------------------------------------------------------------
>
>                 Key: AVRO-534
>                 URL: https://issues.apache.org/jira/browse/AVRO-534
>             Project: Avro
>          Issue Type: New Feature
>          Components: java
>    Affects Versions: 1.4.0
>         Environment: ArchLinux, JAVA 1.6, Apache Hadoop (0.20.2), Apache Avro (trunk -- 1.4.0 SNAPSHOT), Using Avro Generic API (JAVA)
>            Reporter: Harsh J Chouraria
>            Priority: Trivial
>             Fix For: 1.4.0
>
>         Attachments: avro.mapreduce.r1.diff
>
>
> Consider an Avro file of a single record type with about 70 fields, in the order (str, str, str, long, str, double, [let's take only the first 6 into consideration] ...).
> To pass this into a simple MapReduce job, I call AvroInputFormat.addInputPath(...), and it works well with an IdentityMapper.
> Now I'd like to read only three fields, say fields 0, 1 and 3, so I supply a projection schema containing just those 3 fields as (str (0), str (1), long (2)) using AvroJob.setInputGeneric(..., mySchema); see the sketch after this quoted description. This makes the MapReduce job fail, because the Avro record reader reads the file with its entire schema (of 70 fields) and tries to convert the 'str' sitting at index 2 of the actual schema into my 'long' field (meaning it is using the schema embedded in the file, not the one I supplied!).
> The AvroRecordReader must support reading with the schema specified by the user via AvroJob.setInputGeneric.
> I've written a patch that does this, but I'm not sure it's actually the right solution (should it be using MAP_OUTPUT_SCHEMA?)
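
A rough sketch of the setup described above follows. The AvroJob.setInputGeneric(JobConf, Schema) signature is inferred from the call quoted in the description (Avro trunk, 1.4.0-SNAPSHOT), and the record/field names and input path are made up.

    import org.apache.avro.Schema;
    import org.apache.avro.mapred.AvroInputFormat;
    import org.apache.avro.mapred.AvroJob;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class ProjectionJobSketch {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        // Projection: only fields 0, 1 and 3 of the ~70-field record,
        // matched against the writer schema by name (names hypothetical).
        Schema projection = Schema.parse(
            "{\"type\":\"record\",\"name\":\"MyRecord\",\"fields\":["
            + "{\"name\":\"f0\",\"type\":\"string\"},"
            + "{\"name\":\"f1\",\"type\":\"string\"},"
            + "{\"name\":\"f3\",\"type\":\"long\"}]}");
        AvroJob.setInputGeneric(job, projection); // the call named above
        AvroInputFormat.addInputPath(job, new Path("/path/to/avro/input"));
        // ... configure mapper/reducer and submit as usual ...
      }
    }

With the proposed change, the record reader would resolve the file's embedded writer schema against this projection instead of deserializing every field positionally.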

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.