Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/11/03 13:45:28 UTC

[GitHub] [spark] bersprockets commented on pull request #30221: [SPARK-33314][SQL] Avoid dropping rows in Avro reader

bersprockets commented on pull request #30221:
URL: https://github.com/apache/spark/pull/30221#issuecomment-720603453


   > * After #29145, there is always deserialization if there is a next row: https://github.com/apache/spark/pull/29145/files#diff-22181c0e0050f9694efac388063535cf77e92a82dd962fec3f8507dfae45e52cR185
   > 
   > I am sorry but shall we consider reverting #29145? CC @MaxGekk @cloud-fan
   
   @gengliangwang I observed only a single extra call to hasNextRow per task, so the issue is not performance but dropped records (though there may be some scenario I'm not aware of in which hasNextRow is called many extra times).
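
   For illustration, here is a minimal Scala sketch of the problematic pattern (the names are hypothetical, not the actual classes in Spark's Avro reader): if hasNextRow eagerly consumes and deserializes a record on every call, then one extra hasNextRow call silently discards a row that was already deserialized.
   
   ```scala
   // Hypothetical sketch of the eager pattern that can drop a row.
   // Each hasNextRow call consumes and "deserializes" a record, so calling
   // hasNextRow twice before nextRow throws away the first record.
   class EagerRowReader(records: Iterator[String]) {
     private var currentRow: Option[String] = None
   
     def hasNextRow: Boolean = {
       if (records.hasNext) {
         currentRow = Some(records.next()) // stand-in for deserialize(record)
         true
       } else {
         false
       }
     }
   
     def nextRow: String =
       currentRow.getOrElse(throw new NoSuchElementException("next on empty iterator"))
   }
   ```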
   
   Anyway, both my proposed fix and the suggested improvements to it would alleviate that concern, since each Avro record would be deserialized only once, regardless of how many times hasNextRow is called.
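
   As a rough sketch of that shape (again with hypothetical names, not the actual code in this PR): caching the deserialized row makes hasNextRow idempotent, so each Avro record is deserialized at most once no matter how many times the method is called.
   
   ```scala
   // Hypothetical sketch of the caching/idempotent shape: extra hasNextRow
   // calls are no-ops once a row has been deserialized and cached.
   class CachingRowReader(records: Iterator[String]) {
     private var currentRow: Option[String] = None
   
     def hasNextRow: Boolean = {
       // Only consume and deserialize a new record when nothing is cached.
       if (currentRow.isEmpty && records.hasNext) {
         currentRow = Some(records.next()) // stand-in for deserialize(record)
       }
       currentRow.isDefined
     }
   
     def nextRow: String = {
       if (currentRow.isEmpty && !hasNextRow) {
         throw new NoSuchElementException("next on empty iterator")
       }
       val row = currentRow.get
       currentRow = None // let the next hasNextRow advance to the next record
       row
     }
   }
   ```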
   
   
   

