Posted to dev@parquet.apache.org by GitBox <gi...@apache.org> on 2020/07/27 11:32:16 UTC

[GitHub] [parquet-mr] gszadovszky commented on a change in pull request #804: PARQUET-1887: Allow writing after an exception

gszadovszky commented on a change in pull request #804:
URL: https://github.com/apache/parquet-mr/pull/804#discussion_r460821213



##########
File path: parquet-avro/src/test/java/org/apache/parquet/avro/TestArrayCompatibility.java
##########
@@ -56,6 +59,30 @@ public static void setupNewBehaviorConfiguration() {
         AvroSchemaConverter.ADD_LIST_ELEMENT_RECORDS, false);
   }
 
+  @Test
+  public void testReadEmptyParquetFileWriteNull() throws IOException {
+    final Schema schema;
+    try (InputStream avroSchema = Resources.getResource("persons.json").openStream()) {
+      schema = new Schema.Parser().parse(avroSchema);
+    }
+
+    try (ParquetWriter<GenericRecord> writer =
+           AvroParquetWriter.<GenericRecord>builder(new org.apache.hadoop.fs.Path("/tmp/persons.parquet"))
+             .withSchema(schema)
+             .build()) {
+
+      // To trigger an exception, write an array containing a null element.
+      try {
+        writer.write(new GenericRecordBuilder(schema).set("address", Arrays.asList("first", null, "last")).build());
+      } catch (NullPointerException e) {

Review comment:
       I would add a `fail` in the `try` block right after the write, so the test fails if no exception is thrown.
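
       The pattern the reviewer is suggesting can be sketched as follows. This is a minimal, self-contained illustration, not the PR's actual test: `fail` stands in for JUnit's `org.junit.Assert.fail`, and `write` is a hypothetical stand-in that mimics a `ParquetWriter` rejecting a null array element.

```java
import java.util.Arrays;
import java.util.List;

public class ExpectedExceptionPattern {

    // Minimal stand-in for org.junit.Assert.fail.
    static void fail(String message) {
        throw new AssertionError(message);
    }

    // Hypothetical stand-in for writer.write(...): rejects null list
    // elements with the NullPointerException the test expects.
    static void write(List<String> address) {
        for (String part : address) {
            if (part == null) {
                throw new NullPointerException("null array element");
            }
        }
    }

    public static void main(String[] args) {
        try {
            write(Arrays.asList("first", null, "last"));
            // Without this line the test would silently pass even if
            // the write stopped throwing.
            fail("Expected NullPointerException was not thrown");
        } catch (NullPointerException e) {
            // Expected: the write must reject the null element.
        }
        System.out.println("expected exception was thrown");
    }
}
```

       If the write ever stops throwing, `fail` turns that regression into a test failure instead of a silent pass.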

##########
File path: parquet-avro/src/test/java/org/apache/parquet/avro/TestArrayCompatibility.java
##########
@@ -56,6 +59,30 @@ public static void setupNewBehaviorConfiguration() {
         AvroSchemaConverter.ADD_LIST_ELEMENT_RECORDS, false);
   }
 
+  @Test
+  public void testReadEmptyParquetFileWriteNull() throws IOException {
+    final Schema schema;
+    try (InputStream avroSchema = Resources.getResource("persons.json").openStream()) {
+      schema = new Schema.Parser().parse(avroSchema);
+    }
+
+    try (ParquetWriter<GenericRecord> writer =
+           AvroParquetWriter.<GenericRecord>builder(new org.apache.hadoop.fs.Path("/tmp/persons.parquet"))
+             .withSchema(schema)
+             .build()) {
+
+      // To trigger an exception, write an array containing a null element.
+      try {
+        writer.write(new GenericRecordBuilder(schema).set("address", Arrays.asList("first", null, "last")).build());
+      } catch (NullPointerException e) {
+        // We expect this one to fail
+      }
+
+      // At this point all future calls to writer.write will fail

Review comment:
       Not sure about this comment. I thought the whole point of this fix was that writes should not fail after the previous exception.
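
       The behavior PARQUET-1887 aims for, as the reviewer describes it, can be sketched with a toy writer. This is a hedged illustration only: `ToyWriter` is a hypothetical stand-in, not the real `ParquetWriter` API, and it simply validates each record before mutating any state so that a rejected record leaves the writer usable.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WriteAfterExceptionSketch {

    // Toy writer: a write that throws must not leave the writer in a
    // state where every later write fails.
    static class ToyWriter {
        final List<List<String>> rows = new ArrayList<>();

        void write(List<String> address) {
            // Validate the whole record up front, so a rejected record
            // does not corrupt previously written rows or writer state.
            for (String part : address) {
                if (part == null) {
                    throw new NullPointerException("null array element");
                }
            }
            rows.add(address);
        }
    }

    public static void main(String[] args) {
        ToyWriter writer = new ToyWriter();
        try {
            writer.write(Arrays.asList("first", null, "last"));
        } catch (NullPointerException e) {
            // Expected failure for the invalid record.
        }
        // The point of the fix: this subsequent write should succeed.
        writer.write(Arrays.asList("first", "last"));
        System.out.println("rows written: " + writer.rows.size());
    }
}
```

       Under this reading, the test's comment "all future calls to writer.write will fail" describes the old, broken behavior, not what the fixed writer should do.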




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org