Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2019/09/18 00:26:14 UTC

[GitHub] [incubator-iceberg] ZhijieWang commented on issue #464: add optional StringType column after a new LongType column; writing data then throws an exception

ZhijieWang commented on issue #464: add optional StringType column after a new LongType column; writing data then throws an exception
URL: https://github.com/apache/incubator-iceberg/issues/464#issuecomment-532465475
 
 
   Wrote a basic test case for this on my local machine against the latest master branch. It failed immediately the first time a column was added and a write was attempted, so I'm not sure how yours got past the first column change.
   
   ```
    package org.apache.iceberg.spark.source;
    
    import java.io.IOException;
    
    import com.google.common.collect.Lists;
    import org.apache.iceberg.PartitionSpec;
    import org.apache.iceberg.Schema;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.catalog.TableIdentifier;
    import org.apache.iceberg.types.Types;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.junit.Test;
    
    // Simple JavaBean used as the input of spark.createDataFrame().
    public class Record {
        private Integer id;
    
        Record(Integer id) {
            this.id = id;
        }
    
        public Integer getId() {
            return id;
        }
    
        public void setId(Integer id) {
            this.id = id;
        }
    }
    
    // `catalog` and `spark` come from the surrounding Iceberg Spark test class.
    @Test
    public void testAddOptionalColumnToStrucField() throws IOException {
        String tableName = "iceberg_partition_test_120";
        Schema schema = new Schema(
            Types.NestedField.optional(0, "id", Types.IntegerType.get())
        );
        TableIdentifier tableIdentifier = TableIdentifier.of("default", tableName);
        PartitionSpec spec = PartitionSpec.builderFor(schema).identity("id").build();
        Table table = catalog.createTable(tableIdentifier, schema, spec);
    
        // Initial write and read back against the original single-column schema.
        Dataset<Row> df = spark.createDataFrame(Lists.newArrayList(new Record(1)), Record.class);
        df.write().format("iceberg").mode("append").save(tableIdentifier.toString());
        spark.read().format("iceberg").load(tableIdentifier.toString()).show();
    
        // Add an optional long column, then write again -- the failure shows up here.
        table.updateSchema().addColumn("phone number", Types.LongType.get()).commit();
        df = spark.createDataFrame(Lists.newArrayList(new Record(1)), Record.class);
        df.write().format("iceberg").mode("append").save(tableIdentifier.toString());
        spark.read().format("iceberg").load(tableIdentifier.toString()).show();
    
        // Add an optional string column after the long column and write once more.
        table.updateSchema().addColumn("name", Types.StringType.get()).commit();
        df = spark.createDataFrame(Lists.newArrayList(new Record(1)), Record.class);
        df.write().format("iceberg").mode("append").save(tableIdentifier.toString());
    }
   ```
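   
   A minimal sketch of a possible workaround (not part of the original report), assuming the failure is caused by the DataFrame schema missing the newly added optional columns. The column names match the test above; whether this actually avoids the reported exception is untested here.
   
    ```
    // Untested sketch: pad the DataFrame with the newly added optional columns as nulls
    // before appending, so the write schema lines up with the evolved table schema.
    // Requires: import static org.apache.spark.sql.functions.lit;
    //           import org.apache.spark.sql.types.DataTypes;
    Dataset<Row> padded = df
        .withColumn("phone number", lit(null).cast(DataTypes.LongType))
        .withColumn("name", lit(null).cast(DataTypes.StringType));
    padded.write().format("iceberg").mode("append").save(tableIdentifier.toString());
    ```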

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org