Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/04/30 12:56:34 UTC

[GitHub] [flink] wuchong opened a new pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

wuchong opened a new pull request #11962:
URL: https://github.com/apache/flink/pull/11962


   
   
   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Travis CI to do that following [this guide](https://flink.apache.org/contributing/contribute-code.html#open-a-pull-request).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
Add support for `CsvRowDataDeserializationSchema` and `CsvRowDataSerializationSchema` for the new data structure `RowData`. These will be used by the new TableSource and TableSink connectors.

The implemented CSV schema features align with the existing ones.
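
Below is a minimal usage sketch (not taken from this PR's tests) showing how the two schemas are built. It assumes a `(INT, STRING)` row type and a matching `TypeInformation<RowData>` instance, here called `resultTypeInfo` as a placeholder:

```java
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

RowType rowType = RowType.of(new IntType(), new VarCharType(VarCharType.MAX_LENGTH));

// Deserializer: CSV byte[] -> RowData.
CsvRowDataDeserializationSchema deserializationSchema =
    new CsvRowDataDeserializationSchema.Builder(rowType, resultTypeInfo)
        .setFieldDelimiter(';')       // Jackson's CsvSchema defaults to ','
        .setIgnoreParseErrors(true)   // return null instead of throwing on bad rows
        .build();

// Serializer: RowData -> CSV byte[].
CsvRowDataSerializationSchema serializationSchema =
    new CsvRowDataSerializationSchema.Builder(rowType)
        .setFieldDelimiter(';')
        .build();
```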
   
   ## Brief change log
   
- `CsvRowDataDeserializationSchema` for deserializing CSV `byte[]` into `RowData`.
- `CsvRowDataSerializationSchema` for serializing `RowData` into CSV `byte[]`.
   
   
   ## Verifying this change
   
- Ported tests from `CsvRowDeSerializationSchemaTest` to `CsvRowDataSerDeSchemaTest`.

## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): yes
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): yes
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? yes
     - If yes, how is the feature documented? not applicable
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 98481e8219841500289cabd8d3e7b1b0787207ea Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r418124706



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance
+			return jsonNode.asBoolean();
+		} else {
+			return Boolean.parseBoolean(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToInt(JsonNode jsonNode) {
+		if (jsonNode.canConvertToInt()) {
+			// avoid redundant toString and parseInt, for better performance
+			return jsonNode.asInt();
+		} else {
+			return Integer.parseInt(jsonNode.asText().trim());
+		}
+	}
+
+	private long convertToLong(JsonNode jsonNode) {
+		if (jsonNode.canConvertToLong()) {
+			// avoid redundant toString and parseLong, for better performance
+			return jsonNode.asLong();
+		} else {
+			return Long.parseLong(jsonNode.asText().trim());
+		}
+	}
+
+	private double convertToDouble(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return jsonNode.asDouble();
+		} else {
+			return Double.parseDouble(jsonNode.asText().trim());
+		}
+	}
+
+	private float convertToFloat(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return (float) jsonNode.asDouble();
+		} else {
+			return Float.parseFloat(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToDate(JsonNode jsonNode) {
+		// CSV currently uses Date.valueOf() to parse the date string
+		return (int) Date.valueOf(jsonNode.asText()).toLocalDate().toEpochDay();
+	}
+
+	private int convertToTime(JsonNode jsonNode) {
+		// CSV currently uses Time.valueOf() to parse the time string
+		LocalTime localTime = Time.valueOf(jsonNode.asText()).toLocalTime();
+		// get number of milliseconds of the day
+		return localTime.toSecondOfDay() * 1000;

Review comment:
       This is also aligned with the previous behavior. `Time.valueOf()` doesn't support parsing milli/nanoseconds. 
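
   For anyone wanting to verify the precision limit, here is a standalone sketch of the same two steps as the quoted `convertToTime` (plain JDK code, not the PR itself):

```java
import java.sql.Time;
import java.time.LocalTime;

public class TimePrecisionSketch {
    public static void main(String[] args) {
        // Time.valueOf only accepts "hh:mm:ss"; an input with a fractional
        // part such as "12:12:43.123" is rejected at parse time, so
        // milli/nanosecond precision cannot reach the converter at all.
        LocalTime localTime = Time.valueOf("12:12:43").toLocalTime();
        // Number of milliseconds of the day, as in the PR's convertToTime().
        int millisOfDay = localTime.toSecondOfDay() * 1000;
        System.out.println(millisOfDay); // prints 43963000
    }
}
```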




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r418128626



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataSerializationSchema.java
##########
@@ -0,0 +1,377 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectWriter;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ContainerNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.time.LocalDate;
+import java.time.LocalTime;
+import java.time.format.DateTimeFormatter;
+import java.time.format.DateTimeFormatterBuilder;
+import java.util.Arrays;
+import java.util.Objects;
+
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_DATE;
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_TIME;
+
+/**
+ * Serialization schema that serializes an object of Flink Table & SQL internal data structure
+ * into CSV bytes.
+ *
+ * <p>Serializes the input row into a {@link JsonNode} and
+ * converts it into <code>byte[]</code>.
+ *
+ * <p>Result <code>byte[]</code> messages can be deserialized using {@link CsvRowDataDeserializationSchema}.
+ */
+@PublicEvolving
+public final class CsvRowDataSerializationSchema implements SerializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Logical row type describing the input CSV data. */
+	private final RowType rowType;
+
+	/** Runtime instance that performs the actual work. */
+	private final SerializationRuntimeConverter runtimeConverter;
+
+	/** CsvMapper used to write {@link JsonNode} into bytes. */
+	private final CsvMapper csvMapper;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object writer used to write rows. It is configured by {@link CsvSchema}. */
+	private final ObjectWriter objectWriter;
+
+	/** Reusable object node. */
+	private transient ObjectNode root;
+
+	private CsvRowDataSerializationSchema(
+			RowType rowType,
+			CsvSchema csvSchema) {
+		this.rowType = rowType;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvMapper = new CsvMapper();
+		this.csvSchema = csvSchema;
+		this.objectWriter = csvMapper.writer(csvSchema);
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataSerializationSchema}.
+	 */
+	@PublicEvolving
+	public static class Builder {
+
+		private final RowType rowType;
+		private CsvSchema csvSchema;
+
+		/**
+		 * Creates a {@link CsvRowDataSerializationSchema} expecting the given {@link RowType}.
+		 *
+		 * @param rowType logical row type used to create schema.
+		 */
+		public Builder(RowType rowType) {
+			Preconditions.checkNotNull(rowType, "Row type must not be null.");
+
+			this.rowType = rowType;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(c).build();
+			return this;
+		}
+
+		public Builder setLineDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			if (!delimiter.equals("\n") && !delimiter.equals("\r") && !delimiter.equals("\r\n") && !delimiter.equals("")) {
+				throw new IllegalArgumentException(
+					"Unsupported new line delimiter. Only \\n, \\r, \\r\\n, or empty string are supported.");
+			}
+			this.csvSchema = this.csvSchema.rebuild().setLineSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder disableQuoteCharacter() {
+			this.csvSchema = this.csvSchema.rebuild().disableQuoteChar().build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String s) {
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(s).build();
+			return this;
+		}
+
+		public CsvRowDataSerializationSchema build() {
+			return new CsvRowDataSerializationSchema(
+				rowType,
+				csvSchema);
+		}
+	}
+
+	@Override
+	public byte[] serialize(RowData row) {
+		if (root == null) {
+			root = csvMapper.createObjectNode();
+		}
+		try {
+			runtimeConverter.convert(csvMapper, root, row);
+			return objectWriter.writeValueAsBytes(root);
+		} catch (Throwable t) {
+			throw new RuntimeException("Could not serialize row '" + row + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		if (this == o) {
+			return true;
+		}
+		final CsvRowDataSerializationSchema that = (CsvRowDataSerializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return rowType.equals(that.rowType) &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			Arrays.equals(csvSchema.getLineSeparator(), otherSchema.getLineSeparator()) &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			rowType,
+			csvSchema.getColumnSeparator(),
+			csvSchema.getLineSeparator(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// --------------------------------------------------------------------------------
+	// Runtime Converters
+	// --------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts objects of Flink Table & SQL internal data structures
+	 * to corresponding {@link JsonNode}s.
+	 */
+	private interface SerializationRuntimeConverter extends Serializable {

Review comment:
       Or do you mean accessing fields of RowData using typed getters instead of the `RowData.get` utility? 
   I thought about this, but because we don't have a `TypedGetters` interface for `RowData` and `ArrayData`, we would have to implement two writers for each type. You can have a look at `org.apache.flink.table.runtime.arrow.writers.IntWriter`. This would be much more complex, and we should first come up with an idea for how to reduce the duplicate code. So I think that can be future work. 
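
   A hedged sketch of the duplication being described — the writer interfaces and classes below are hypothetical, loosely modeled on `org.apache.flink.table.runtime.arrow.writers.IntWriter`, not actual Flink APIs:

```java
import org.apache.flink.table.data.ArrayData;
import org.apache.flink.table.data.RowData;

// Without a shared "TypedGetters" abstraction over RowData and ArrayData,
// the same INT logic must be implemented once per container type.
interface RowFieldWriter { String write(RowData row, int pos); }
interface ArrayElementWriter { String write(ArrayData array, int pos); }

class IntRowWriter implements RowFieldWriter {
    @Override
    public String write(RowData row, int pos) {
        return String.valueOf(row.getInt(pos));   // typed getter on RowData
    }
}

class IntArrayWriter implements ArrayElementWriter {
    @Override
    public String write(ArrayData array, int pos) {
        return String.valueOf(array.getInt(pos)); // identical body, different container
    }
}
```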




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r418125766



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {

Review comment:
       Sorry, I don't think such long-line code is better for readability or performance. And it's not possible to add an inline comment. 

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {

Review comment:
       Sorry, I don't think such long-line code is better for readability or performance. And it's not possible to add an inline comment. 
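
   For reference, a method-level sketch of the two shapes under discussion (assuming Jackson's `JsonNode` is on the classpath; the expanded form is what the PR keeps):

```java
// Compressed one-liner: no room for the branch-level performance note.
private boolean convertToBooleanCompact(JsonNode jsonNode) {
    return jsonNode.isBoolean() ? jsonNode.asBoolean() : Boolean.parseBoolean(jsonNode.asText().trim());
}

// Expanded form from the PR: each branch carries its own comment.
private boolean convertToBoolean(JsonNode jsonNode) {
    if (jsonNode.isBoolean()) {
        // avoid redundant toString and parseBoolean, for better performance
        return jsonNode.asBoolean();
    } else {
        return Boolean.parseBoolean(jsonNode.asText().trim());
    }
}
```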




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deserialization schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=631",
       "triggerID" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 676245d476a27b37ecb0701333ff5e410a5f44c0 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=631) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r420067934



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance

Review comment:
       The problem with `asBoolean` and `asLong` is that they swallow the parse exception and fall back to the default value (0 for Long, false for Boolean). But we want to surface the parse exception so that users are aware of it and can use the `ignore-parse-error` option to turn it off. 
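
       For illustration, here is a minimal standalone sketch of the difference (it uses the unshaded jackson-databind package name for brevity; the Flink code itself goes through the shaded `org.apache.flink.shaded.jackson2` prefix):

       ```java
       import com.fasterxml.jackson.databind.node.TextNode;

       public class ParseErrorSketch {
           public static void main(String[] args) {
               TextNode node = TextNode.valueOf("not-a-number");

               // asLong() swallows the failure and silently returns the default 0.
               System.out.println(node.asLong()); // prints 0

               // Long.parseLong() surfaces the failure as a NumberFormatException,
               // which the nullable converter can rethrow or suppress depending on
               // the ignoreParseErrors flag.
               System.out.println(Long.parseLong(node.asText().trim())); // throws
           }
       }
       ```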




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-624096189


   Hi @JingsongLi, thanks for the review. I have updated the PR to use getters instead of the `RowData#get` utility. The JSON format will be updated in a separate issue, FLINK-17528.
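
   As a rough sketch of what the getter style means here (the field positions and method names below are made up for illustration), each converter reads its field through a type-specialized accessor instead of the generic `RowData#get(row, pos, type)` helper, which has to dispatch on the logical type and box the result on every access:

   ```java
   import org.apache.flink.table.data.RowData;
   import org.apache.flink.table.data.StringData;

   public class GetterStyleSketch {
       // BIGINT field: direct primitive access, no boxing, no type dispatch.
       static long readBigintField(RowData row, int pos) {
           return row.getLong(pos);
       }

       // VARCHAR field: returns the internal StringData as-is.
       static StringData readVarcharField(RowData row, int pos) {
           return row.getString(pos);
       }
   }
   ```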


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621819798


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 98481e8219841500289cabd8d3e7b1b0787207ea (Thu Apr 30 13:01:38 UTC 2020)
   
   **Warnings:**
    * **1 pom.xml files were touched**: Check for build and licensing issues.
    * No documentation files were touched! Remember to keep the Flink docs up to date!
   
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 98481e8219841500289cabd8d3e7b1b0787207ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 98481e8219841500289cabd8d3e7b1b0787207ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487) 
   * 676245d476a27b37ecb0701333ff5e410a5f44c0 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=631",
       "triggerID" : "676245d476a27b37ecb0701333ff5e410a5f44c0",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 98481e8219841500289cabd8d3e7b1b0787207ea Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=487) 
   * 676245d476a27b37ecb0701333ff5e410a5f44c0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=631) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingsongLi commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
JingsongLi commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r419925440



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance

Review comment:
       `TextNode` also has a correct implementation for `asBoolean`.
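
       For reference, a minimal sketch of that behavior (again with the unshaded jackson-databind package name for brevity):

       ```java
       import com.fasterxml.jackson.databind.node.TextNode;

       public class TextNodeAsBooleanSketch {
           public static void main(String[] args) {
               // TextNode.asBoolean() does parse textual booleans correctly...
               System.out.println(TextNode.valueOf("true").asBoolean());  // true
               System.out.println(TextNode.valueOf("false").asBoolean()); // false

               // ...but an unparsable value silently falls back to the default
               // instead of throwing, which is the concern raised in the reply
               // elsewhere in this thread.
               System.out.println(TextNode.valueOf("oops").asBoolean());  // false
           }
       }
       ```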




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r418124694



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataSerializationSchema.java
##########
@@ -0,0 +1,377 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectWriter;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ContainerNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.time.LocalDate;
+import java.time.LocalTime;
+import java.time.format.DateTimeFormatter;
+import java.time.format.DateTimeFormatterBuilder;
+import java.util.Arrays;
+import java.util.Objects;
+
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_DATE;
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_TIME;
+
+/**
+ * Serialization schema that serializes an object of Flink Table & SQL internal data structures
+ * into CSV bytes.
+ *
+ * <p>Serializes the input row into a {@link JsonNode} and
+ * converts it into <code>byte[]</code>.
+ *
+ * <p>Result <code>byte[]</code> messages can be deserialized using {@link CsvRowDataDeserializationSchema}.
+ */
+@PublicEvolving
+public final class CsvRowDataSerializationSchema implements SerializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Logical row type describing the input CSV data. */
+	private final RowType rowType;
+
+	/** Runtime instance that performs the actual work. */
+	private final SerializationRuntimeConverter runtimeConverter;
+
+	/** CsvMapper used to write {@link JsonNode} into bytes. */
+	private final CsvMapper csvMapper;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object writer used to write rows. It is configured by {@link CsvSchema}. */
+	private final ObjectWriter objectWriter;
+
+	/** Reusable object node. */
+	private transient ObjectNode root;
+
+	private CsvRowDataSerializationSchema(
+			RowType rowType,
+			CsvSchema csvSchema) {
+		this.rowType = rowType;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvMapper = new CsvMapper();
+		this.csvSchema = csvSchema;
+		this.objectWriter = csvMapper.writer(csvSchema);
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataSerializationSchema}.
+	 */
+	@PublicEvolving
+	public static class Builder {
+
+		private final RowType rowType;
+		private CsvSchema csvSchema;
+
+		/**
+		 * Creates a {@link CsvRowDataSerializationSchema} expecting the given {@link RowType}.
+		 *
+		 * @param rowType logical row type used to create schema.
+		 */
+		public Builder(RowType rowType) {
+			Preconditions.checkNotNull(rowType, "Row type must not be null.");
+
+			this.rowType = rowType;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(c).build();
+			return this;
+		}
+
+		public Builder setLineDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			if (!delimiter.equals("\n") && !delimiter.equals("\r") && !delimiter.equals("\r\n") && !delimiter.equals("")) {
+				throw new IllegalArgumentException(
+					"Unsupported new line delimiter. Only \\n, \\r, \\r\\n, or empty string are supported.");
+			}
+			this.csvSchema = this.csvSchema.rebuild().setLineSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder disableQuoteCharacter() {
+			this.csvSchema = this.csvSchema.rebuild().disableQuoteChar().build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String s) {
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(s).build();
+			return this;
+		}
+
+		public CsvRowDataSerializationSchema build() {
+			return new CsvRowDataSerializationSchema(
+				rowType,
+				csvSchema);
+		}
+	}
+
+	@Override
+	public byte[] serialize(RowData row) {
+		if (root == null) {
+			root = csvMapper.createObjectNode();
+		}
+		try {
+			runtimeConverter.convert(csvMapper, root, row);
+			return objectWriter.writeValueAsBytes(root);
+		} catch (Throwable t) {
+			throw new RuntimeException("Could not serialize row '" + row + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		if (this == o) {
+			return true;
+		}
+		final CsvRowDataSerializationSchema that = (CsvRowDataSerializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return rowType.equals(that.rowType) &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			Arrays.equals(csvSchema.getLineSeparator(), otherSchema.getLineSeparator()) &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			rowType,
+			csvSchema.getColumnSeparator(),
+			csvSchema.getLineSeparator(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// --------------------------------------------------------------------------------
+	// Runtime Converters
+	// --------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts objects of Flink Table & SQL internal data structures
+	 * to corresponding {@link JsonNode}s.
+	 */
+	private interface SerializationRuntimeConverter extends Serializable {

Review comment:
       Do you mean the naming? I just want to follow the current naming style in the CSV and JSON formats.

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result {@link TypeInformation} with
+		 * optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance
+			return jsonNode.asBoolean();
+		} else {
+			return Boolean.parseBoolean(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToInt(JsonNode jsonNode) {
+		if (jsonNode.canConvertToInt()) {
+			// avoid redundant toString and parseInt, for better performance
+			return jsonNode.asInt();
+		} else {
+			return Integer.parseInt(jsonNode.asText().trim());
+		}
+	}
+
+	private long convertToLong(JsonNode jsonNode) {
+		if (jsonNode.canConvertToLong()) {
+			// avoid redundant toString and parseLong, for better performance
+			return jsonNode.asLong();
+		} else {
+			return Long.parseLong(jsonNode.asText().trim());
+		}
+	}
+
+	private double convertToDouble(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return jsonNode.asDouble();
+		} else {
+			return Double.parseDouble(jsonNode.asText().trim());
+		}
+	}
+
+	private float convertToFloat(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return (float) jsonNode.asDouble();
+		} else {
+			return Float.parseFloat(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToDate(JsonNode jsonNode) {
+		// CSV currently uses Date.valueOf() to parse the date string
+		return (int) Date.valueOf(jsonNode.asText()).toLocalDate().toEpochDay();
+	}
+
+	private int convertToTime(JsonNode jsonNode) {
+		// CSV currently uses Time.valueOf() to parse the time string
+		LocalTime localTime = Time.valueOf(jsonNode.asText()).toLocalTime();
+		// get number of milliseconds of the day
+		return localTime.toSecondOfDay() * 1000;

Review comment:
       This also aligns with the previous behavior. `Time.valueOf()` doesn't support parsing milli/nanoseconds. 
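
       A minimal sketch of that behavior (the time values are chosen only for illustration):

       ```java
       import java.sql.Time;
       import java.time.LocalTime;

       public class TimeConversionSketch {
           public static void main(String[] args) {
               // Time.valueOf() only understands the "hh:mm:ss" format...
               LocalTime t = Time.valueOf("12:12:43").toLocalTime();

               // ...so the converted value is always a whole number of seconds,
               // expressed here as milliseconds of the day.
               System.out.println(t.toSecondOfDay() * 1000); // prints 43963000

               // A fractional-second literal is rejected outright.
               Time.valueOf("12:12:43.123"); // throws IllegalArgumentException
           }
       }
       ```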

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result
+		 * {@link TypeInformation} with optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance

Review comment:
       We want to support the case `{"f0": "true"}`, where the value is a string node rather than a boolean node. This is also how `CsvRowDeserializationSchema` processes boolean and numeric types. I just added a fast path (skipping the toString and parse when it is already a boolean node).
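
    As a rough illustration (using plain Jackson here instead of Flink's shaded classes), both node shapes end up as the same boolean:

    ```java
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class BooleanFastPathSketch {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            JsonNode boolNode = mapper.readTree("true");     // real boolean node
            JsonNode textNode = mapper.readTree("\"true\""); // textual node

            // Fast path: no string round-trip for a boolean node.
            System.out.println(boolNode.isBoolean() && boolNode.asBoolean()); // true

            // Fallback: a textual "true" still parses to the same value.
            System.out.println(Boolean.parseBoolean(textNode.asText().trim())); // true
        }
    }
    ```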




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingsongLi commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
JingsongLi commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r419927471



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataSerializationSchema.java
##########
@@ -0,0 +1,377 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectWriter;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ContainerNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.time.LocalDate;
+import java.time.LocalTime;
+import java.time.format.DateTimeFormatter;
+import java.time.format.DateTimeFormatterBuilder;
+import java.util.Arrays;
+import java.util.Objects;
+
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_DATE;
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_TIME;
+
+/**
+ * Serialization schema that serializes an object of the Flink Table & SQL internal data
+ * structure into CSV bytes.
+ *
+ * <p>Serializes the input row into a {@link JsonNode} and
+ * converts it into <code>byte[]</code>.
+ *
+ * <p>Result <code>byte[]</code> messages can be deserialized using {@link CsvRowDataDeserializationSchema}.
+ */
+@PublicEvolving
+public final class CsvRowDataSerializationSchema implements SerializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Logical row type describing the input CSV data. */
+	private final RowType rowType;
+
+	/** Runtime instance that performs the actual work. */
+	private final SerializationRuntimeConverter runtimeConverter;
+
+	/** CsvMapper used to write {@link JsonNode} into bytes. */
+	private final CsvMapper csvMapper;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object writer used to write rows. It is configured by {@link CsvSchema}. */
+	private final ObjectWriter objectWriter;
+
+	/** Reusable object node. */
+	private transient ObjectNode root;
+
+	private CsvRowDataSerializationSchema(
+			RowType rowType,
+			CsvSchema csvSchema) {
+		this.rowType = rowType;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvMapper = new CsvMapper();
+		this.csvSchema = csvSchema;
+		this.objectWriter = csvMapper.writer(csvSchema);
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataSerializationSchema}.
+	 */
+	@PublicEvolving
+	public static class Builder {
+
+		private final RowType rowType;
+		private CsvSchema csvSchema;
+
+		/**
+		 * Creates a {@link CsvRowDataSerializationSchema} expecting the given {@link RowType}.
+		 *
+		 * @param rowType logical row type used to create schema.
+		 */
+		public Builder(RowType rowType) {
+			Preconditions.checkNotNull(rowType, "Row type must not be null.");
+
+			this.rowType = rowType;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(c).build();
+			return this;
+		}
+
+		public Builder setLineDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			if (!delimiter.equals("\n") && !delimiter.equals("\r") && !delimiter.equals("\r\n") && !delimiter.equals("")) {
+				throw new IllegalArgumentException(
+					"Unsupported new line delimiter. Only \\n, \\r, \\r\\n, or empty string are supported.");
+			}
+			this.csvSchema = this.csvSchema.rebuild().setLineSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder disableQuoteCharacter() {
+			this.csvSchema = this.csvSchema.rebuild().disableQuoteChar().build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String s) {
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(s).build();
+			return this;
+		}
+
+		public CsvRowDataSerializationSchema build() {
+			return new CsvRowDataSerializationSchema(
+				rowType,
+				csvSchema);
+		}
+	}
+
+	@Override
+	public byte[] serialize(RowData row) {
+		if (root == null) {
+			root = csvMapper.createObjectNode();
+		}
+		try {
+			runtimeConverter.convert(csvMapper, root, row);
+			return objectWriter.writeValueAsBytes(root);
+		} catch (Throwable t) {
+			throw new RuntimeException("Could not serialize row '" + row + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		if (this == o) {
+			return true;
+		}
+		final CsvRowDataSerializationSchema that = (CsvRowDataSerializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return rowType.equals(that.rowType) &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			Arrays.equals(csvSchema.getLineSeparator(), otherSchema.getLineSeparator()) &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			rowType,
+			csvSchema.getColumnSeparator(),
+			csvSchema.getLineSeparator(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// --------------------------------------------------------------------------------
+	// Runtime Converters
+	// --------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts objects of Flink Table & SQL internal data structures
+	 * to corresponding {@link JsonNode}s.
+	 */
+	private interface SerializationRuntimeConverter extends Serializable {

Review comment:
       I don't think we should use `RowData.get(RowData row, int pos, LogicalType fieldType)` when we have a better choice.
   It is not only about the performance of CSV; this code is also an important example for users.
   A major goal of `RowData` is to encourage users to call the specific `get` methods instead of the inefficient, generic `get`.
   We need a better approach before the 1.11 release.
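
    A hypothetical sketch of that direction (the `FieldGetter` interface below is purely illustrative, not an existing API): build one specialized accessor per field at schema-creation time, so the per-record loop hits the typed getters directly instead of a generic, type-dispatched `get`:

    ```java
    import java.io.Serializable;

    import org.apache.flink.table.data.RowData;
    import org.apache.flink.table.types.logical.LogicalType;

    public class FieldGetterSketch {

        /** Illustrative accessor created once per field and reused for every record. */
        interface FieldGetter extends Serializable {
            Object getFieldOrNull(RowData row, int pos);
        }

        static FieldGetter createFieldGetter(LogicalType type) {
            final FieldGetter getter;
            switch (type.getTypeRoot()) {
                case INTEGER:
                    getter = (row, pos) -> row.getInt(pos);
                    break;
                case BIGINT:
                    getter = (row, pos) -> row.getLong(pos);
                    break;
                case DOUBLE:
                    getter = (row, pos) -> row.getDouble(pos);
                    break;
                case VARCHAR:
                    getter = (row, pos) -> row.getString(pos);
                    break;
                default:
                    throw new UnsupportedOperationException("Not covered in this sketch: " + type);
            }
            // Null handling stays generic; the typed access above is the hot path.
            return (row, pos) -> row.isNullAt(pos) ? null : getter.getFieldOrNull(row, pos);
        }
    }
    ```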







[GitHub] [flink] flinkbot commented on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-621825270


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "98481e8219841500289cabd8d3e7b1b0787207ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 98481e8219841500289cabd8d3e7b1b0787207ea UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] wuchong commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r420070718



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result
+		 * {@link TypeInformation} with optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance
+			return jsonNode.asBoolean();
+		} else {
+			return Boolean.parseBoolean(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToInt(JsonNode jsonNode) {
+		if (jsonNode.canConvertToInt()) {
+			// avoid redundant toString and parseInt, for better performance
+			return jsonNode.asInt();
+		} else {
+			return Integer.parseInt(jsonNode.asText().trim());
+		}
+	}
+
+	private long convertToLong(JsonNode jsonNode) {
+		if (jsonNode.canConvertToLong()) {
+			// avoid redundant toString and parseLong, for better performance
+			return jsonNode.asLong();
+		} else {
+			return Long.parseLong(jsonNode.asText().trim());
+		}
+	}
+
+	private double convertToDouble(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return jsonNode.asDouble();
+		} else {
+			return Double.parseDouble(jsonNode.asText().trim());
+		}
+	}
+
+	private float convertToFloat(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return (float) jsonNode.asDouble();
+		} else {
+			return Float.parseFloat(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToDate(JsonNode jsonNode) {
+		// CSV currently uses Date.valueOf() to parse the date string
+		return (int) Date.valueOf(jsonNode.asText()).toLocalDate().toEpochDay();
+	}
+
+	private int convertToTime(JsonNode jsonNode) {
+		// CSV currently uses Time.valueOf() to parse the time string
+		LocalTime localTime = Time.valueOf(jsonNode.asText()).toLocalTime();
+		// get number of milliseconds of the day
+		return localTime.toSecondOfDay() * 1000;

Review comment:
       Thanks @JingsongLi, I created FLINK-17525 to track this.







[GitHub] [flink] wuchong commented on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
wuchong commented on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-624425766


   The build failed because of `UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel`, which is tracked by FLINK-17315.
   
   It passed in my private build: https://dev.azure.com/imjark/Flink/_build/results?buildId=45&view=results
   
   Will merge this.





[GitHub] [flink] JingsongLi commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
JingsongLi commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r419923975



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result
+		 * {@link TypeInformation} with optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name in the first level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance
+			return jsonNode.asBoolean();
+		} else {
+			return Boolean.parseBoolean(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToInt(JsonNode jsonNode) {
+		if (jsonNode.canConvertToInt()) {
+			// avoid redundant toString and parseInt, for better performance
+			return jsonNode.asInt();
+		} else {
+			return Integer.parseInt(jsonNode.asText().trim());
+		}
+	}
+
+	private long convertToLong(JsonNode jsonNode) {
+		if (jsonNode.canConvertToLong()) {
+			// avoid redundant toString and parseLong, for better performance
+			return jsonNode.asLong();
+		} else {
+			return Long.parseLong(jsonNode.asText().trim());
+		}
+	}
+
+	private double convertToDouble(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return jsonNode.asDouble();
+		} else {
+			return Double.parseDouble(jsonNode.asText().trim());
+		}
+	}
+
+	private float convertToFloat(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return (float) jsonNode.asDouble();
+		} else {
+			return Float.parseFloat(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToDate(JsonNode jsonNode) {
+		// CSV currently uses Date.valueOf() to parse the date string
+		return (int) Date.valueOf(jsonNode.asText()).toLocalDate().toEpochDay();
+	}
+
+	private int convertToTime(JsonNode jsonNode) {
+		// CSV currently uses Time.valueOf() to parse the time string
+		LocalTime localTime = Time.valueOf(jsonNode.asText()).toLocalTime();
+		// get number of milliseconds of the day
+		return localTime.toSecondOfDay() * 1000;

Review comment:
       The previous behavior exists because the old planner only supports seconds, while the Blink planner supports milliseconds and more.
   We are using `LocalTime` instead of `Time` now. If you don't want to change it, you should add a TODO and create a JIRA.
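
    A small sketch of the alternative (illustrative code, not from this PR): parsing through `java.time.LocalTime` keeps the fractional part, so milliseconds survive:

    ```java
    import java.time.LocalTime;

    public class LocalTimeMillisSketch {
        public static void main(String[] args) {
            // LocalTime.parse accepts an optional fraction, unlike java.sql.Time.valueOf.
            LocalTime t = LocalTime.parse("12:12:43.123");

            // Milliseconds of the day, including the fractional part.
            int millisOfDay = t.toSecondOfDay() * 1000 + t.getNano() / 1_000_000;
            System.out.println(millisOfDay); // 43963123
        }
    }
    ```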







[GitHub] [flink] JingsongLi commented on a change in pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

Posted by GitBox <gi...@apache.org>.
JingsongLi commented on a change in pull request #11962:
URL: https://github.com/apache/flink/pull/11962#discussion_r417999723



##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a CSV deserialization schema for the given {@link RowType} and result
+		 * {@link TypeInformation} with optional parameters.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name at the top level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {

Review comment:
       Just use `jsonNode.isBoolean() ? jsonNode.asBoolean() : Boolean.parseBoolean(jsonNode.asText().trim())`?
   Same below.
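   For reference, a minimal sketch of the suggested one-liner (illustrative only, not the committed code):

   ```java
   private boolean convertToBoolean(JsonNode jsonNode) {
       // Fast path for genuine boolean nodes; otherwise parse the trimmed text.
       return jsonNode.isBoolean()
           ? jsonNode.asBoolean()
           : Boolean.parseBoolean(jsonNode.asText().trim());
   }
   ```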

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a builder for a CSV deserialization schema that deserializes into the given
+		 * {@link RowType} and exposes the given {@link TypeInformation} as its produced type.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name at the top level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance

Review comment:
       Why can't we use `node.asBoolean()` directly?
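   (For the record, a hedged guess at why the guard exists: assuming the shaded Jackson's
   `TextNode.asBoolean()` only recognizes the exact literals `"true"`/`"false"` and otherwise
   returns the default, the two calls differ on case. This snippet is illustrative only and
   not verified against the shaded Jackson version:)

   ```java
   import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
   import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.TextNode;

   JsonNode node = TextNode.valueOf("TRUE");                    // CSV fields arrive as text nodes
   boolean a = node.asBoolean();                                // false: "TRUE" is not the literal "true"
   boolean b = Boolean.parseBoolean(node.asText().trim());      // true: parseBoolean is case-insensitive
   ```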

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataSerializationSchema.java
##########
@@ -0,0 +1,377 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectWriter;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ContainerNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.time.LocalDate;
+import java.time.LocalTime;
+import java.time.format.DateTimeFormatter;
+import java.time.format.DateTimeFormatterBuilder;
+import java.util.Arrays;
+import java.util.Objects;
+
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_DATE;
+import static java.time.format.DateTimeFormatter.ISO_LOCAL_TIME;
+
+/**
+ * Serialization schema that serializes an object of the Flink Table & SQL internal data
+ * structures into CSV bytes.
+ *
+ * <p>Serializes the input row into a {@link JsonNode} and
+ * converts it into <code>byte[]</code>.
+ *
+ * <p>Result <code>byte[]</code> messages can be deserialized using {@link CsvRowDataDeserializationSchema}.
+ */
+@PublicEvolving
+public final class CsvRowDataSerializationSchema implements SerializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Logical row type describing the input CSV data. */
+	private final RowType rowType;
+
+	/** Runtime instance that performs the actual work. */
+	private final SerializationRuntimeConverter runtimeConverter;
+
+	/** CsvMapper used to write {@link JsonNode} into bytes. */
+	private final CsvMapper csvMapper;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object writer used to write rows. It is configured by {@link CsvSchema}. */
+	private final ObjectWriter objectWriter;
+
+	/** Reusable object node. */
+	private transient ObjectNode root;
+
+	private CsvRowDataSerializationSchema(
+			RowType rowType,
+			CsvSchema csvSchema) {
+		this.rowType = rowType;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvMapper = new CsvMapper();
+		this.csvSchema = csvSchema;
+		this.objectWriter = csvMapper.writer(csvSchema);
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataSerializationSchema}.
+	 */
+	@PublicEvolving
+	public static class Builder {
+
+		private final RowType rowType;
+		private CsvSchema csvSchema;
+
+		/**
+		 * Creates a {@link CsvRowDataSerializationSchema} expecting the given {@link RowType}.
+		 *
+		 * @param rowType logical row type used to create schema.
+		 */
+		public Builder(RowType rowType) {
+			Preconditions.checkNotNull(rowType, "Row type must not be null.");
+
+			this.rowType = rowType;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(c).build();
+			return this;
+		}
+
+		public Builder setLineDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			if (!delimiter.equals("\n") && !delimiter.equals("\r") && !delimiter.equals("\r\n") && !delimiter.equals("")) {
+				throw new IllegalArgumentException(
+					"Unsupported new line delimiter. Only \\n, \\r, \\r\\n, or empty string are supported.");
+			}
+			this.csvSchema = this.csvSchema.rebuild().setLineSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder disableQuoteCharacter() {
+			this.csvSchema = this.csvSchema.rebuild().disableQuoteChar().build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String s) {
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(s).build();
+			return this;
+		}
+
+		public CsvRowDataSerializationSchema build() {
+			return new CsvRowDataSerializationSchema(
+				rowType,
+				csvSchema);
+		}
+	}
+
+	@Override
+	public byte[] serialize(RowData row) {
+		if (root == null) {
+			root = csvMapper.createObjectNode();
+		}
+		try {
+			runtimeConverter.convert(csvMapper, root, row);
+			return objectWriter.writeValueAsBytes(root);
+		} catch (Throwable t) {
+			throw new RuntimeException("Could not serialize row '" + row + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		if (this == o) {
+			return true;
+		}
+		final CsvRowDataSerializationSchema that = (CsvRowDataSerializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return rowType.equals(that.rowType) &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			Arrays.equals(csvSchema.getLineSeparator(), otherSchema.getLineSeparator()) &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			rowType,
+			csvSchema.getColumnSeparator(),
+			csvSchema.getLineSeparator(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// --------------------------------------------------------------------------------
+	// Runtime Converters
+	// --------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts objects of Flink Table & SQL internal data structures
+	 * to corresponding {@link JsonNode}s.
+	 */
+	private interface SerializationRuntimeConverter extends Serializable {

Review comment:
       I think you should use something like `ParquetRowDataWriter.FieldWriter`.
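   Perhaps something along these lines (a sketch only; the names are hypothetical, just to show
   the `FieldWriter`-style shape where the type dispatch happens once per column instead of once
   per value):

   ```java
   /** Converts one field of a {@link RowData} into a {@link JsonNode}; one instance per column. */
   private interface RowFieldConverter extends Serializable {
       JsonNode convert(CsvMapper csvMapper, ContainerNode<?> container, RowData row, int pos);
   }
   ```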

##########
File path: flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java
##########
@@ -0,0 +1,458 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectReader;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.lang.reflect.Array;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.LocalTime;
+import java.util.Arrays;
+import java.util.Objects;
+
+/**
+ * Deserialization schema from CSV to Flink Table & SQL internal data structures.
+ *
+ * <p>Deserializes a <code>byte[]</code> message as a {@link JsonNode} and
+ * converts it to {@link RowData}.
+ *
+ * <p>Failures during deserialization are forwarded as wrapped {@link IOException}s.
+ */
+@Internal
+public final class CsvRowDataDeserializationSchema implements DeserializationSchema<RowData> {
+
+	private static final long serialVersionUID = 1L;
+
+	/** Type information describing the result type. */
+	private final TypeInformation<RowData> resultTypeInfo;
+
+	/** Runtime instance that performs the actual work. */
+	private final DeserializationRuntimeConverter runtimeConverter;
+
+	/** Schema describing the input CSV data. */
+	private final CsvSchema csvSchema;
+
+	/** Object reader used to read rows. It is configured by {@link CsvSchema}. */
+	private final ObjectReader objectReader;
+
+	/** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */
+	private final boolean ignoreParseErrors;
+
+	private CsvRowDataDeserializationSchema(
+			RowType rowType,
+			TypeInformation<RowData> resultTypeInfo,
+			CsvSchema csvSchema,
+			boolean ignoreParseErrors) {
+		this.resultTypeInfo = resultTypeInfo;
+		this.runtimeConverter = createRowConverter(rowType, true);
+		this.csvSchema = csvSchema;
+		this.objectReader = new CsvMapper().readerFor(JsonNode.class).with(csvSchema);
+		this.ignoreParseErrors = ignoreParseErrors;
+	}
+
+	/**
+	 * A builder for creating a {@link CsvRowDataDeserializationSchema}.
+	 */
+	@Internal
+	public static class Builder {
+
+		private final RowType rowType;
+		private final TypeInformation<RowData> resultTypeInfo;
+		private CsvSchema csvSchema;
+		private boolean ignoreParseErrors;
+
+		/**
+		 * Creates a builder for a CSV deserialization schema that deserializes into the given
+		 * {@link RowType} and exposes the given {@link TypeInformation} as its produced type.
+		 */
+		public Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) {
+			Preconditions.checkNotNull(rowType, "RowType must not be null.");
+			Preconditions.checkNotNull(resultTypeInfo, "Result type information must not be null.");
+			this.rowType = rowType;
+			this.resultTypeInfo = resultTypeInfo;
+			this.csvSchema = CsvRowSchemaConverter.convert(rowType);
+		}
+
+		public Builder setFieldDelimiter(char delimiter) {
+			this.csvSchema = this.csvSchema.rebuild().setColumnSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setAllowComments(boolean allowComments) {
+			this.csvSchema = this.csvSchema.rebuild().setAllowComments(allowComments).build();
+			return this;
+		}
+
+		public Builder setArrayElementDelimiter(String delimiter) {
+			Preconditions.checkNotNull(delimiter, "Array element delimiter must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setArrayElementSeparator(delimiter).build();
+			return this;
+		}
+
+		public Builder setQuoteCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setQuoteChar(c).build();
+			return this;
+		}
+
+		public Builder setEscapeCharacter(char c) {
+			this.csvSchema = this.csvSchema.rebuild().setEscapeChar(c).build();
+			return this;
+		}
+
+		public Builder setNullLiteral(String nullLiteral) {
+			Preconditions.checkNotNull(nullLiteral, "Null literal must not be null.");
+			this.csvSchema = this.csvSchema.rebuild().setNullValue(nullLiteral).build();
+			return this;
+		}
+
+		public Builder setIgnoreParseErrors(boolean ignoreParseErrors) {
+			this.ignoreParseErrors = ignoreParseErrors;
+			return this;
+		}
+
+		public CsvRowDataDeserializationSchema build() {
+			return new CsvRowDataDeserializationSchema(
+				rowType,
+				resultTypeInfo,
+				csvSchema,
+				ignoreParseErrors);
+		}
+	}
+
+	@Override
+	public RowData deserialize(byte[] message) throws IOException {
+		try {
+			final JsonNode root = objectReader.readValue(message);
+			return (RowData) runtimeConverter.convert(root);
+		} catch (Throwable t) {
+			if (ignoreParseErrors) {
+				return null;
+			}
+			throw new IOException("Failed to deserialize CSV row '" + new String(message) + "'.", t);
+		}
+	}
+
+	@Override
+	public boolean isEndOfStream(RowData nextElement) {
+		return false;
+	}
+
+	@Override
+	public TypeInformation<RowData> getProducedType() {
+		return resultTypeInfo;
+	}
+
+	@Override
+	public boolean equals(Object o) {
+		if (this == o) {
+			return true;
+		}
+		if (o == null || o.getClass() != this.getClass()) {
+			return false;
+		}
+		final CsvRowDataDeserializationSchema that = (CsvRowDataDeserializationSchema) o;
+		final CsvSchema otherSchema = that.csvSchema;
+
+		return resultTypeInfo.equals(that.resultTypeInfo) &&
+			ignoreParseErrors == that.ignoreParseErrors &&
+			csvSchema.getColumnSeparator() == otherSchema.getColumnSeparator() &&
+			csvSchema.allowsComments() == otherSchema.allowsComments() &&
+			csvSchema.getArrayElementSeparator().equals(otherSchema.getArrayElementSeparator()) &&
+			csvSchema.getQuoteChar() == otherSchema.getQuoteChar() &&
+			csvSchema.getEscapeChar() == otherSchema.getEscapeChar() &&
+			Arrays.equals(csvSchema.getNullValue(), otherSchema.getNullValue());
+	}
+
+	@Override
+	public int hashCode() {
+		return Objects.hash(
+			resultTypeInfo,
+			ignoreParseErrors,
+			csvSchema.getColumnSeparator(),
+			csvSchema.allowsComments(),
+			csvSchema.getArrayElementSeparator(),
+			csvSchema.getQuoteChar(),
+			csvSchema.getEscapeChar(),
+			csvSchema.getNullValue());
+	}
+
+	// -------------------------------------------------------------------------------------
+	// Runtime Converters
+	// -------------------------------------------------------------------------------------
+
+	/**
+	 * Runtime converter that converts {@link JsonNode}s into objects of Flink Table & SQL
+	 * internal data structures.
+	 */
+	@FunctionalInterface
+	private interface DeserializationRuntimeConverter extends Serializable {
+		Object convert(JsonNode jsonNode);
+	}
+
+	private DeserializationRuntimeConverter createRowConverter(RowType rowType, boolean isTopLevel) {
+		final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
+			.map(RowType.RowField::getType)
+			.map(this::createNullableConverter)
+			.toArray(DeserializationRuntimeConverter[]::new);
+		final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);
+		final int arity = fieldNames.length;
+
+		return jsonNode -> {
+			int nodeSize = jsonNode.size();
+
+			validateArity(arity, nodeSize, ignoreParseErrors);
+
+			GenericRowData row = new GenericRowData(arity);
+			for (int i = 0; i < arity; i++) {
+				JsonNode field;
+				// Jackson only supports mapping by name at the top level
+				if (isTopLevel) {
+					field = jsonNode.get(fieldNames[i]);
+				} else {
+					field = jsonNode.get(i);
+				}
+				if (field == null) {
+					row.setField(i, null);
+				} else {
+					row.setField(i, fieldConverters[i].convert(field));
+				}
+			}
+			return row;
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which is null safe.
+	 */
+	private DeserializationRuntimeConverter createNullableConverter(LogicalType type) {
+		final DeserializationRuntimeConverter converter = createConverter(type);
+		return jsonNode -> {
+			if (jsonNode == null || jsonNode.isNull()) {
+				return null;
+			}
+			try {
+				return converter.convert(jsonNode);
+			} catch (Throwable t) {
+				if (!ignoreParseErrors) {
+					throw t;
+				}
+				return null;
+			}
+		};
+	}
+
+	/**
+	 * Creates a runtime converter which assumes the input object is not null.
+	 */
+	private DeserializationRuntimeConverter createConverter(LogicalType type) {
+		switch (type.getTypeRoot()) {
+			case NULL:
+				return jsonNode -> null;
+			case BOOLEAN:
+				return this::convertToBoolean;
+			case TINYINT:
+				return jsonNode -> Byte.parseByte(jsonNode.asText().trim());
+			case SMALLINT:
+				return jsonNode -> Short.parseShort(jsonNode.asText().trim());
+			case INTEGER:
+			case INTERVAL_YEAR_MONTH:
+				return this::convertToInt;
+			case BIGINT:
+			case INTERVAL_DAY_TIME:
+				return this::convertToLong;
+			case DATE:
+				return this::convertToDate;
+			case TIME_WITHOUT_TIME_ZONE:
+				return this::convertToTime;
+			case TIMESTAMP_WITH_TIME_ZONE:
+			case TIMESTAMP_WITHOUT_TIME_ZONE:
+				return this::convertToTimestamp;
+			case FLOAT:
+				return this::convertToFloat;
+			case DOUBLE:
+				return this::convertToDouble;
+			case CHAR:
+			case VARCHAR:
+				return this::convertToString;
+			case BINARY:
+			case VARBINARY:
+				return this::convertToBytes;
+			case DECIMAL:
+				return createDecimalConverter((DecimalType) type);
+			case ARRAY:
+				return createArrayConverter((ArrayType) type);
+			case ROW:
+				return createRowConverter((RowType) type, false);
+			case MAP:
+			case MULTISET:
+			case RAW:
+			default:
+				throw new UnsupportedOperationException("Unsupported type: " + type);
+		}
+	}
+
+	private boolean convertToBoolean(JsonNode jsonNode) {
+		if (jsonNode.isBoolean()) {
+			// avoid redundant toString and parseBoolean, for better performance
+			return jsonNode.asBoolean();
+		} else {
+			return Boolean.parseBoolean(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToInt(JsonNode jsonNode) {
+		if (jsonNode.canConvertToInt()) {
+			// avoid redundant toString and parseInt, for better performance
+			return jsonNode.asInt();
+		} else {
+			return Integer.parseInt(jsonNode.asText().trim());
+		}
+	}
+
+	private long convertToLong(JsonNode jsonNode) {
+		if (jsonNode.canConvertToLong()) {
+			// avoid redundant toString and parseLong, for better performance
+			return jsonNode.asLong();
+		} else {
+			return Long.parseLong(jsonNode.asText().trim());
+		}
+	}
+
+	private double convertToDouble(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return jsonNode.asDouble();
+		} else {
+			return Double.parseDouble(jsonNode.asText().trim());
+		}
+	}
+
+	private float convertToFloat(JsonNode jsonNode) {
+		if (jsonNode.isDouble()) {
+			// avoid redundant toString and parseDouble, for better performance
+			return (float) jsonNode.asDouble();
+		} else {
+			return Float.parseFloat(jsonNode.asText().trim());
+		}
+	}
+
+	private int convertToDate(JsonNode jsonNode) {
+		// The CSV format currently uses Date.valueOf() to parse the date string
+		return (int) Date.valueOf(jsonNode.asText()).toLocalDate().toEpochDay();
+	}
+
+	private int convertToTime(JsonNode jsonNode) {
+		// The CSV format currently uses Time.valueOf() to parse the time string
+		LocalTime localTime = Time.valueOf(jsonNode.asText()).toLocalTime();
+		// get number of milliseconds of the day
+		return localTime.toSecondOfDay() * 1000;

Review comment:
       Loses millisecond precision here; use `toNanoOfDay`?
   BTW, add tests for this.
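   E.g., a minimal sketch of a millisecond-preserving variant (untested; note that
   `Time.valueOf()` itself truncates fractional seconds, so this sketch assumes the parse path
   also switches to `LocalTime.parse()` on ISO-formatted input):

   ```java
   private int convertToTime(JsonNode jsonNode) {
       // LocalTime.parse keeps fractional seconds, unlike Time.valueOf()
       LocalTime localTime = LocalTime.parse(jsonNode.asText().trim());
       // number of milliseconds of the day, without truncating to whole seconds
       return (int) (localTime.toNanoOfDay() / 1_000_000);
   }
   ```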




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org