Posted to issues@flink.apache.org by zentol <gi...@git.apache.org> on 2015/10/19 00:34:45 UTC

[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

GitHub user zentol opened a pull request:

    https://github.com/apache/flink/pull/1266

    [FLINK-2692] Untangle CsvInputFormat

    This PR splits the CsvInputFormat into a Tuple and a POJO version. To this end, the (Common)CsvInputFormat classes were merged, and the type-specific portions were refactored into separate classes.
    
    Additionally, the ScalaCsvInputFormat has been removed; the Java and Scala APIs now use the same InputFormats. Previously, the formats differed in the way they created the output tuples; this is now handled by a newly introduced abstract method "createOrReuseInstance(Object[] fieldValues, T reuse)" within the TupleSerializerBase.
    
    Fields to include and field names are no longer passed via setters, but instead via the constructor. Several new constructors were added to accommodate different use cases, along with 2 new static methods to generate a default include mask, or to convert an int[] list of field indices into a boolean include mask.
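
    For illustration, the two static helpers might look roughly like this (a hypothetical sketch based on the description above and the diff further down, not the exact committed code):

    ```java
    import java.util.Arrays;

    // Hypothetical sketch of the two static mask helpers described above.
    public class MaskSketch {

        // Include mask that selects all of the first `size` fields.
        static boolean[] createDefaultMask(int size) {
            boolean[] mask = new boolean[size];
            Arrays.fill(mask, true);
            return mask;
        }

        // Convert an int[] list of field indices into a boolean include mask.
        static boolean[] toBooleanMask(int[] sourceFieldIndices) {
            int max = -1;
            for (int i : sourceFieldIndices) {
                if (i < 0) {
                    throw new IllegalArgumentException("Field indices must not be negative.");
                }
                max = Math.max(max, i);
            }
            boolean[] mask = new boolean[max + 1];
            for (int i : sourceFieldIndices) {
                mask[i] = true;
            }
            return mask;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(createDefaultMask(3)));            // [true, true, true]
            System.out.println(Arrays.toString(toBooleanMask(new int[]{0, 2})));  // [true, false, true]
        }
    }
    ```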
    
    Classes no longer have to be passed separately, as they are extracted from the TypeInformation object.
    
    A few sanity checks were moved from the ExecEnvironment to the InputFormat.
    
    The testReadSparseWithShuffledPositions test was removed, since a monotonically increasing order of field indices is not (and, as far as I know, never was) actually necessary, due to the way the indices are converted to a boolean[].

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zentol/flink 2692_csv

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/1266.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1266
    
----
commit d497415adc2e58b4e9912ae89a53444825416366
Author: zentol <s....@web.de>
Date:   2015-10-18T18:23:23Z

    [FLINK-2692] Untangle CsvInputFormat

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-152592464
  
    yes, the user-facing API is unchanged; it's only a problem for those that use the CsvInputFormat directly, e.g. for a createInput() call.



Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44435787
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
    +		// this marker is used to fasten up readRecord, so that it doesn't have to check each call if the line ending is set to default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    --- End diff --
    
    I wanted to cover that case directly in the InputFormat instead of *somewhere* else. This method is used to create a mask for exactly that case, when we can infer the mask from the number of field types.



Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44438028
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/PojoCsvInputFormat.java ---
    @@ -0,0 +1,232 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.flink.api.java.io;
    +
    +import com.google.common.base.Preconditions;
    +import org.apache.flink.api.java.typeutils.PojoTypeInfo;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.core.fs.Path;
    +
    +import java.io.IOException;
    +import java.lang.reflect.Field;
    +import java.util.Arrays;
    +import java.util.HashMap;
    +import java.util.Map;
    +
    +public class PojoCsvInputFormat<OUT> extends CsvInputFormat<OUT> {
    +
    +	private static final long serialVersionUID = 1L;
    +
    +	private Class<OUT> pojoTypeClass;
    +
    +	private String[] pojoFieldNames;
    +
    +	private transient PojoTypeInfo<OUT> pojoTypeInfo;
    +	private transient Field[] pojoFields;
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo, fieldNames, createDefaultMask(pojoTypeInfo.getArity()));
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo) {
    +		this(filePath, lineDelimiter, fieldDelimiter, pojoTypeInfo, pojoTypeInfo.getFieldNames(), createDefaultMask(pojoTypeInfo.getArity()));
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames) {
    +		this(filePath, lineDelimiter, fieldDelimiter, pojoTypeInfo, fieldNames, createDefaultMask(fieldNames.length));
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo, int[] includedFieldsMask) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo, pojoTypeInfo.getFieldNames(), toBooleanMask(includedFieldsMask));
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames, int[] includedFieldsMask) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo, fieldNames, includedFieldsMask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, int[] includedFieldsMask) {
    +		this(filePath, lineDelimiter, fieldDelimiter, pojoTypeInfo, pojoTypeInfo.getFieldNames(), includedFieldsMask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames, int[] includedFieldsMask) {
    +		super(filePath);
    +		boolean[] mask = (includedFieldsMask == null)
    +				? createDefaultMask(fieldNames.length)
    +				: toBooleanMask(includedFieldsMask);
    +		configure(lineDelimiter, fieldDelimiter, pojoTypeInfo, fieldNames, mask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo, boolean[] includedFieldsMask) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo, pojoTypeInfo.getFieldNames(), includedFieldsMask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames, boolean[] includedFieldsMask) {
    +		this(filePath, DEFAULT_LINE_DELIMITER, DEFAULT_FIELD_DELIMITER, pojoTypeInfo, fieldNames, includedFieldsMask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, boolean[] includedFieldsMask) {
    +		this(filePath, lineDelimiter, fieldDelimiter, pojoTypeInfo, pojoTypeInfo.getFieldNames(), includedFieldsMask);
    +	}
    +
    +	public PojoCsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames, boolean[] includedFieldsMask) {
    +		super(filePath);
    +		configure(lineDelimiter, fieldDelimiter, pojoTypeInfo, fieldNames, includedFieldsMask);
    +	}
    +
    +	private void configure(String lineDelimiter, String fieldDelimiter, PojoTypeInfo<OUT> pojoTypeInfo, String[] fieldNames, boolean[] includedFieldsMask) {
    +
    +		if (includedFieldsMask == null) {
    +			includedFieldsMask = new boolean[fieldNames.length];
    --- End diff --
    
    use `createDefaultMask()`?



Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-155515305
  
    I agree with @aljoscha and you on the `readRecord()` code. It would be nice to have the common parts of `readRecord()` in the `CsvInputFormat` and the specific `fillRecord()` implementations in the tuple and POJO formats.
    
    Otherwise, the PR looks really good.
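
    The suggested split might look roughly like the following toy sketch (hypothetical names and a fake parse step, just to show the shape of the refactoring, not Flink's actual signatures):

    ```java
    // Toy sketch of the suggested refactoring: the common parsing loop lives in
    // the base class, and only the type-specific "fill" step is left to subclasses.
    abstract class SketchCsvFormat<OUT> {

        // Common part: split the line into field values, then delegate.
        public OUT readRecord(OUT reuse, String line) {
            Object[] parsedValues = line.split(",");  // stand-in for the real field parsers
            return fillRecord(reuse, parsedValues);
        }

        // Type-specific part, e.g. Tuple.setField(...) in the tuple format
        // or reflective Field.set(...) in the POJO format.
        protected abstract OUT fillRecord(OUT reuse, Object[] parsedValues);
    }

    // A minimal "tuple-like" subclass for demonstration.
    class SketchArrayFormat extends SketchCsvFormat<Object[]> {
        @Override
        protected Object[] fillRecord(Object[] reuse, Object[] parsedValues) {
            System.arraycopy(parsedValues, 0, reuse, 0, parsedValues.length);
            return reuse;
        }
    }

    public class FillRecordDemo {
        public static void main(String[] args) {
            SketchArrayFormat format = new SketchArrayFormat();
            Object[] record = format.readRecord(new Object[2], "hello,world");
            System.out.println(record[0] + " " + record[1]);  // hello world
        }
    }
    ```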



Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44434150
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
    +		// this marker is used to fasten up readRecord, so that it doesn't have to check each call if the line ending is set to default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    --- End diff --
    
    Isn't the default that fields are read one after the other from the start of a line?
    Why do we need this method then?



Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44439588
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
    +		// this marker is used to fasten up readRecord, so that it doesn't have to check each call if the line ending is set to default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    +		boolean[] includedMask = new boolean[size];
    +		for (int x=0; x<includedMask.length; x++) {
    +			includedMask[x] = true;
    +		}
    +		return includedMask;
    +	}
    +
    +	protected static boolean[] toBooleanMask(int[] sourceFieldIndices) {
    --- End diff --
    
    I see your point, but I don't think it's due to these methods. They follow a similar implementation in GenericCsvInputFormat.setFieldsGeneric that was used until now.
    
    The key thing is that previously we checked the indices for a monotonically increasing order, so the case you described couldn't occur. That check wasn't technically necessary, hence I removed it.
    
    We can either re-add that check, or add documentation to the CsvInputFormat constructor and the Scala ExecutionEnvironment.readCsvFile method.
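
    To make the concern concrete: the boolean mask discards the order of the indices, so shuffled and sorted index lists select the same columns; what changes is only which declared field type is paired with which selected column. A small illustration (toBooleanMask sketched from the diff above, not the exact committed code):

    ```java
    import java.util.Arrays;

    // Illustration: the boolean include mask is order-insensitive, which is why
    // the monotonic-order check was not technically necessary.
    public class MaskOrderDemo {

        static boolean[] toBooleanMask(int[] sourceFieldIndices) {
            int max = -1;
            for (int i : sourceFieldIndices) {
                max = Math.max(max, i);
            }
            boolean[] mask = new boolean[max + 1];
            for (int i : sourceFieldIndices) {
                mask[i] = true;
            }
            return mask;
        }

        public static void main(String[] args) {
            // {2, 0} and {0, 2} produce the identical include mask:
            System.out.println(Arrays.equals(
                    toBooleanMask(new int[]{2, 0}),
                    toBooleanMask(new int[]{0, 2})));  // true
        }
    }
    ```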



Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44441239
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
    +		// this marker is used to fasten up readRecord, so that it doesn't have to check each call if the line ending is set to default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    +		boolean[] includedMask = new boolean[size];
    +		for (int x=0; x<includedMask.length; x++) {
    +			includedMask[x] = true;
    +		}
    +		return includedMask;
    +	}
    +
    +	protected static boolean[] toBooleanMask(int[] sourceFieldIndices) {
    --- End diff --
    
    My mistake, I thought the static methods would be publicly accessible. It is of course good to have these methods internally.



Posted by aljoscha <gi...@git.apache.org>.
Github user aljoscha commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-152585852
  
    Could, yes, but I'm fine with both, it is already cleaner than what we had before.
    
    The user-facing API is not changed, right? So I think this is good to merge.



Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-157662815
  
    Any other comments? Otherwise I'll merge it later on.



Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-152584371
  
    We could move the differing lines (Csv:L114 Pojo:L218->L225) into a separate method that is called from a generic readRecord() method, something like fillRecord(OUT reuse, Object[] parsedValues).



Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44435810
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
    +		// this marker is used to fasten up readRecord, so that it doesn't have to check each call if the line ending is set to default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    --- End diff --
    
    *cover it in an obvious manner


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-157702242
  
    Merging this.



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by zentol <gi...@git.apache.org>.
Github user zentol commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-155764846
  
    @fhueske I've addressed your comments.



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by aljoscha <gi...@git.apache.org>.
Github user aljoscha commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-152544715
  
    Looks good to me. Is there maybe a way to also make `readRecord()` common to tuples and POJOs? If I'm not mistaken, we duplicate all the code there, and only the call that creates (or fills) the tuple or POJO differs.
    
    Thanks for the work you're putting in here, I know it's not glorious, but some parts need cleanup/refactoring. :smile: 



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1266#discussion_r44434469
  
    --- Diff: flink-java/src/main/java/org/apache/flink/api/java/io/CsvInputFormat.java ---
    @@ -18,32 +18,97 @@
     
     package org.apache.flink.api.java.io;
     
    +import com.google.common.base.Preconditions;
    +import com.google.common.primitives.Ints;
    +import org.apache.flink.api.common.io.GenericCsvInputFormat;
    +import org.apache.flink.core.fs.FileInputSplit;
    +import org.apache.flink.types.parser.FieldParser;
     
    -import org.apache.flink.api.common.typeutils.CompositeType;
    -import org.apache.flink.api.java.tuple.Tuple;
    +import java.io.IOException;
     import org.apache.flink.core.fs.Path;
     import org.apache.flink.util.StringUtils;
     
    -public class CsvInputFormat<OUT> extends CommonCsvInputFormat<OUT> {
    +public abstract class CsvInputFormat<OUT> extends GenericCsvInputFormat<OUT> {
     
     	private static final long serialVersionUID = 1L;
    +
    +	public static final String DEFAULT_LINE_DELIMITER = "\n";
    +
    +	public static final String DEFAULT_FIELD_DELIMITER = ",";
    +
    +	protected transient Object[] parsedValues;
     	
    -	public CsvInputFormat(Path filePath, CompositeType<OUT> typeInformation) {
    -		super(filePath, typeInformation);
    +	protected CsvInputFormat(Path filePath) {
    +		super(filePath);
     	}
    -	
    -	public CsvInputFormat(Path filePath, String lineDelimiter, String fieldDelimiter, CompositeType<OUT> typeInformation) {
    -		super(filePath, lineDelimiter, fieldDelimiter, typeInformation);
    +
    +	@Override
    +	public void open(FileInputSplit split) throws IOException {
    +		super.open(split);
    +
    +		@SuppressWarnings("unchecked")
    +		FieldParser<Object>[] fieldParsers = (FieldParser<Object>[]) getFieldParsers();
    +
    +		//throw exception if no field parsers are available
    +		if (fieldParsers.length == 0) {
    +			throw new IOException("CsvInputFormat.open(FileInputSplit split) - no field parsers to parse input");
    +		}
    +
    +		// create the value holders
    +		this.parsedValues = new Object[fieldParsers.length];
    +		for (int i = 0; i < fieldParsers.length; i++) {
    +			this.parsedValues[i] = fieldParsers[i].createValue();
    +		}
    +
    +		// left to right evaluation makes access [0] okay
     +		// this marker is used to speed up readRecord, so that it doesn't have to check on each call whether the line ending is set to the default
    +		if (this.getDelimiter().length == 1 && this.getDelimiter()[0] == '\n' ) {
    +			this.lineDelimiterIsLinebreak = true;
    +		}
    +
    +		this.commentCount = 0;
    +		this.invalidLineCount = 0;
     	}
     
     	@Override
    -	protected OUT createTuple(OUT reuse) {
    -		Tuple result = (Tuple) reuse;
    -		for (int i = 0; i < parsedValues.length; i++) {
    -			result.setField(parsedValues[i], i);
    +	public OUT nextRecord(OUT record) throws IOException {
    +		OUT returnRecord = null;
    +		do {
    +			returnRecord = super.nextRecord(record);
    +		} while (returnRecord == null && !reachedEnd());
    +
    +		return returnRecord;
    +	}
    +
    +	public Class<?>[] getFieldTypes() {
    +		return super.getGenericFieldTypes();
    +	}
    +
    +	protected static boolean[] createDefaultMask(int size) {
    +		boolean[] includedMask = new boolean[size];
    +		for (int x=0; x<includedMask.length; x++) {
    +			includedMask[x] = true;
    +		}
    +		return includedMask;
    +	}
    +
    +	protected static boolean[] toBooleanMask(int[] sourceFieldIndices) {
    --- End diff --
    
    This method might give the impression that fields can be read in any order; however, they are parsed in order of their position. This might lead to unexpected behavior if a user specifies field indices out of order, e.g., `int[] {3,1,7,5}`
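
    To illustrate the point: converting an index array to a boolean include mask discards the ordering, so out-of-order indices select the same fields, in positional order, as their sorted form. The sketch below is a hypothetical standalone re-implementation of the `toBooleanMask` logic for demonstration only, not the actual Flink code.

    ```java
    import java.util.Arrays;

    public class MaskDemo {
        // Hypothetical stand-in for CsvInputFormat.toBooleanMask:
        // every listed index is flagged true; the original ordering is lost.
        static boolean[] toBooleanMask(int[] sourceFieldIndices) {
            int max = 0;
            for (int i : sourceFieldIndices) {
                if (i < 0) {
                    throw new IllegalArgumentException("Negative field index: " + i);
                }
                max = Math.max(max, i);
            }
            boolean[] mask = new boolean[max + 1];
            for (int i : sourceFieldIndices) {
                mask[i] = true;
            }
            return mask;
        }

        public static void main(String[] args) {
            boolean[] shuffled = toBooleanMask(new int[]{3, 1, 7, 5});
            boolean[] sorted   = toBooleanMask(new int[]{1, 3, 5, 7});
            // Both index arrays collapse to the identical mask,
            // so fields are always parsed in positional order.
            System.out.println(Arrays.equals(shuffled, sorted)); // prints "true"
        }
    }
    ```

    This is why specifying `{3,1,7,5}` does not reorder the output fields, which can surprise users who expect the indices to define a projection order.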



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/flink/pull/1266



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by aljoscha <gi...@git.apache.org>.
Github user aljoscha commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-157675819
  
    I think it's good :+1: 



[GitHub] flink pull request: [FLINK-2692] Untangle CsvInputFormat

Posted by aljoscha <gi...@git.apache.org>.
Github user aljoscha commented on the pull request:

    https://github.com/apache/flink/pull/1266#issuecomment-153024659
  
    I think you can go ahead and merge it then.

