Posted to issues@flink.apache.org by fhueske <gi...@git.apache.org> on 2014/10/07 15:54:59 UTC

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

GitHub user fhueske opened a pull request:

    https://github.com/apache/incubator-flink/pull/143

    Added wrappers for Hadoop functions

    Tests and documentation included. Also tested on cluster.
    
    I hijacked @twalthr's PR #131 to build the documentation of the Hadoop function wrappers on top of it.
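The wrappers follow a plain adapter pattern: a Flink function holds a Hadoop-style function and forwards each call to it. The sketch below illustrates the idea with simplified stand-in interfaces; all names here are hypothetical and deliberately reduced, not the actual Flink or Hadoop APIs (which involve `OutputCollector`, `Reporter`, and type extraction).

```java
import java.util.ArrayList;
import java.util.List;

public class WrapperSketch {

    // Simplified stand-in for Hadoop's OutputCollector (hypothetical).
    interface Collector<K, V> { void collect(K key, V value); }

    // Simplified stand-in for a Hadoop mapred-style Mapper.
    interface HadoopMapper<K1, V1, K2, V2> {
        void map(K1 key, V1 value, Collector<K2, V2> out);
    }

    // Simplified stand-in for a Flink-style flatMap over key/value pairs.
    interface FlinkFlatMap<K1, V1, K2, V2> {
        void flatMap(K1 key, V1 value, Collector<K2, V2> out);
    }

    // The wrapper: exposes the Flink interface, delegates to the Hadoop function.
    static class HadoopMapFunction<K1, V1, K2, V2> implements FlinkFlatMap<K1, V1, K2, V2> {
        private final HadoopMapper<K1, V1, K2, V2> mapper;

        HadoopMapFunction(HadoopMapper<K1, V1, K2, V2> mapper) {
            this.mapper = mapper;
        }

        @Override
        public void flatMap(K1 key, V1 value, Collector<K2, V2> out) {
            // Forward the Flink call to the wrapped Hadoop-style function.
            mapper.map(key, value, out);
        }
    }

    public static void main(String[] args) {
        // A Hadoop-style tokenizer mapper: emits (word, 1) for each token.
        HadoopMapper<Long, String, String, Integer> tokenizer =
            (offset, line, out) -> {
                for (String w : line.split(" ")) {
                    out.collect(w, 1);
                }
            };

        FlinkFlatMap<Long, String, String, Integer> wrapped =
            new HadoopMapFunction<>(tokenizer);

        List<String> words = new ArrayList<>();
        wrapped.flatMap(0L, "hello world", (k, v) -> words.add(k + "=" + v));
        System.out.println(words);  // [hello=1, world=1]
    }
}
```

The real wrappers additionally have to determine the output type (via type extraction, per the commit messages below) and plug into Flink's rich-function lifecycle, but the delegation core is the same.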

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/fhueske/incubator-flink hadoopFunctions

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-flink/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #143
    
----
commit df947d9e768da6012ad069364a4769b7e12a51ef
Author: Artem Tsikiridis <ar...@cern.ch>
Date:   2014-05-13T19:31:13Z

    [FLINK-1076] Extend Hadoop compatibility. Added wrappers for stand-alone Map, Reduce, and CombinableReduce functions.

commit 4a62bdb021ce24b16e1a505ad426d5446fcb5d46
Author: Fabian Hueske <fh...@apache.org>
Date:   2014-10-06T08:18:51Z

    [FLINK-1076] Return type determined via type extraction.
    Added test cases.
    Minor improvements and clean-ups.

commit f4be4a0ee68ed3686a305ffabcdfe2d942919590
Author: twalthr <in...@twalthr.com>
Date:   2014-09-29T12:01:17Z

    [FLINK-1107] Hadoop Compatibility Layer documented
    
    This closes #131

commit f1b3f2122a2f9dd16b071bbe9c33ca90d3fb7a2b
Author: Fabian Hueske <fh...@apache.org>
Date:   2014-10-07T12:46:36Z

    [FLINK-1076] Extended Hadoop Compatibility documentation to cover Hadoop functions

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

Posted by rmetzger <gi...@git.apache.org>.
Github user rmetzger commented on a diff in the pull request:

    https://github.com/apache/incubator-flink/pull/143#discussion_r18533375
  
    --- Diff: flink-addons/flink-hadoop-compatibility/src/test/java/org/apache/flink/test/hadoopcompatibility/mapred/HadoopReduceCombineFunctionITCase.java ---
    @@ -0,0 +1,297 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.flink.test.hadoopcompatibility.mapred;
    +
    +import java.io.FileNotFoundException;
    +import java.io.IOException;
    +import java.util.Collection;
    +import java.util.Iterator;
    +import java.util.LinkedList;
    +
    +import org.apache.flink.api.common.functions.MapFunction;
    +import org.apache.flink.api.java.DataSet;
    +import org.apache.flink.api.java.ExecutionEnvironment;
    +import org.apache.flink.api.java.tuple.Tuple2;
    +import org.apache.flink.configuration.Configuration;
    +import org.apache.flink.hadoopcompatibility.mapred.HadoopReduceCombineFunction;
    +import org.apache.flink.hadoopcompatibility.mapred.HadoopReduceFunction;
    +import org.apache.flink.test.util.JavaProgramTestBase;
    +import org.apache.hadoop.io.IntWritable;
    +import org.apache.hadoop.io.Text;
    +import org.apache.hadoop.mapred.JobConf;
    +import org.apache.hadoop.mapred.OutputCollector;
    +import org.apache.hadoop.mapred.Reducer;
    +import org.apache.hadoop.mapred.Reporter;
    +import org.junit.runner.RunWith;
    +import org.junit.runners.Parameterized;
    +import org.junit.runners.Parameterized.Parameters;
    +
    +@RunWith(Parameterized.class)
    +public class HadoopReduceCombineFunctionITCase extends JavaProgramTestBase {
    +
    +	private static int NUM_PROGRAMS = 4;
    +	
    +	private int curProgId = config.getInteger("ProgramId", -1);
    +	private String resultPath;
    +	private String expectedResult;
    +	
    +	public HadoopReduceCombineFunctionITCase(Configuration config) {
    +		super(config);	
    +	}
    +	
    +	@Override
    +	protected void preSubmit() throws Exception {
    +		resultPath = getTempDirPath("result");
    +	}
    +
    +	@Override
    +	protected void testProgram() throws Exception {
    +		expectedResult = ReducerProgs.runProgram(curProgId, resultPath);
    +	}
    +	
    +	@Override
    +	protected void postSubmit() throws Exception {
    +		compareResultsByLinesInMemory(expectedResult, resultPath);
    +	}
    +	
    +	@Override
    +	protected boolean skipCollectionExecution() {
    --- End diff --
    
    why are you skipping here?


---

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

Posted by rmetzger <gi...@git.apache.org>.
Github user rmetzger commented on a diff in the pull request:

    https://github.com/apache/incubator-flink/pull/143#discussion_r18539170
  
    --- Diff: flink-addons/flink-hadoop-compatibility/src/test/java/org/apache/flink/test/hadoopcompatibility/mapred/HadoopReduceCombineFunctionITCase.java ---
    --- End diff --
    
    Okay, I see.


---

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-flink/pull/143


---

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/incubator-flink/pull/143#discussion_r18535782
  
    --- Diff: flink-addons/flink-hadoop-compatibility/src/test/java/org/apache/flink/test/hadoopcompatibility/mapred/HadoopReduceCombineFunctionITCase.java ---
    --- End diff --
    
    That test checks that the combiner is actually executed and fails if it is not.
    The collection-based execution does not run a combiner, because a combiner only makes sense in a distributed setting, where it reduces the amount of shipped data.
    Hence, the test fails when run on collections.
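The correctness argument behind skipping is that a combiner is a pure optimization: pre-aggregating per partition before the shuffle must yield the same final result as reducing everything directly, only with fewer records shipped. A minimal, self-contained sketch with hypothetical word-count data (plain Java collections standing in for Flink's distributed runtime):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CombinerSketch {

    // Sum counts per key within one list of (word, count) pairs.
    // Used both as the combiner step and as the final reduce step.
    static Map<String, Integer> combine(List<Map.Entry<String, Integer>> records) {
        Map<String, Integer> acc = new HashMap<>();
        for (Map.Entry<String, Integer> e : records) {
            acc.merge(e.getKey(), e.getValue(), Integer::sum);
        }
        return acc;
    }

    public static void main(String[] args) {
        // Two partitions of (word, 1) pairs, as they might sit on two nodes.
        List<Map.Entry<String, Integer>> p1 =
            List.of(Map.entry("a", 1), Map.entry("b", 1), Map.entry("a", 1));
        List<Map.Entry<String, Integer>> p2 =
            List.of(Map.entry("b", 1), Map.entry("a", 1));

        // Path 1: reduce all records directly (what collection execution does).
        Map<String, Integer> direct =
            combine(Stream.concat(p1.stream(), p2.stream()).collect(Collectors.toList()));

        // Path 2: combine per partition first, "ship" only the pre-aggregates,
        // then reduce them. Fewer records cross the simulated network.
        List<Map.Entry<String, Integer>> shipped = new ArrayList<>();
        combine(p1).forEach((k, v) -> shipped.add(Map.entry(k, v)));
        combine(p2).forEach((k, v) -> shipped.add(Map.entry(k, v)));
        Map<String, Integer> withCombiner = combine(shipped);

        System.out.println(direct.equals(withCombiner));  // true: same final result
        System.out.println(shipped.size());               // 4 records shipped instead of 5
    }
}
```

Since both paths agree on the result, a runtime that never invokes the combiner is still correct, which is exactly why a test that asserts the combiner ran cannot be executed on collections.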


---

[GitHub] incubator-flink pull request: Added wrappers for Hadoop functions

Posted by rmetzger <gi...@git.apache.org>.
Github user rmetzger commented on the pull request:

    https://github.com/apache/incubator-flink/pull/143#issuecomment-58219696
  
    Very nice, with tests and documentation.
    I have one open question; otherwise, it's good to merge.


---