Posted to issues@flink.apache.org by ajaybhat <gi...@git.apache.org> on 2016/01/06 10:34:11 UTC

[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

GitHub user ajaybhat opened a pull request:

    https://github.com/apache/flink/pull/1486

    [FLINK-2445] Add tests for HadoopOutputFormats

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ajaybhat/flink test-fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/1486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1486
    
----
commit be59077d5f5df099c1792329631c08441cc25345
Author: Ajay Bhat <aj...@vmware.com>
Date:   2016-01-06T08:56:46Z

    [FLINK-2445] Add tests for HadoopOutputFormats

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by mliesenberg <gi...@git.apache.org>.
Github user mliesenberg commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-182923108
  
    As there has not been any activity since the beginning of the year, I thought I'd address the comment in this PR. I wasn't able to push to the current branch, so I made a branch in my repo.
    You can find it [here](https://github.com/mliesenberg/flink/tree/FLINK-2445-tests).
    
    Let me know how you would like to proceed.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1486#discussion_r49085273
  
    --- Diff: flink-java/src/test/java/org/apache/flink/api/java/hadoop/mapreduce/HadoopOutputFormatTest.java ---
    @@ -0,0 +1,130 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.flink.api.java.hadoop.mapreduce;
    +
    +import org.apache.flink.api.java.tuple.Tuple2;
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.mapreduce.*;
    +import org.junit.Test;
    +import org.mockito.Mockito;
    +
    +import java.io.IOException;
    +import java.util.HashMap;
    +import java.util.Map;
    +
    +import static org.junit.Assert.assertEquals;
    +import static org.junit.Assert.fail;
    +
    +public class HadoopOutputFormatTest {
    +
    +    private static final String PATH = "an/ignored/file/";
    +    private Map<String, Long> map;
    +
    +    @Test
    +    public void testWriteRecord() {
    +        OutputFormat<String, Long> dummyOutputFormat = new DummyOutputFormat();
    +        String key = "Test";
    +        Long value = 1L;
    +        map = new HashMap<>();
    +        map.put(key, 0L);
    +        try {
    +            Job job = Job.getInstance();
    +            Tuple2<String, Long> tuple = new Tuple2<>();
    +            tuple.setFields(key, value);
    +            HadoopOutputFormat<String, Long> hadoopOutputFormat = new HadoopOutputFormat<>(dummyOutputFormat, job);
    +
    +            hadoopOutputFormat.recordWriter = new DummyRecordWriter();
    +            hadoopOutputFormat.writeRecord(tuple);
    +
    +            Long expected = map.get(key);
    +            assertEquals(expected, value);
    +        } catch (IOException e) {
    +            fail();
    +        }
    +    }
    +
    +    @Test
    +    public void testOpen() {
    +        OutputFormat<String, Long> dummyOutputFormat = new DummyOutputFormat();
    +        try {
    +            Job job = Job.getInstance();
    +            HadoopOutputFormat<String, Long> hadoopOutputFormat = new HadoopOutputFormat<>(dummyOutputFormat, job);
    +
    +            hadoopOutputFormat.recordWriter = new DummyRecordWriter();
    +            hadoopOutputFormat.open(1, 4);
    +        } catch (IOException e) {
    +            fail();
    +        }
    +    }
    +
    +    @Test
    +    public void testClose() {
    +        OutputFormat<String, Long> dummyOutputFormat = new DummyOutputFormat();
    +        try {
    +            Job job = Job.getInstance();
    +            HadoopOutputFormat<String, Long> hadoopOutputFormat = new HadoopOutputFormat<>(dummyOutputFormat, job);
    +
    +            hadoopOutputFormat.recordWriter = new DummyRecordWriter();
    +
    +            final OutputCommitter outputCommitter = Mockito.mock(OutputCommitter.class);
    +            Mockito.when(outputCommitter.needsTaskCommit(Mockito.any(TaskAttemptContext.class))).thenReturn(true);
    +            Mockito.doNothing().when(outputCommitter).commitTask(Mockito.any(TaskAttemptContext.class));
    --- End diff --
    
    Does this check that the `commitTask` method is actually called? 
    It would be good to check as well that it is *not* called if `needsTaskCommit` returns `false`.
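    The requested check can be sketched without Mockito using a hand-rolled recording stub. The class and method names below are illustrative stand-ins for Hadoop's `OutputCommitter` and the `close()` logic under test, not Flink's actual test code:

```java
// Recording stub standing in for Hadoop's OutputCommitter: it lets a test
// assert that commitTask() runs exactly when needsTaskCommit() reports true,
// and is skipped when it reports false -- the two cases the review asks for.
public class CommitCheckSketch {

    static class RecordingCommitter {
        final boolean needsCommit;        // what needsTaskCommit() will answer
        boolean commitTaskCalled = false; // recorded for the assertion

        RecordingCommitter(boolean needsCommit) {
            this.needsCommit = needsCommit;
        }

        boolean needsTaskCommit() { return needsCommit; }

        void commitTask() { commitTaskCalled = true; }
    }

    // Mirrors the close() logic under test: commit only when asked to.
    static void close(RecordingCommitter committer) {
        if (committer.needsTaskCommit()) {
            committer.commitTask();
        }
    }

    // Returns whether commitTask() ended up being called.
    static boolean commitHappened(boolean needsCommit) {
        RecordingCommitter committer = new RecordingCommitter(needsCommit);
        close(committer);
        return committer.commitTaskCalled;
    }

    public static void main(String[] args) {
        System.out.println("needsTaskCommit=true  -> committed: " + commitHappened(true));
        System.out.println("needsTaskCommit=false -> committed: " + commitHappened(false));
    }
}
```

    In the PR itself the same two assertions would be expressed with `Mockito.verify` on the mocked committer.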



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1486#discussion_r49084974
  
    --- Diff: flink-java/src/test/java/org/apache/flink/api/java/hadoop/mapreduce/HadoopOutputFormatTest.java ---
    @@ -0,0 +1,130 @@
    [...]
    +    @Test
    +    public void testOpen() {
    --- End diff --
    
    This method does not seem to test anything (except that no exception is thrown). 
    Please add checks that the following methods are called:
    - `setupJob()` method of the OutputCommitter
    - `getRecordWriter()` method of the MapReduce OutputFormat
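    The two checks above can be sketched with recording stubs; the interfaces below only mimic the shape of the Hadoop contracts (`OutputCommitter.setupJob`, `OutputFormat.getRecordWriter`) and are illustrative, not Flink's actual test code:

```java
// Recording stubs: each flips a flag when its method is invoked, so a test
// of open() can assert that both collaborators were actually called.
public class OpenCheckSketch {

    interface Committer { void setupJob(); }
    interface Format { Object getRecordWriter(); }

    static class RecordingCommitter implements Committer {
        boolean setupJobCalled = false;
        public void setupJob() { setupJobCalled = true; }
    }

    static class RecordingFormat implements Format {
        boolean getRecordWriterCalled = false;
        public Object getRecordWriter() {
            getRecordWriterCalled = true;
            return new Object(); // stand-in for the real RecordWriter
        }
    }

    // Mirrors what open(taskNumber, numTasks) is expected to do.
    static void open(Committer committer, Format format) {
        committer.setupJob();
        format.getRecordWriter();
    }

    // Runs open() against fresh stubs and reports which calls happened.
    static boolean[] runOpen() {
        RecordingCommitter committer = new RecordingCommitter();
        RecordingFormat format = new RecordingFormat();
        open(committer, format);
        return new boolean[] { committer.setupJobCalled, format.getRecordWriterCalled };
    }

    public static void main(String[] args) {
        boolean[] calls = runOpen();
        System.out.println("setupJob called:        " + calls[0]);
        System.out.println("getRecordWriter called: " + calls[1]);
    }
}
```

    With Mockito the same checks are `verify(committer).setupJob(...)` and `verify(outputFormat).getRecordWriter(...)` after invoking `open()`.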



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-200426694
  
    Closed this PR as part of PR #1628. 
    @ajaybhat, your commit was added as 8e4a001b84c36827eb6168adac2724f76e30cc2c
    Thanks for the contribution.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by ajaybhat <gi...@git.apache.org>.
Github user ajaybhat commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-169543630
  
    The build has failed, but the failure seems unrelated to this PR.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by aljoscha <gi...@git.apache.org>.
Github user aljoscha commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-176839542
  
    @ajaybhat Do you have any updates on this PR?



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by StephanEwen <gi...@git.apache.org>.
Github user StephanEwen commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-182957754
  
    @mliesenberg Thanks for picking this up. Let's close this PR. Would be great if you could open a new PR based on the branch you linked.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1486#discussion_r49085951
  
    --- Diff: flink-java/src/test/java/org/apache/flink/api/java/hadoop/mapreduce/HadoopOutputFormatTest.java ---
    @@ -0,0 +1,130 @@
    [...]
    +public class HadoopOutputFormatTest {
    --- End diff --
    
    Can you add two more test methods?
    
    Test for `configure()`:
    - check that `setConf()` is called if the MapReduce OutputFormat implements `Configurable`
    
    Test for `finalizeGlobal`:
    - Check that `commitJob()` is called on the `OutputCommitter`
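    Both requested tests follow the same recording-stub pattern. The sketch below mimics the shape of the Hadoop contracts (`Configurable.setConf`, `OutputCommitter.commitJob`) with illustrative stand-in classes; the real tests would verify the actual Hadoop mocks:

```java
// Two recording stubs: a format that records the conf it receives (only
// reachable if the wrapper checks "instanceof Configurable"), and a committer
// that records whether commitJob() ran during finalizeGlobal().
public class ConfigureFinalizeSketch {

    interface Configurable { void setConf(String conf); }

    static class PlainFormat { }

    static class ConfigurableFormat extends PlainFormat implements Configurable {
        String receivedConf = null;
        public void setConf(String conf) { receivedConf = conf; }
    }

    static class RecordingCommitter {
        boolean commitJobCalled = false;
        void commitJob() { commitJobCalled = true; }
    }

    // Mirrors configure(): forward the conf only if the format is Configurable.
    static void configure(PlainFormat format, String conf) {
        if (format instanceof Configurable) {
            ((Configurable) format).setConf(conf);
        }
    }

    // Mirrors finalizeGlobal(): the job-level commit must happen here.
    static void finalizeGlobal(RecordingCommitter committer) {
        committer.commitJob();
    }

    static boolean confForwarded() {
        ConfigurableFormat format = new ConfigurableFormat();
        configure(format, "job-conf");
        return "job-conf".equals(format.receivedConf);
    }

    static boolean jobCommitted() {
        RecordingCommitter committer = new RecordingCommitter();
        finalizeGlobal(committer);
        return committer.commitJobCalled;
    }

    public static void main(String[] args) {
        System.out.println("setConf forwarded: " + confForwarded());
        System.out.println("commitJob called:  " + jobCommitted());
    }
}
```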



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1486#discussion_r49085358
  
    --- Diff: flink-java/src/test/java/org/apache/flink/api/java/hadoop/mapreduce/HadoopOutputFormatTest.java ---
    @@ -0,0 +1,130 @@
    [...]
    +    @Test
    +    public void testClose() {
    +        OutputFormat<String, Long> dummyOutputFormat = new DummyOutputFormat();
    +        try {
    +            Job job = Job.getInstance();
    +            HadoopOutputFormat<String, Long> hadoopOutputFormat = new HadoopOutputFormat<>(dummyOutputFormat, job);
    +
    +            hadoopOutputFormat.recordWriter = new DummyRecordWriter();
    +
    +            final OutputCommitter outputCommitter = Mockito.mock(OutputCommitter.class);
    +            Mockito.when(outputCommitter.needsTaskCommit(Mockito.any(TaskAttemptContext.class))).thenReturn(true);
    +            Mockito.doNothing().when(outputCommitter).commitTask(Mockito.any(TaskAttemptContext.class));
    +            hadoopOutputFormat.outputCommitter = outputCommitter;
    +            hadoopOutputFormat.configuration = new Configuration();
    +            hadoopOutputFormat.configuration.set("mapred.output.dir", PATH);
    +
    +            hadoopOutputFormat.close();
    --- End diff --
    
    Please add a check that the `close` method is called on the `RecordWriter`.
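    The check can be sketched with a recording writer stub; the names below are illustrative stand-ins for the Hadoop `RecordWriter`, not Flink's actual test code:

```java
// A recording RecordWriter stub: close() flips a flag the test asserts on,
// proving the format's close() really closed its writer.
public class CloseCheckSketch {

    static class RecordingWriter {
        boolean closed = false;
        void close() { closed = true; }
    }

    // Mirrors the format's close(): the writer must always be closed.
    static void closeFormat(RecordingWriter writer) {
        writer.close();
    }

    static boolean writerClosed() {
        RecordingWriter writer = new RecordingWriter();
        closeFormat(writer);
        return writer.closed;
    }

    public static void main(String[] args) {
        System.out.println("writer closed: " + writerClosed());
    }
}
```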



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-169700059
  
    Thanks for the PR. This is a good start. The tests need a few more checks, though.
    The main purpose of the test is to ensure that the correct methods are called on the Hadoop classes (OutputFormat, OutputCommitter, RecordWriter, ...). I added a few comments for checks to add inline.
    
    Thanks, Fabian



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by rmetzger <gi...@git.apache.org>.
Github user rmetzger commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1486#discussion_r49071392
  
    --- Diff: flink-java/src/test/java/org/apache/flink/api/java/hadoop/mapreduce/HadoopOutputFormatTest.java ---
    @@ -0,0 +1,130 @@
    [...]
    +
    +    class DummyRecordWriter extends RecordWriter<String, Long> {
    +        @Override
    +        public void write(String key, Long value) throws IOException, InterruptedException {
    +            map.put(key, value);
    +        }
    +
    +        @Override
    +        public void close(TaskAttemptContext context) throws IOException, InterruptedException {
    --- End diff --
    
    Can you test also that close gets called properly?



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by mliesenberg <gi...@git.apache.org>.
Github user mliesenberg commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-182980098
  
    Sure, though I can't close it.
    I found some things that can still be improved; I'll add that and open a PR tonight.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/flink/pull/1486



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by ajaybhat <gi...@git.apache.org>.
Github user ajaybhat commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-169292981
  
    Hi @chiwanpark 
    > Could you re-implement DummyRecordWriter to make tests fast?
    
    I used the same approach because I saw many other tests creating temporary files.
    > By the way, the author email of your commit (be59077) seems different email address of your github account. Is this intentional?
    
    It's a mistake; I'll fix it.



[GitHub] flink pull request: [FLINK-2445] Add tests for HadoopOutputFormats

Posted by chiwanpark <gi...@git.apache.org>.
Github user chiwanpark commented on the pull request:

    https://github.com/apache/flink/pull/1486#issuecomment-169283768
  
    Hi @ajaybhat, thanks for opening pull request.
    
    I read your pull request quickly and have a comment.
    
    Currently, Flink has a lot of tests that create a temporary file and write temporary results to it. This makes the tests slow, so the Flink community decided to change such tests to avoid temporary files.
    
    Could you re-implement `DummyRecordWriter` to make the tests fast?
    
    By the way, the author email of your commit (be59077) seems to be a different address from the one on your GitHub account. Is this intentional?

