Posted to common-issues@hadoop.apache.org by "David S. Wang (JIRA)" <ji...@apache.org> on 2014/09/09 22:56:30 UTC

[jira] [Updated] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

     [ https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David S. Wang updated HADOOP-11074:
-----------------------------------
    Attachment: HADOOP-11074.patch

This patch does the following:

* Move the s3 and s3native FS connector code from hadoop-common to hadoop-aws.
* Add dependencies to the pom files to reflect the move.
* Remove the dependency on auth-keys.xml for the s3 tests, as it is not used.
* Remove references to the moved code from the META-INF services files so that unit tests run from hadoop-common don't try to use the moved code. Similarly, add the same references to hadoop-aws so that its tests use the moved code.
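
For reference, the FileSystem bindings in question are java.util.ServiceLoader provider-configuration files under META-INF/services. A sketch of what the hadoop-aws side might contain after the move (the file path and class names below are assumed from the Hadoop 2.x s3/s3native connectors, not taken from the patch itself):

```
# hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
# Lines starting with '#' are comments per the ServiceLoader file format.
org.apache.hadoop.fs.s3.S3FileSystem
org.apache.hadoop.fs.s3native.NativeS3FileSystem
```

The corresponding lines would be deleted from the equivalent file in hadoop-common, so each module's ServiceLoader lookup only finds classes on its own classpath.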

I ran "mvn test" from hadoop-tools/hadoop-aws and verified that all of the tests (including the contract ones) were run. I tcpdump'ed to make sure there was actual network traffic to/from the s3 server - the tests themselves run too fast for me to capture a snapshot of the temporary test files existing in my s3 bucket. I also ran the contract-related unit tests from the root directory and they passed as well.

Thanks to Juan Yu for a good amount of help navigating the HADOOP-9361 setup, providing some suggestions about dependency changes, and verifying the patch.

For future reference, I added the following files in order to run the tests (not checked in):

hadoop-tools/hadoop-aws/src/test/resources/contract-test-options.xml:
<configuration>
  <property>
    <name>fs.contract.test.fs.s3</name>
    <value>s3://(bucket name)/(test directory)</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>(AWS access key)</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>(AWS secret access key)</value>
  </property>
  <property>
    <name>fs.contract.test.fs.s3n</name>
    <value>s3n://(bucket name)/(test directory)</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>(AWS access key)</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>(AWS secret access key)</value>
  </property>
</configuration>

hadoop-tools/hadoop-aws/src/test/resources/core-site.xml:
<configuration>
  <property>
    <name>test.fs.s3.name</name>
    <value>s3://(bucket name)/(test directory)</value>
  </property>
  <property>
    <name>test.fs.s3n.name</name>
    <value>s3n://(bucket name)/(test directory)</value>
  </property>
</configuration>

> Move s3-related FS connector code to hadoop-aws
> -----------------------------------------------
>
>                 Key: HADOOP-11074
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11074
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: David S. Wang
>            Assignee: David S. Wang
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-11074.patch
>
>
> Now that hadoop-aws has been created, we should actually move the relevant code into that module, similar to what was done with hadoop-openstack, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)