Posted to common-issues@hadoop.apache.org by "David Dobbins (JIRA)" <ji...@apache.org> on 2013/11/28 13:31:36 UTC

[jira] [Updated] (HADOOP-10135) writes to swift fs over partition size leave temp files and empty output file

     [ https://issues.apache.org/jira/browse/HADOOP-10135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Dobbins updated HADOOP-10135:
-----------------------------------

    Description: 
The OpenStack/swift filesystem produces incorrect output when the written objects exceed the configured partition size. After job completion, the expected files in the swift container have length == 0, and a collection of temporary files remains with names that appear to be full swift:// URLs.

This can be replicated with teragen against the minicluster using the following command line:
bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar teragen 100000 swift://mycontainer.myservice/teradata

Where core-site.xml contains:
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.partsize</name>
    <value>1024</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.auth.url</name>
    <value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.username</name>
    <value>[[your-cloud-username]]</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.region</name>
    <value>DFW</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.apikey</name>
    <value>[[your-api-key]]</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.public</name>
    <value>true</value>
  </property>
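
With fs.swift.partsize set to the small value above, even a modest amount of data is uploaded as multiple partitions, so the same write path can be exercised without running MapReduce. A minimal sketch (not part of the original report; the container, service name and output path are the placeholders used above) that writes a few MB directly through the FileSystem API:

  // Hedged sketch: exercises the partitioned-write path of
  // SwiftNativeFileSystem directly, assuming the core-site.xml above is on
  // the classpath. The output path below is a placeholder, not from the report.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SwiftPartitionedWrite {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();   // picks up core-site.xml
      Path path = new Path("swift://mycontainer.myservice/teradata/manual-write");
      FileSystem fs = path.getFileSystem(conf);

      // Write ~5 MB so the object is split across several partitions at the
      // partsize configured above.
      byte[] chunk = new byte[1 << 20];
      try (FSDataOutputStream out = fs.create(path, true)) {
        for (int i = 0; i < 5; i++) {
          out.write(chunk);
        }
      }
      System.out.println(path + " length = " + fs.getFileStatus(path).getLen());
    }
  }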

Container "mycontainer" should have a collection of objects with names starting with "teradata/part-m-00000".  Instead, that file is empty and there is a collection of objects with names like "swift://mycontainer.myservice/teradata/_temporary/0/_temporary/attempt_local415043862_0001_m_000000_0/part-m-00000/000010"

  was:
The OpenStack/swift filesystem produces incorrect output when the written objects exceed the configured partition size. After job completion, the expected files in the swift container have length == 0, and a collection of temporary files remains with names that appear to be full swift:// URLs.

This can be replicated with teragen against the minicluster using the following command line:
bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar teragen 100000 swift://mycontainer.myservice/teradata

Where core-site.xml contains:
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.auth.url</name>
    <value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.username</name>
    <value>[[your-cloud-username]]</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.region</name>
    <value>DFW</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.apikey</name>
    <value>[[your-api-key]]</value>
  </property>
  <property>
    <name>fs.swift.service.myservice.public</name>
    <value>true</value>
  </property>

Container "mycontainer" should have a collection of objects with names starting with "teradata/part-m-00000".  Instead, that file is empty and there is a collection of objects with names like "swift://mycontainer.myservice/teradata/_temporary/0/_temporary/attempt_local415043862_0001_m_000000_0/part-m-00000/000010"


> writes to swift fs over partition size leave temp files and empty output file
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-10135
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10135
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 3.0.0
>            Reporter: David Dobbins
>
> The OpenStack/swift filesystem produces incorrect output when the written objects exceed the configured partition size. After job completion, the expected files in the swift container have length == 0, and a collection of temporary files remains with names that appear to be full swift:// URLs.
> This can be replicated with teragen against the minicluster using the following command line:
> bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar teragen 100000 swift://mycontainer.myservice/teradata
> Where core-site.xml contains:
>   <property>
>     <name>fs.swift.impl</name>
>     <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
>   </property>
>   <property>
>     <name>fs.swift.partsize</name>
>     <value>1024</value>
>   </property>
>   <property>
>     <name>fs.swift.service.myservice.auth.url</name>
>     <value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
>   </property>
>   <property>
>     <name>fs.swift.service.myservice.username</name>
>     <value>[[your-cloud-username]]</value>
>   </property>
>   <property>
>     <name>fs.swift.service.myservice.region</name>
>     <value>DFW</value>
>   </property>
>   <property>
>     <name>fs.swift.service.myservice.apikey</name>
>     <value>[[your-api-key]]</value>
>   </property>
>   <property>
>     <name>fs.swift.service.myservice.public</name>
>     <value>true</value>
>   </property>
> Container "mycontainer" should have a collection of objects with names starting with "teradata/part-m-00000".  Instead, that file is empty and there is a collection of objects with names like "swift://mycontainer.myservice/teradata/_temporary/0/_temporary/attempt_local415043862_0001_m_000000_0/part-m-00000/000010"



--
This message was sent by Atlassian JIRA
(v6.1#6144)