Posted to common-issues@hadoop.apache.org by "YangY (JIRA)" <ji...@apache.org> on 2018/12/12 07:58:00 UTC

[jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

    [ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16718572#comment-16718572 ] 

YangY edited comment on HADOOP-15616 at 12/12/18 7:57 AM:
----------------------------------------------------------

Thanks [~xyao] for commenting on this code.

Here are the answers to your comments:

1. Changes under hadoop-tools/hadoop-aliyun unrelated to this patch.
 This was likely a misoperation while formatting my code; the error has been corrected in the new patch.

2. Should we put hadoop-cos under hadoop-tools project like s3a, adsl, etc. instead of hadoop-cloud-storage-project?
 At first, I also thought it should be put under the hadoop-tools project. However, as Steve commented above, using "hadoop-cloud-storage-project" seems more appropriate, doesn't it?

3. More description to SecretKey -> SecretID.
 Thank you for the reminder; I will add a detailed description of how to obtain the SecretKey and SecretID to our documentation.
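For illustration, a minimal core-site.xml fragment might look like the sketch below. The exact property names are an assumption modeled on the other cloud connectors (s3a, oss) and should be confirmed against the final documentation:

```xml
<!-- Hypothetical core-site.xml fragment; the property names are assumed,
     modeled on the s3a/oss connectors, not taken from the patch itself. -->
<configuration>
  <property>
    <name>fs.cosn.userinfo.secretId</name>
    <value>YOUR_SECRET_ID</value>
  </property>
  <property>
    <name>fs.cosn.userinfo.secretKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

With credentials configured this way, a path such as cosn://example-bucket/data could then be used directly from Hadoop tools without code changes.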

4. BufferPool.java: since it sets the disk buffer file to delete on exit, does it support recovery if the client restarts?
 BufferPool is a shared buffer pool. It initially provides two buffer types: memory and disk. The latter uses memory file mapping to construct a byte buffer that other classes can use uniformly.
 Therefore, it cannot support recovery if the client restarts. After all, the disk buffer is mapped to a temporary file, which is cleaned up automatically when the Java Virtual Machine exits.
 
 In the latest patch, I have further optimized it by combining the two buffer types to balance memory usage and buffer performance. For this reason, the buffer type is no longer visible to the user.
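The disk-buffer behavior described above can be sketched as follows. This is a minimal illustration of memory-mapping a temporary file that is deleted on JVM exit; the class and method names are hypothetical and do not come from BufferPool.java itself:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical sketch: a disk-backed buffer built by memory-mapping a
// temporary file. Because the backing file is registered for deletion on
// JVM exit, its contents cannot be recovered after a client restart.
public class DiskBufferSketch {
    static MappedByteBuffer createDiskBuffer(int size) throws IOException {
        File tmp = File.createTempFile("cos-buffer-", ".tmp");
        tmp.deleteOnExit(); // cleaned up automatically when the JVM exits
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
            // The mapping stays valid after the channel is closed.
            return raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }

    public static void main(String[] args) throws IOException {
        MappedByteBuffer buf = createDiskBuffer(16);
        buf.put((byte) 42); // writes go to the mapped temporary file
    }
}
```

A memory-type buffer would allocate a plain ByteBuffer instead; either way callers see the same java.nio.ByteBuffer interface, which is why the buffer type can be hidden from the user.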

Finally, I look forward to more of your comments.



> Incorporate Tencent Cloud COS File System Implementation
> --------------------------------------------------------
>
>                 Key: HADOOP-15616
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15616
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs/cos
>            Reporter: Junping Du
>            Assignee: YangY
>            Priority: Major
>         Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used among China's cloud users. However, it is currently hard for Hadoop users to access data stored on COS, as Hadoop has no native support for it.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications can read/write data from COS without any code change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
