Posted to commits@doris.apache.org by GitBox <gi...@apache.org> on 2022/06/20 12:36:09 UTC

[GitHub] [doris] platoneko opened a new pull request, #10280: [Feature] Merge cold_on_s3 into master

platoneko opened a new pull request, #10280:
URL: https://github.com/apache/doris/pull/10280

   # Proposed changes
   
   Issue Number: close #xxx
   
   ## Problem Summary:
   
   Describe the overview of changes.
   
   ## Checklist (Required)
   
   1. Does it affect the original behavior: (Yes/No/I don't know)
   2. Have unit tests been added: (Yes/No/No need)
   3. Has documentation been added or modified: (Yes/No/No need)
   4. Does it need to update dependencies: (Yes/No)
   5. Are there any changes that cannot be rolled back: (Yes/No)
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off the discussion at [dev@doris.apache.org](mailto:dev@doris.apache.org) by explaining why you chose the solution you did and what alternatives you considered, etc...
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [doris] pengxiangyu commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
pengxiangyu commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r909168589


##########
be/src/agent/task_worker_pool.cpp:
##########
@@ -1683,9 +1701,64 @@ void TaskWorkerPool::_submit_table_compaction_worker_thread_callback() {
     }
 }
 
-void TaskWorkerPool::_storage_medium_migrate_v2_worker_thread_callback() {
+void TaskWorkerPool::_storage_refresh_storage_policy_worker_thread_callback() {

Review Comment:
   Why did you delete _storage_medium_migrate_v2_worker_thread_callback? Do you want to implement it in BE?





[GitHub] [doris] pengxiangyu commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
pengxiangyu commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r911892938


##########
be/src/agent/agent_server.cpp:
##########
@@ -153,8 +154,8 @@ void AgentServer::submit_tasks(TAgentResult& agent_result,
             HANDLE_TYPE(TTaskType::UPDATE_TABLET_META_INFO, _update_tablet_meta_info_workers,
                         update_tablet_meta_info_req);
             HANDLE_TYPE(TTaskType::COMPACTION, _submit_table_compaction_workers, compaction_req);
-            HANDLE_TYPE(TTaskType::STORAGE_MEDIUM_MIGRATE_V2, _storage_medium_migrate_v2_workers,
-                        storage_migration_req_v2);

Review Comment:
   Where is _storage_refresh_policy_workers? Is it not needed here?



##########
be/src/agent/task_worker_pool.cpp:
##########
@@ -1683,9 +1701,64 @@ void TaskWorkerPool::_submit_table_compaction_worker_thread_callback() {
     }
 }
 
-void TaskWorkerPool::_storage_medium_migrate_v2_worker_thread_callback() {
+void TaskWorkerPool::_storage_refresh_storage_policy_worker_thread_callback() {
+    while (_is_work) {
+        _is_doing_work = false;
+        // wait at most report_task_interval_seconds, or being notified
+        std::unique_lock<std::mutex> worker_thread_lock(_worker_thread_lock);
+        _worker_thread_condition_variable.wait_for(
+                worker_thread_lock,
+                std::chrono::seconds(config::storage_refresh_storage_policy_task_interval_seconds));
+        if (!_is_work) {
+            break;
+        }
+
+        if (_master_info.network_address.port == 0) {
+            // port == 0 means not received heartbeat yet
+            // sleep a short time and try again
+            LOG(INFO)
+                    << "waiting to receive first heartbeat from frontend before doing task report";
+            continue;
+        }
+
+        _is_doing_work = true;
+
+        TGetStoragePolicyResult result;
+        Status status = _master_client->refresh_storage_policy(&result);
+        if (!status.ok()) {
+            LOG(WARNING) << "refresh storage policy status not ok. ";
+        } else if (result.status.status_code != TStatusCode::OK) {
+            LOG(WARNING) << "refresh storage policy result status status_code not ok. ";

Review Comment:
   Unexpected trailing space after 'status_code not ok.'



##########
be/src/olap/rowset/beta_rowset_writer.cpp:
##########
@@ -304,29 +270,28 @@ RowsetSharedPtr BetaRowsetWriter::build() {
 
 Status BetaRowsetWriter::_create_segment_writer(
         std::unique_ptr<segment_v2::SegmentWriter>* writer) {
-    auto path_desc =
-            BetaRowset::segment_file_path(_context.path_desc, _context.rowset_id, _num_segment++);
-    // TODO(lingbin): should use a more general way to get BlockManager object
-    // and tablets with the same type should share one BlockManager object;
-    fs::BlockManager* block_mgr = fs::fs_util::block_manager(_context.path_desc);
-    std::unique_ptr<fs::WritableBlock> wblock;
-    fs::CreateBlockOptions opts(path_desc);
-    DCHECK(block_mgr != nullptr);
-    Status st = block_mgr->create_block(opts, &wblock);
+    auto path = BetaRowset::local_segment_path(_context.tablet_path, _context.rowset_id,
+                                               _num_segment++);
+    auto fs = _rowset_meta->fs();
+    if (!fs) {

Review Comment:
   An ERROR log is needed here.
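   For reference, a minimal sketch of the kind of log being asked for, reusing only names visible in this hunk; the error code is borrowed from elsewhere in this PR and is illustrative, not necessarily what the final code returns:

   ```cpp
   auto fs = _rowset_meta->fs();
   if (!fs) {
       // Surface the failure so operators can tell which rowset could not get a file system.
       LOG(ERROR) << "failed to get file system, rowset_id=" << _context.rowset_id.to_string();
       return Status::OLAPInternalError(OLAP_ERR_NOT_INITED);
   }
   ```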



##########
be/src/olap/base_compaction.cpp:
##########
@@ -106,14 +106,11 @@ Status BaseCompaction::pick_rowsets_to_compact() {
     }
 
     // 2. the ratio between base rowset and all input cumulative rowsets reaches the threshold
-    int64_t base_size = 0;
+    // `_input_rowsets` has been sorted by end version, so we consider `_input_rowsets[0]` is the base rowset.
+    int64_t base_size = _input_rowsets.front()->data_disk_size();
     int64_t cumulative_total_size = 0;
-    for (auto& rowset : _input_rowsets) {
-        if (rowset->start_version() != 0) {
-            cumulative_total_size += rowset->data_disk_size();
-        } else {
-            base_size = rowset->data_disk_size();
-        }
+    for (auto it = _input_rowsets.begin() + 1; it != _input_rowsets.end(); ++it) {

Review Comment:
   _input_rowsets may not be in order; it would be better to check that.
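   One way to make that assumption explicit is a debug-only ordering check; a sketch (the end_version() accessor is assumed from the "sorted by end version" comment in this diff, and <algorithm> is needed for std::is_sorted):

   ```cpp
   // Sanity check that _input_rowsets is sorted by version before treating
   // _input_rowsets[0] as the base rowset.
   DCHECK(std::is_sorted(_input_rowsets.begin(), _input_rowsets.end(),
                         [](const RowsetSharedPtr& a, const RowsetSharedPtr& b) {
                             return a->end_version() < b->end_version();
                         }));
   ```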



##########
be/src/olap/tablet.cpp:
##########
@@ -1636,4 +1652,160 @@ std::shared_ptr<MemTracker>& Tablet::get_compaction_mem_tracker(CompactionType c
     }
 }
 
+Status Tablet::cooldown() {
+    std::unique_lock schema_change_lock(_schema_change_lock, std::try_to_lock);
+    if (!schema_change_lock.owns_lock()) {
+        LOG(WARNING) << "schema change is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    // Check executing serially with compaction task.
+    std::unique_lock base_compaction_lock(_base_compaction_lock, std::try_to_lock);
+    if (!base_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "base compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    std::unique_lock cumu_compaction_lock(_cumulative_compaction_lock, std::try_to_lock);
+    if (!cumu_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "cumulative compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    auto dest_fs = io::FileSystemMap::instance()->get(cooldown_resource());
+    if (!dest_fs) {
+        return Status::OLAPInternalError(OLAP_ERR_NOT_INITED);
+    }
+    DCHECK(dest_fs->type() == io::FileSystemType::S3);
+    auto old_rowset = pick_cooldown_rowset();
+    if (!old_rowset) {
+        LOG(WARNING) << "Cannot pick cooldown rowset in tablet " << tablet_id();
+        return Status::OK();
+    }
+    RowsetId new_rowset_id = StorageEngine::instance()->next_rowset_id();
+
+    auto start = std::chrono::steady_clock::now();
+
+    RETURN_IF_ERROR(old_rowset->upload_to(reinterpret_cast<io::RemoteFileSystem*>(dest_fs.get()),
+                                          new_rowset_id));
+
+    auto duration = std::chrono::duration<float>(std::chrono::steady_clock::now() - start);
+    LOG(INFO) << "Upload rowset " << old_rowset->version() << " " << new_rowset_id.to_string()
+              << " to " << dest_fs->root_path().native() << ", tablet_id=" << tablet_id()
+              << ", duration=" << duration.count() << ", capacity=" << old_rowset->data_disk_size()
+              << ", tp=" << old_rowset->data_disk_size() / duration.count();
+
+    // gen a new rowset
+    auto new_rowset_meta = std::make_shared<RowsetMeta>(*old_rowset->rowset_meta());
+    new_rowset_meta->set_rowset_id(new_rowset_id);
+    new_rowset_meta->set_resource_id(dest_fs->resource_id());
+    new_rowset_meta->set_fs(dest_fs);
+    new_rowset_meta->set_creation_time(time(nullptr));
+    RowsetSharedPtr new_rowset;
+    RowsetFactory::create_rowset(&_schema, _tablet_path, std::move(new_rowset_meta), &new_rowset);
+
+    std::vector to_add {std::move(new_rowset)};
+    std::vector to_delete {std::move(old_rowset)};
+
+    std::unique_lock meta_wlock(_meta_lock);
+    modify_rowsets(to_add, to_delete);
+    save_meta();
+    return Status::OK();
+}
+
+RowsetSharedPtr Tablet::pick_cooldown_rowset() {
+    RowsetSharedPtr rowset;
+    {
+        std::shared_lock meta_rlock(_meta_lock);
+
+        // We pick the rowset with smallest start version in local.
+        int64_t smallest_version = std::numeric_limits<int64_t>::max();
+        for (const auto& it : _rs_version_map) {
+            auto& rs = it.second;
+            if (rs->is_local() && rs->start_version() < smallest_version) {
+                smallest_version = rs->start_version();
+                rowset = rs;
+            }
+        }
+    }
+    return rowset;
+}
+
+bool Tablet::need_cooldown(int64_t* cooldown_timestamp, size_t* file_size) {
+    // std::shared_lock meta_rlock(_meta_lock);
+    if (cooldown_resource().empty()) {

Review Comment:
   cooldown_resource()? Shouldn't it be policy_name?



##########
be/src/olap/tablet.cpp:
##########
@@ -1636,4 +1652,160 @@ std::shared_ptr<MemTracker>& Tablet::get_compaction_mem_tracker(CompactionType c
     }
 }
 
+Status Tablet::cooldown() {
+    std::unique_lock schema_change_lock(_schema_change_lock, std::try_to_lock);
+    if (!schema_change_lock.owns_lock()) {
+        LOG(WARNING) << "schema change is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    // Check executing serially with compaction task.
+    std::unique_lock base_compaction_lock(_base_compaction_lock, std::try_to_lock);
+    if (!base_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "base compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    std::unique_lock cumu_compaction_lock(_cumulative_compaction_lock, std::try_to_lock);
+    if (!cumu_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "cumulative compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    auto dest_fs = io::FileSystemMap::instance()->get(cooldown_resource());
+    if (!dest_fs) {
+        return Status::OLAPInternalError(OLAP_ERR_NOT_INITED);
+    }
+    DCHECK(dest_fs->type() == io::FileSystemType::S3);
+    auto old_rowset = pick_cooldown_rowset();

Review Comment:
   Why only one rowset? What about the others?



##########
be/src/olap/tablet_manager.cpp:
##########
@@ -1345,18 +1271,55 @@ void TabletManager::get_tablets_distribution_on_different_disks(
 
 Status TabletManager::_get_storage_param(DataDir* data_dir, const std::string& storage_name,
                                          StorageParamPB* storage_param) {
-    if (data_dir->is_remote()) {
-        RETURN_WITH_WARN_IF_ERROR(
-                StorageBackendMgr::instance()->get_storage_param(storage_name, storage_param),
-                Status::OLAPInternalError(OLAP_ERR_OTHER_ERROR),
-                "get_storage_param failed for storage_name: " + storage_name);
-    } else {
-        storage_param->set_storage_medium(
-                fs::fs_util::get_storage_medium_pb(data_dir->storage_medium()));
-    }
+    storage_param->set_storage_medium(
+            fs::fs_util::get_storage_medium_pb(data_dir->storage_medium()));
     return Status::OK();
 }
 
+struct SortCtx {
+    SortCtx(TabletSharedPtr tablet, int64_t cooldown_timestamp, int64_t file_size)
+            : tablet(tablet), cooldown_timestamp(cooldown_timestamp), file_size(file_size) {}
+    TabletSharedPtr tablet;
+    int64_t cooldown_timestamp;
+    int64_t file_size;
+};
+
+void TabletManager::get_cooldwon_tablets(std::vector<TabletSharedPtr>* tablets) {

Review Comment:
   cooldown is misspelled (get_cooldwon_tablets).



##########
be/src/olap/storage_policy_mgr.cpp:
##########
@@ -0,0 +1,91 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "olap/storage_policy_mgr.h"
+
+#include "io/fs/file_system.h"
+#include "io/fs/file_system_map.h"
+#include "io/fs/s3_file_system.h"
+#include "util/s3_util.h"
+
+namespace doris {
+
+void StoragePolicyMgr::update(const std::string& name, StoragePolicyPtr policy) {
+    std::lock_guard<std::mutex> l(_mutex);
+    auto it = _policy_map.find(name);
+    if (it != _policy_map.end()) {
+        // just support change ak, sk, cooldown_ttl, cooldown_datetime
+        LOG(INFO) << "update storage policy name: " << name;
+        auto s3_fs = std::dynamic_pointer_cast<io::S3FileSystem>(
+                io::FileSystemMap::instance()->get(name));
+        DCHECK(s3_fs);
+        s3_fs->set_ak(policy->s3_ak);
+        s3_fs->set_sk(policy->s3_sk);
+        s3_fs->connect();
+        it->second = std::move(policy);
+    } else {
+        // can't find name's policy, so do nothing.
+    }
+}
+
+void StoragePolicyMgr::periodic_put(const std::string& name, StoragePolicyPtr policy) {
+    std::lock_guard<std::mutex> l(_mutex);
+    auto it = _policy_map.find(name);
+    if (it == _policy_map.end()) {

Review Comment:
   Maybe the remote storage is not S3 but HDFS; it would be better to check the resource type.
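   A sketch of what such a guard could look like in StoragePolicyMgr::update(), built only from calls already visible in this PR (what to do on a mismatch is left to the author):

   ```cpp
   auto fs = io::FileSystemMap::instance()->get(name);
   if (!fs || fs->type() != io::FileSystemType::S3) {
       // The policy's remote storage is not S3-backed (it could be HDFS later); skip the S3-specific update.
       LOG(WARNING) << "storage policy is not backed by S3, name=" << name;
       return;
   }
   auto s3_fs = std::static_pointer_cast<io::S3FileSystem>(fs);
   ```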



##########
be/src/agent/task_worker_pool.cpp:
##########
@@ -437,6 +443,11 @@ void TaskWorkerPool::_drop_tablet_worker_thread_callback() {
             StorageEngine::instance()->txn_manager()->force_rollback_tablet_related_txns(
                     dropped_tablet->data_dir()->get_meta(), drop_tablet_req.tablet_id,
                     drop_tablet_req.schema_hash, dropped_tablet->tablet_uid());
+            // We remove remote rowset directly.
+            // TODO(cyx): do remove in background
+            if (drop_tablet_req.is_drop_table_or_partition) {
+                dropped_tablet->remove_all_remote_rowsets();

Review Comment:
   No return check? What happens when remove_all_remote_rowsets() fails?
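   For illustration only, the kind of check being asked for might look like this, assuming remove_all_remote_rowsets() returns a Status (its signature is not shown in this hunk):

   ```cpp
   if (drop_tablet_req.is_drop_table_or_partition) {
       // Hypothetical handling: log the failure so leaked remote rowsets can be cleaned up later.
       Status st = dropped_tablet->remove_all_remote_rowsets();
       if (!st.ok()) {
           LOG(WARNING) << "failed to remove remote rowsets, tablet_id=" << drop_tablet_req.tablet_id;
       }
   }
   ```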



##########
be/src/agent/task_worker_pool.cpp:
##########
@@ -1683,9 +1701,64 @@ void TaskWorkerPool::_submit_table_compaction_worker_thread_callback() {
     }
 }
 
-void TaskWorkerPool::_storage_medium_migrate_v2_worker_thread_callback() {
+void TaskWorkerPool::_storage_refresh_storage_policy_worker_thread_callback() {
+    while (_is_work) {
+        _is_doing_work = false;
+        // wait at most report_task_interval_seconds, or being notified
+        std::unique_lock<std::mutex> worker_thread_lock(_worker_thread_lock);
+        _worker_thread_condition_variable.wait_for(
+                worker_thread_lock,
+                std::chrono::seconds(config::storage_refresh_storage_policy_task_interval_seconds));
+        if (!_is_work) {
+            break;
+        }
+
+        if (_master_info.network_address.port == 0) {
+            // port == 0 means not received heartbeat yet
+            // sleep a short time and try again
+            LOG(INFO)
+                    << "waiting to receive first heartbeat from frontend before doing task report";
+            continue;
+        }
+
+        _is_doing_work = true;
+
+        TGetStoragePolicyResult result;
+        Status status = _master_client->refresh_storage_policy(&result);
+        if (!status.ok()) {
+            LOG(WARNING) << "refresh storage policy status not ok. ";
+        } else if (result.status.status_code != TStatusCode::OK) {
+            LOG(WARNING) << "refresh storage policy result status status_code not ok. ";
+        } else {
+            // update storage policy mgr.
+            StoragePolicyMgr* spm = ExecEnv::GetInstance()->storage_policy_mgr();
+            for (const auto& iter : result.result_entrys) {
+                shared_ptr<StoragePolicy> policy_ptr = make_shared<StoragePolicy>();
+                policy_ptr->storage_policy_name = iter.policy_name;
+                policy_ptr->cooldown_datetime = iter.cooldown_datetime;
+                policy_ptr->cooldown_ttl = iter.cooldown_ttl;
+                policy_ptr->s3_endpoint = iter.s3_storage_param.s3_endpoint;
+                policy_ptr->s3_region = iter.s3_storage_param.s3_region;
+                policy_ptr->s3_ak = iter.s3_storage_param.s3_ak;
+                policy_ptr->s3_sk = iter.s3_storage_param.s3_sk;
+                policy_ptr->root_path = iter.s3_storage_param.root_path;
+                policy_ptr->bucket = iter.s3_storage_param.bucket;
+                policy_ptr->s3_conn_timeout_ms = iter.s3_storage_param.s3_conn_timeout_ms;
+                policy_ptr->s3_max_conn = iter.s3_storage_param.s3_max_conn;
+                policy_ptr->s3_request_timeout_ms = iter.s3_storage_param.s3_request_timeout_ms;
+                policy_ptr->md5_sum = iter.md5_checksum;
+
+                LOG(INFO) << "refresh storage policy task, policy " << *policy_ptr;
+                spm->periodic_put(iter.policy_name, std::move(policy_ptr));
+            }
+        }
+    }
+}
+
+void TaskWorkerPool::_storage_update_storage_policy_worker_thread_callback() {

Review Comment:
   _storage_refresh_storage_policy_worker_thread_callback() and _storage_update_storage_policy_worker_thread_callback() share too much duplicated code; extracting a helper function would be better.
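   A sketch of the shared part, based only on the field copies visible above; the Thrift entry type name TGetStoragePolicyResultEntry is an assumption:

   ```cpp
   // Hypothetical helper turning one Thrift policy entry into a StoragePolicy.
   // Both the refresh and the update callbacks could call it instead of repeating the copies.
   static StoragePolicyPtr to_storage_policy(const TGetStoragePolicyResultEntry& entry) {
       auto policy = std::make_shared<StoragePolicy>();
       policy->storage_policy_name = entry.policy_name;
       policy->cooldown_datetime = entry.cooldown_datetime;
       policy->cooldown_ttl = entry.cooldown_ttl;
       policy->s3_endpoint = entry.s3_storage_param.s3_endpoint;
       policy->s3_region = entry.s3_storage_param.s3_region;
       policy->s3_ak = entry.s3_storage_param.s3_ak;
       policy->s3_sk = entry.s3_storage_param.s3_sk;
       policy->root_path = entry.s3_storage_param.root_path;
       policy->bucket = entry.s3_storage_param.bucket;
       policy->s3_conn_timeout_ms = entry.s3_storage_param.s3_conn_timeout_ms;
       policy->s3_max_conn = entry.s3_storage_param.s3_max_conn;
       policy->s3_request_timeout_ms = entry.s3_storage_param.s3_request_timeout_ms;
       policy->md5_sum = entry.md5_checksum;
       return policy;
   }
   ```

   Each loop body would then reduce to something like spm->periodic_put(iter.policy_name, to_storage_policy(iter)) in the refresh callback, with the update callback reusing the same conversion.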





[GitHub] [doris] platoneko commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
platoneko commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r911963044


##########
be/src/olap/rowset/beta_rowset_writer.cpp:
##########
@@ -304,29 +270,28 @@ RowsetSharedPtr BetaRowsetWriter::build() {
 
 Status BetaRowsetWriter::_create_segment_writer(
         std::unique_ptr<segment_v2::SegmentWriter>* writer) {
-    auto path_desc =
-            BetaRowset::segment_file_path(_context.path_desc, _context.rowset_id, _num_segment++);
-    // TODO(lingbin): should use a more general way to get BlockManager object
-    // and tablets with the same type should share one BlockManager object;
-    fs::BlockManager* block_mgr = fs::fs_util::block_manager(_context.path_desc);
-    std::unique_ptr<fs::WritableBlock> wblock;
-    fs::CreateBlockOptions opts(path_desc);
-    DCHECK(block_mgr != nullptr);
-    Status st = block_mgr->create_block(opts, &wblock);
+    auto path = BetaRowset::local_segment_path(_context.tablet_path, _context.rowset_id,
+                                               _num_segment++);
+    auto fs = _rowset_meta->fs();
+    if (!fs) {

Review Comment:
   We print a WARNING log in RowsetMeta::fs().






[GitHub] [doris] github-actions[bot] commented on pull request #10280: [Feature] move cold data to object storage without losing any feature(BE)

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on PR #10280:
URL: https://github.com/apache/doris/pull/10280#issuecomment-1178530190

   PR approved by at least one committer and no changes requested.




[GitHub] [doris] platoneko commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
platoneko commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r911968188


##########
be/src/olap/tablet.cpp:
##########
@@ -1636,4 +1652,160 @@ std::shared_ptr<MemTracker>& Tablet::get_compaction_mem_tracker(CompactionType c
     }
 }
 
+Status Tablet::cooldown() {
+    std::unique_lock schema_change_lock(_schema_change_lock, std::try_to_lock);
+    if (!schema_change_lock.owns_lock()) {
+        LOG(WARNING) << "schema change is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    // Check executing serially with compaction task.
+    std::unique_lock base_compaction_lock(_base_compaction_lock, std::try_to_lock);
+    if (!base_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "base compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    std::unique_lock cumu_compaction_lock(_cumulative_compaction_lock, std::try_to_lock);
+    if (!cumu_compaction_lock.owns_lock()) {
+        LOG(WARNING) << "cumulative compaction is running. tablet=" << tablet_id();
+        return Status::OLAPInternalError(OLAP_ERR_BE_TRY_BE_LOCK_ERROR);
+    }
+    auto dest_fs = io::FileSystemMap::instance()->get(cooldown_resource());
+    if (!dest_fs) {
+        return Status::OLAPInternalError(OLAP_ERR_NOT_INITED);
+    }
+    DCHECK(dest_fs->type() == io::FileSystemType::S3);
+    auto old_rowset = pick_cooldown_rowset();
+    if (!old_rowset) {
+        LOG(WARNING) << "Cannot pick cooldown rowset in tablet " << tablet_id();
+        return Status::OK();
+    }
+    RowsetId new_rowset_id = StorageEngine::instance()->next_rowset_id();
+
+    auto start = std::chrono::steady_clock::now();
+
+    RETURN_IF_ERROR(old_rowset->upload_to(reinterpret_cast<io::RemoteFileSystem*>(dest_fs.get()),
+                                          new_rowset_id));
+
+    auto duration = std::chrono::duration<float>(std::chrono::steady_clock::now() - start);
+    LOG(INFO) << "Upload rowset " << old_rowset->version() << " " << new_rowset_id.to_string()
+              << " to " << dest_fs->root_path().native() << ", tablet_id=" << tablet_id()
+              << ", duration=" << duration.count() << ", capacity=" << old_rowset->data_disk_size()
+              << ", tp=" << old_rowset->data_disk_size() / duration.count();
+
+    // gen a new rowset
+    auto new_rowset_meta = std::make_shared<RowsetMeta>(*old_rowset->rowset_meta());
+    new_rowset_meta->set_rowset_id(new_rowset_id);
+    new_rowset_meta->set_resource_id(dest_fs->resource_id());
+    new_rowset_meta->set_fs(dest_fs);
+    new_rowset_meta->set_creation_time(time(nullptr));
+    RowsetSharedPtr new_rowset;
+    RowsetFactory::create_rowset(&_schema, _tablet_path, std::move(new_rowset_meta), &new_rowset);
+
+    std::vector to_add {std::move(new_rowset)};
+    std::vector to_delete {std::move(old_rowset)};
+
+    std::unique_lock meta_wlock(_meta_lock);
+    modify_rowsets(to_add, to_delete);
+    save_meta();
+    return Status::OK();
+}
+
+RowsetSharedPtr Tablet::pick_cooldown_rowset() {
+    RowsetSharedPtr rowset;
+    {
+        std::shared_lock meta_rlock(_meta_lock);
+
+        // We pick the rowset with smallest start version in local.
+        int64_t smallest_version = std::numeric_limits<int64_t>::max();
+        for (const auto& it : _rs_version_map) {
+            auto& rs = it.second;
+            if (rs->is_local() && rs->start_version() < smallest_version) {
+                smallest_version = rs->start_version();
+                rowset = rs;
+            }
+        }
+    }
+    return rowset;
+}
+
+bool Tablet::need_cooldown(int64_t* cooldown_timestamp, size_t* file_size) {
+    // std::shared_lock meta_rlock(_meta_lock);
+    if (cooldown_resource().empty()) {

Review Comment:
   Yeah, I will change it to storage_policy soon.





[GitHub] [doris] platoneko commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
platoneko commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r911967400


##########
be/src/olap/base_compaction.cpp:
##########
@@ -106,14 +106,11 @@ Status BaseCompaction::pick_rowsets_to_compact() {
     }
 
     // 2. the ratio between base rowset and all input cumulative rowsets reaches the threshold
-    int64_t base_size = 0;
+    // `_input_rowsets` has been sorted by end version, so we consider `_input_rowsets[0]` is the base rowset.
+    int64_t base_size = _input_rowsets.front()->data_disk_size();
     int64_t cumulative_total_size = 0;
-    for (auto& rowset : _input_rowsets) {
-        if (rowset->start_version() != 0) {
-            cumulative_total_size += rowset->data_disk_size();
-        } else {
-            base_size = rowset->data_disk_size();
-        }
+    for (auto it = _input_rowsets.begin() + 1; it != _input_rowsets.end(); ++it) {

Review Comment:
   _input_rowsets has been sorted at line 89, and _check_rowset_overlapping is called at line 91, so I think it is already in order.





[GitHub] [doris] platoneko commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
platoneko commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r914395465


##########
be/src/agent/task_worker_pool.cpp:
##########
@@ -437,6 +443,11 @@ void TaskWorkerPool::_drop_tablet_worker_thread_callback() {
             StorageEngine::instance()->txn_manager()->force_rollback_tablet_related_txns(
                     dropped_tablet->data_dir()->get_meta(), drop_tablet_req.tablet_id,
                     drop_tablet_req.schema_hash, dropped_tablet->tablet_uid());
+            // We remove remote rowset directly.
+            // TODO(cyx): do remove in background
+            if (drop_tablet_req.is_drop_table_or_partition) {
+                dropped_tablet->remove_all_remote_rowsets();

Review Comment:
   We will solve this in the next PR.





[GitHub] [doris] platoneko commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
platoneko commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r914394170


##########
fe/fe-core/src/main/java/org/apache/doris/task/DropReplicaTask.java:
##########
@@ -23,11 +23,13 @@
 public class DropReplicaTask extends AgentTask {
     private int schemaHash; // set -1L as unknown
     private long replicaId;
+    private boolean isDropTableOrPartition;

Review Comment:
   FE should distinguish whether this DropReplicaTask means "drop table/partition" or just "drop replica". For "drop replica", BE should not delete rowsets on remote storage, as those rowsets may still be used by other BEs.





[GitHub] [doris] pengxiangyu commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
pengxiangyu commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r909168589



##########
fe/fe-core/src/main/java/org/apache/doris/analysis/ModifyTablePropertiesClause.java:
##########
@@ -91,8 +101,11 @@ public void analyze(Analyzer analyzer) throws AnalysisException {
             throw new AnalysisException("Alter tablet type not supported");
         } else if (properties.containsKey(PropertyAnalyzer.PROPERTIES_REMOTE_STORAGE_RESOURCE)) {
             throw new AnalysisException("Alter table remote_storage_resource is not supported.");
+        } else if (properties.containsKey(PropertyAnalyzer.PROPERTIES_STORAGE_POLICY)) {
+            this.needTableStable = false;
+            setStoragePolicy(properties.getOrDefault(PropertyAnalyzer.PROPERTIES_STORAGE_POLICY, ""));
         } else {
-            throw new AnalysisException("Unknown table property: " + properties.keySet());
+                throw new AnalysisException("Unknown table property: " + properties.keySet());

Review Comment:
   Unexpected extra indentation.





[GitHub] [doris] pengxiangyu commented on a diff in pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
pengxiangyu commented on code in PR #10280:
URL: https://github.com/apache/doris/pull/10280#discussion_r912594141


##########
fe/fe-core/src/main/java/org/apache/doris/common/util/PropertyAnalyzer.java:
##########
@@ -213,7 +222,7 @@ public static DataProperty analyzeDataProperty(Map<String, String> properties, D
         }
 
         Preconditions.checkNotNull(storageMedium);
-        return new DataProperty(storageMedium, cooldownTimeStamp, remoteStorageResourceName, remoteCooldownTimeStamp);
+        return new DataProperty(storageMedium, cooldownTimeStamp, remoteStorageResourceName, remoteCooldownTimeStamp, storagePolicy);

Review Comment:
   If the MigrationHandler is on BE, remoteStorageResourceName and remoteCooldownTimeStamp are not useful; just remove them from DataProperty.



##########
fe/fe-core/src/main/java/org/apache/doris/task/DropReplicaTask.java:
##########
@@ -23,11 +23,13 @@
 public class DropReplicaTask extends AgentTask {
     private int schemaHash; // set -1L as unknown
     private long replicaId;
+    private boolean isDropTableOrPartition;

Review Comment:
   Add a comment for this field: what does true mean, and what about false?



##########
fe/fe-core/src/main/java/org/apache/doris/catalog/TableProperty.java:
##########
@@ -139,6 +141,15 @@ public TableProperty buildInMemory() {
         return this;
     }
 
+    public TableProperty buildStoragePolicy() {
+        storagePolicy = properties.getOrDefault(PropertyAnalyzer.PROPERTIES_STORAGE_POLICY, "");

Review Comment:
   Is the default ""? How about using default_storage_policy from Config?



##########
fe/fe-core/src/main/java/org/apache/doris/catalog/ResourceMgr.java:
##########
@@ -82,6 +85,20 @@ public void createResource(CreateResourceStmt stmt) throws DdlException {
         LOG.info("Create resource success. Resource: {}", resource);
     }
 
+    public void createDefaultStoragePolicy() throws DdlException {

Review Comment:
   StoragePolicyResource is not needed. Use StoragePolicy instead.



##########
fe/fe-core/src/main/java/org/apache/doris/catalog/DataProperty.java:
##########
@@ -46,6 +46,8 @@ public class DataProperty implements Writable {
     private String remoteStorageResourceName;

Review Comment:
   If the MigrationHandler is in BE, remoteStorageResourceName and remoteCooldownTimeMs are not needed; just remove them. The serialization for this class is JSON.



##########
fe/fe-core/src/main/java/org/apache/doris/catalog/StoragePolicyResource.java:
##########
@@ -0,0 +1,235 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.catalog;
+
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.proc.BaseProcResult;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.gson.annotations.SerializedName;
+import org.apache.doris.system.SystemInfoService;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.NotifyUpdateStoragePolicyTask;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Policy resource for olap table
+ *
+ * Syntax:
+ * CREATE RESOURCE "storage_policy_name"
+ * PROPERTIES(
+ *      "type"="storage_policy",
+ *      "cooldown_datetime" = "2022-06-01", // time when data is transfter to medium
+ *      "cooldown_ttl" = "3600", // data is transfter to medium after 1 hour
+ *      "s3_*"
+ * );
+ */
+public class StoragePolicyResource extends Resource {

Review Comment:
   This class should be replaced by StoragePolicy; as discussed before, it is a policy, not a resource.





[GitHub] [doris] dataroaring commented on pull request #10280: [Feature] move cold data to object storage without losing any feature

Posted by GitBox <gi...@apache.org>.
dataroaring commented on PR #10280:
URL: https://github.com/apache/doris/pull/10280#issuecomment-1172904244

   [Detailed design of cold-hot data separation (冷热分离详细设计.pdf)](https://github.com/apache/doris/files/9033096/default.pdf) @pengxiangyu 
   




[GitHub] [doris] github-actions[bot] commented on pull request #10280: [Feature] move cold data to object storage without losing any feature(BE)

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on PR #10280:
URL: https://github.com/apache/doris/pull/10280#issuecomment-1178530211

   PR approved by anyone and no changes requested.




[GitHub] [doris] morningman merged pull request #10280: [Feature] move cold data to object storage without losing any feature(BE)

Posted by GitBox <gi...@apache.org>.
morningman merged PR #10280:
URL: https://github.com/apache/doris/pull/10280

