You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@doris.apache.org by GitBox <gi...@apache.org> on 2020/03/25 03:38:39 UTC

[GitHub] [incubator-doris] xy720 opened a new pull request #3191: DeleteV2

xy720 opened a new pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191
 
 
   #3190

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407066340
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,170 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long transactionId, DeleteInfo deleteInfo) {
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    public void checkQuorum() throws DdlException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            LOG.warn("can not find database "+ dbId +" when commit delete");
 
 Review comment:
   if  `db == null`, the `state` will be left as `UN_QUORUM`, and then the `DeleteHandler` will still try to finish the job, which is wrong.
   
   You need to clarify the behavior of this function, such as what the return value is, when an exception is thrown, and whether the thrown exception can be properly handled by the caller.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408858798
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/qe/StmtExecutor.java
 ##########
 @@ -867,6 +867,8 @@ private void handleDdlStmt() {
         try {
             DdlExecutor.execute(context.getCatalog(), (DdlStmt) parsedStmt, originStmt);
             context.getState().setOk();
+        } catch (QueryStateException e) {
+            context.getState().setOk(0L, 0, e.getMessage());
 
 Review comment:
   1. QueryStateException should derived from UserException.
   2. Better to just create a `QueryState` inside the QueryStateException, and here you can just call `context.setState(e.getQueryState());`. If other people use this exception, he will know how to use it.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405608713
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
+        } catch (UserException e) {
+            LOG.warn("errors while commit delete, transaction [{}], reason is {}",
+                    transactionState.getTransactionId(),  e);
+            cancelTask(task, transactionState.getReason());
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    public void removeTask(DeleteTask task) {
+        task.unJoin();
+        writeLock();
+        try {
+            long signature = task.getSignature();
+            if (idToDeleteTask.containsKey(signature)) {
+                idToDeleteTask.remove(signature);
+            }
+            for (PushTask pushTask : task.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            if (task.isQuorum()) {
+                DeleteInfo deleteInfo = task.getDeleteInfo();
+                long dbId = deleteInfo.getDbId();
+                if (dbToDeleteInfos.containsKey(dbId)) {
+                    dbToDeleteInfos.get(dbId).add(deleteInfo);
+                } else {
+                    List<DeleteInfo> deleteInfoList = Lists.newArrayList();
+                    deleteInfoList.add(deleteInfo);
+                    dbToDeleteInfos.put(dbId, deleteInfoList);
+                }
+                Catalog.getInstance().getEditLog().logFinishSyncDelete(deleteInfo);
+            }
+        } finally {
+            writeUnlock();
+        }
+    }
+
+    public boolean cancelTask(DeleteTask task, String reason) {
+        try {
+            if (task != null) {
+                task.setCancel();
+                Catalog.getCurrentGlobalTransactionMgr().abortTransaction(
+                        task.getSignature(), reason);
+                return true;
+            }
+        } catch (Exception e) {
+            LOG.info("errors while abort transaction", e);
+        }
+        return false;
+    }
+
+    private void checkDeleteV2(OlapTable table, Partition partition, List<Predicate> conditions, List<String> deleteConditions, boolean preCheck)
+            throws DdlException {
+
+        // check partition state
+        Partition.PartitionState state = partition.getState();
+        if (state != Partition.PartitionState.NORMAL) {
+            // ErrorReport.reportDdlException(ErrorCode.ERR_BAD_PARTITION_STATE, partition.getName(), state.name());
+            throw new DdlException("Partition[" + partition.getName() + "]' state is not NORMAL: " + state.name());
+        }
+
+        // check condition column is key column and condition value
+        Map<String, Column> nameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+        for (Column column : table.getBaseSchema()) {
+            nameToColumn.put(column.getName(), column);
+        }
+        for (Predicate condition : conditions) {
+            SlotRef slotRef = null;
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                slotRef = (SlotRef) binaryPredicate.getChild(0);
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                slotRef = (SlotRef) isNullPredicate.getChild(0);
+            }
+            String columnName = slotRef.getColumnName();
+            if (!nameToColumn.containsKey(columnName)) {
+                ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, table.getName());
+            }
+
+            Column column = nameToColumn.get(columnName);
+            if (!column.isKey()) {
+                // ErrorReport.reportDdlException(ErrorCode.ERR_NOT_KEY_COLUMN, columnName);
+                throw new DdlException("Column[" + columnName + "] is not key column");
+            }
+
+            if (condition instanceof BinaryPredicate) {
+                String value = null;
+                try {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    value = ((LiteralExpr) binaryPredicate.getChild(1)).getStringValue();
+                    LiteralExpr.create(value, Type.fromPrimitiveType(column.getDataType()));
+                } catch (AnalysisException e) {
+                    // ErrorReport.reportDdlException(ErrorCode.ERR_INVALID_VALUE, value);
+                    throw new DdlException("Invalid column value[" + value + "]");
+                }
+            }
+
+            // set schema column name
+            slotRef.setCol(column.getName());
+        }
+        Map<Long, List<Column>> indexIdToSchema = table.getIndexIdToSchema();
+        for (MaterializedIndex index : partition.getMaterializedIndices(MaterializedIndex.IndexExtState.VISIBLE)) {
+            // check table has condition column
+            Map<String, Column> indexColNameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+            for (Column column : indexIdToSchema.get(index.getId())) {
+                indexColNameToColumn.put(column.getName(), column);
+            }
+            String indexName = table.getIndexNameById(index.getId());
+            for (Predicate condition : conditions) {
+                String columnName = null;
+                if (condition instanceof BinaryPredicate) {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    columnName = ((SlotRef) binaryPredicate.getChild(0)).getColumnName();
+                } else if (condition instanceof IsNullPredicate) {
+                    IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                    columnName = ((SlotRef) isNullPredicate.getChild(0)).getColumnName();
+                }
+                Column column = indexColNameToColumn.get(columnName);
+                if (column == null) {
+                    ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, indexName);
+                }
+
+                if (table.getKeysType() == KeysType.DUP_KEYS && !column.isKey()) {
+                    throw new DdlException("Column[" + columnName + "] is not key column in index[" + indexName + "]");
+                }
+            }
+        }
+
+        if (deleteConditions == null) {
+            return;
+        }
+
+        // save delete conditions
+        for (Predicate condition : conditions) {
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                SlotRef slotRef = (SlotRef) binaryPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName).append(" ").append(binaryPredicate.getOp().name()).append(" \"")
+                        .append(((LiteralExpr) binaryPredicate.getChild(1)).getStringValue()).append("\"");
+                deleteConditions.add(sb.toString());
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                SlotRef slotRef = (SlotRef) isNullPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName);
+                if (isNullPredicate.isNotNull()) {
+                    sb.append(" IS NOT NULL");
+                } else {
+                    sb.append(" IS NULL");
+                }
+                deleteConditions.add(sb.toString());
+            }
+        }
+    }
+
+    // show delete stmt
+    public List<List<Comparable>> getDeleteInfosByDb(long dbId, boolean forUser) {
+        LinkedList<List<Comparable>> infos = new LinkedList<List<Comparable>>();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            return infos;
+        }
+
+        String dbName = db.getFullName();
+        readLock();
+        try {
+            List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+            if (deleteInfos == null) {
+                return infos;
+            }
+
+            for (DeleteInfo deleteInfo : deleteInfos) {
+
+                if (!Catalog.getCurrentCatalog().getAuth().checkTblPriv(ConnectContext.get(), dbName,
+                        deleteInfo.getTableName(),
+                        PrivPredicate.LOAD)) {
+                    continue;
+                }
+
+
+                List<Comparable> info = Lists.newArrayList();
+                if (!forUser) {
+                    // There is no job for delete, set job id to -1
+                    info.add(-1L);
+                    info.add(deleteInfo.getTableId());
+                }
+                info.add(deleteInfo.getTableName());
+                if (!forUser) {
+                    info.add(deleteInfo.getPartitionId());
+                }
+                info.add(deleteInfo.getPartitionName());
+
+                info.add(TimeUtils.longToTimeString(deleteInfo.getCreateTimeMs()));
+                String conds = Joiner.on(", ").join(deleteInfo.getDeleteConditions());
+                info.add(conds);
+
+                if (!forUser) {
+                    info.add(deleteInfo.getPartitionVersion());
+                    info.add(deleteInfo.getPartitionVersionHash());
+                }
+                // for loading state, should not display loading, show deleting instead
+//                if (loadJob.getState() == LoadJob.JobState.LOADING) {
+//                    info.add("DELETING");
+//                } else {
+//                    info.add(loadJob.getState().name());
+//                }
+                info.add("FINISHED");
+                infos.add(info);
+            }
+
+        } finally {
+            readUnlock();
+        }
+
+        // sort by createTimeMs
+        int sortIndex;
+        if (!forUser) {
+            sortIndex = 5;
+        } else {
+            sortIndex = 2;
+        }
+        ListComparator<List<Comparable>> comparator = new ListComparator<List<Comparable>>(sortIndex);
+        Collections.sort(infos, comparator);
+        return infos;
+    }
+
+    public boolean addFinishedReplica(Long transactionId, long tabletId, Replica replica) {
+        writeLock();
+        try {
+            DeleteTask task = idToDeleteTask.get(transactionId);
+            if (task != null) {
+                return task.addFinishedReplica(tabletId, replica);
+            } else {
+                return false;
+            }
+        } finally {
+            writeUnlock();
+        }
+    }
+
+    public void replayDelete(DeleteInfo deleteInfo, Catalog catalog) {
+        Database db = catalog.getDb(deleteInfo.getDbId());
+        db.writeLock();
+        try {
+            writeLock();
+            try {
+                // add to deleteInfos
+                long dbId = deleteInfo.getDbId();
+                List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+                if (deleteInfos == null) {
+                    deleteInfos = Lists.newArrayList();
+                    dbToDeleteInfos.put(dbId, deleteInfos);
+                }
+                deleteInfos.add(deleteInfo);
+            } finally {
+                writeUnlock();
+            }
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    // for delete handler, we only persist those delete already finished.
+    @Override
+    public void write(DataOutput out) throws IOException {
+        out.writeInt(dbToDeleteInfos.size());
 
 Review comment:
   Use GSON serde

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407065509
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+            // count total replica num
+            int totalReplicaNum = 0;
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                for (Tablet tablet : index.getTablets()) {
+                    totalReplicaNum += tablet.getReplicas().size();
+                }
+            }
+            countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        long backendId = replica.getBackendId();
+                        countDownLatch.addMark(backendId, tabletId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
 
 Review comment:
   `isSchemaChaning` is useless, just set it to false

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408084028
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,185 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    // jobId(listenerId). use in beginTransaction to callback function
+    private long id;
+    // transaction id.
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
 
 Review comment:
   There will be concurrent issue when visiting `tabletDeleteInfoMap` from different thread: the report task thread and DeleteHandler thread. So you should use ConcurrentHashMap

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405599797
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
 
 Review comment:
   If begin transaction failed, exception will be thrown. So no need to check here.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406810388
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
+        } catch (UserException e) {
+            LOG.warn("errors while commit delete, transaction [{}], reason is {}",
+                    transactionState.getTransactionId(),  e);
+            cancelTask(task, transactionState.getReason());
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    public void removeTask(DeleteTask task) {
+        task.unJoin();
+        writeLock();
+        try {
+            long signature = task.getSignature();
+            if (idToDeleteTask.containsKey(signature)) {
+                idToDeleteTask.remove(signature);
+            }
+            for (PushTask pushTask : task.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            if (task.isQuorum()) {
+                DeleteInfo deleteInfo = task.getDeleteInfo();
+                long dbId = deleteInfo.getDbId();
+                if (dbToDeleteInfos.containsKey(dbId)) {
+                    dbToDeleteInfos.get(dbId).add(deleteInfo);
+                } else {
+                    List<DeleteInfo> deleteInfoList = Lists.newArrayList();
+                    deleteInfoList.add(deleteInfo);
+                    dbToDeleteInfos.put(dbId, deleteInfoList);
+                }
+                Catalog.getInstance().getEditLog().logFinishSyncDelete(deleteInfo);
+            }
+        } finally {
+            writeUnlock();
+        }
+    }
+
+    public boolean cancelTask(DeleteTask task, String reason) {
+        try {
+            if (task != null) {
+                task.setCancel();
+                Catalog.getCurrentGlobalTransactionMgr().abortTransaction(
+                        task.getSignature(), reason);
+                return true;
+            }
+        } catch (Exception e) {
+            LOG.info("errors while abort transaction", e);
+        }
+        return false;
+    }
+
+    private void checkDeleteV2(OlapTable table, Partition partition, List<Predicate> conditions, List<String> deleteConditions, boolean preCheck)
+            throws DdlException {
+
+        // check partition state
+        Partition.PartitionState state = partition.getState();
+        if (state != Partition.PartitionState.NORMAL) {
+            // ErrorReport.reportDdlException(ErrorCode.ERR_BAD_PARTITION_STATE, partition.getName(), state.name());
+            throw new DdlException("Partition[" + partition.getName() + "]' state is not NORMAL: " + state.name());
+        }
+
+        // check condition column is key column and condition value
+        Map<String, Column> nameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+        for (Column column : table.getBaseSchema()) {
+            nameToColumn.put(column.getName(), column);
+        }
+        for (Predicate condition : conditions) {
+            SlotRef slotRef = null;
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                slotRef = (SlotRef) binaryPredicate.getChild(0);
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                slotRef = (SlotRef) isNullPredicate.getChild(0);
+            }
+            String columnName = slotRef.getColumnName();
+            if (!nameToColumn.containsKey(columnName)) {
+                ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, table.getName());
+            }
+
+            Column column = nameToColumn.get(columnName);
+            if (!column.isKey()) {
+                // ErrorReport.reportDdlException(ErrorCode.ERR_NOT_KEY_COLUMN, columnName);
+                throw new DdlException("Column[" + columnName + "] is not key column");
+            }
+
+            if (condition instanceof BinaryPredicate) {
+                String value = null;
+                try {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    value = ((LiteralExpr) binaryPredicate.getChild(1)).getStringValue();
+                    LiteralExpr.create(value, Type.fromPrimitiveType(column.getDataType()));
+                } catch (AnalysisException e) {
+                    // ErrorReport.reportDdlException(ErrorCode.ERR_INVALID_VALUE, value);
+                    throw new DdlException("Invalid column value[" + value + "]");
+                }
+            }
+
+            // set schema column name
+            slotRef.setCol(column.getName());
+        }
+        Map<Long, List<Column>> indexIdToSchema = table.getIndexIdToSchema();
+        for (MaterializedIndex index : partition.getMaterializedIndices(MaterializedIndex.IndexExtState.VISIBLE)) {
+            // check table has condition column
+            Map<String, Column> indexColNameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+            for (Column column : indexIdToSchema.get(index.getId())) {
+                indexColNameToColumn.put(column.getName(), column);
+            }
+            String indexName = table.getIndexNameById(index.getId());
+            for (Predicate condition : conditions) {
+                String columnName = null;
+                if (condition instanceof BinaryPredicate) {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    columnName = ((SlotRef) binaryPredicate.getChild(0)).getColumnName();
+                } else if (condition instanceof IsNullPredicate) {
+                    IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                    columnName = ((SlotRef) isNullPredicate.getChild(0)).getColumnName();
+                }
+                Column column = indexColNameToColumn.get(columnName);
+                if (column == null) {
+                    ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, indexName);
+                }
+
+                if (table.getKeysType() == KeysType.DUP_KEYS && !column.isKey()) {
+                    throw new DdlException("Column[" + columnName + "] is not key column in index[" + indexName + "]");
+                }
+            }
+        }
+
+        if (deleteConditions == null) {
+            return;
+        }
+
+        // save delete conditions
+        for (Predicate condition : conditions) {
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                SlotRef slotRef = (SlotRef) binaryPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName).append(" ").append(binaryPredicate.getOp().name()).append(" \"")
+                        .append(((LiteralExpr) binaryPredicate.getChild(1)).getStringValue()).append("\"");
+                deleteConditions.add(sb.toString());
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                SlotRef slotRef = (SlotRef) isNullPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName);
+                if (isNullPredicate.isNotNull()) {
+                    sb.append(" IS NOT NULL");
+                } else {
+                    sb.append(" IS NULL");
+                }
+                deleteConditions.add(sb.toString());
+            }
+        }
+    }
+
+    // show delete stmt
+    public List<List<Comparable>> getDeleteInfosByDb(long dbId, boolean forUser) {
+        LinkedList<List<Comparable>> infos = new LinkedList<List<Comparable>>();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            return infos;
+        }
+
+        String dbName = db.getFullName();
+        readLock();
+        try {
+            List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+            if (deleteInfos == null) {
+                return infos;
+            }
+
+            for (DeleteInfo deleteInfo : deleteInfos) {
+
+                if (!Catalog.getCurrentCatalog().getAuth().checkTblPriv(ConnectContext.get(), dbName,
+                        deleteInfo.getTableName(),
+                        PrivPredicate.LOAD)) {
+                    continue;
+                }
+
+
+                List<Comparable> info = Lists.newArrayList();
+                if (!forUser) {
+                    // There is no job for delete, set job id to -1
+                    info.add(-1L);
+                    info.add(deleteInfo.getTableId());
+                }
+                info.add(deleteInfo.getTableName());
+                if (!forUser) {
+                    info.add(deleteInfo.getPartitionId());
+                }
+                info.add(deleteInfo.getPartitionName());
+
+                info.add(TimeUtils.longToTimeString(deleteInfo.getCreateTimeMs()));
+                String conds = Joiner.on(", ").join(deleteInfo.getDeleteConditions());
+                info.add(conds);
+
+                if (!forUser) {
+                    info.add(deleteInfo.getPartitionVersion());
+                    info.add(deleteInfo.getPartitionVersionHash());
+                }
+                // for loading state, should not display loading, show deleting instead
+//                if (loadJob.getState() == LoadJob.JobState.LOADING) {
+//                    info.add("DELETING");
+//                } else {
+//                    info.add(loadJob.getState().name());
+//                }
+                info.add("FINISHED");
+                infos.add(info);
+            }
+
+        } finally {
+            readUnlock();
+        }
+
+        // sort by createTimeMs
+        int sortIndex;
+        if (!forUser) {
+            sortIndex = 5;
+        } else {
+            sortIndex = 2;
+        }
+        ListComparator<List<Comparable>> comparator = new ListComparator<List<Comparable>>(sortIndex);
+        Collections.sort(infos, comparator);
+        return infos;
+    }
+
+    public boolean addFinishedReplica(Long transactionId, long tabletId, Replica replica) {
+        writeLock();
+        try {
+            DeleteTask task = idToDeleteTask.get(transactionId);
+            if (task != null) {
+                return task.addFinishedReplica(tabletId, replica);
+            } else {
+                return false;
+            }
+        } finally {
+            writeUnlock();
+        }
+    }
+
+    public void replayDelete(DeleteInfo deleteInfo, Catalog catalog) {
+        Database db = catalog.getDb(deleteInfo.getDbId());
+        db.writeLock();
+        try {
+            writeLock();
+            try {
+                // add to deleteInfos
+                long dbId = deleteInfo.getDbId();
+                List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+                if (deleteInfos == null) {
+                    deleteInfos = Lists.newArrayList();
+                    dbToDeleteInfos.put(dbId, deleteInfos);
+                }
+                deleteInfos.add(deleteInfo);
+            } finally {
+                writeUnlock();
+            }
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    // for delete handler, we only persist those delete already finished.
+    @Override
+    public void write(DataOutput out) throws IOException {
+        out.writeInt(dbToDeleteInfos.size());
 
 Review comment:
   There is problem that using GSON read() have to create a new object, but I wish to keep the old object and only use a readField() method.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408865865
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,190 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    // jobId(listenerId). use in beginTransaction to callback function
+    private long id;
+    // transaction id.
+    private long signature;
+    private String label;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long id, long transactionId, String label, DeleteInfo deleteInfo) {
+        this.id = id;
+        this.signature = transactionId;
+        this.label = label;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    /**
+     * check and update if this job's state is QUORUM_FINISHED or FINISHED
+     * The meaning of state:
+     * QUORUM_FINISHED: For each tablet there are more than half of its replicas have been finished
+     * FINISHED: All replicas of this jobs have finished
+     */
+    public void checkAndUpdateQuorum() throws MetaNotFoundException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            throw new MetaNotFoundException("can not find database "+ dbId +" when commit delete");
+        }
+
+        short replicaNum = -1;
+        db.readLock();
+        try {
+            OlapTable table = (OlapTable) db.getTable(tableId);
+            if (table == null) {
+                throw new MetaNotFoundException("can not find table "+ tableId +" when commit delete");
+            }
+            replicaNum = table.getPartitionInfo().getReplicationNum(partitionId);
+        } finally {
+            db.readUnlock();
+        }
+
+        short quorumNum = (short) (replicaNum / 2 + 1);
+        for (TabletDeleteInfo tDeleteInfo : getTabletDeleteInfo()) {
+            if (tDeleteInfo.getFinishedReplicas().size() == replicaNum) {
+                finishedTablets.add(tDeleteInfo.getTabletId());
+            }
+            if (tDeleteInfo.getFinishedReplicas().size() >= quorumNum) {
+                quorumTablets.add(tDeleteInfo.getTabletId());
+            }
+        }
+        LOG.info("check delete job quorum, transaction id: {}, total tablets: {}, quorum tablets: {},",
+                signature, totalTablets.size(), quorumTablets.size());
+
+        if (finishedTablets.containsAll(totalTablets)) {
+            setState(DeleteState.FINISHED);
+        } else if (quorumTablets.containsAll(totalTablets)) {
+            setState(DeleteState.QUORUM_FINISHED);
+        }
+    }
+
+    public void setState(DeleteState state) {
+        this.state = state;
+    }
+
+    public DeleteState getState() {
+        return this.state;
+    }
+
+    public boolean addTablet(long tabletId) {
+        return totalTablets.add(tabletId);
+    }
+
+    public boolean addPushTask(PushTask pushTask) {
+        return pushTasks.add(pushTask);
+    }
+
+    public boolean addFinishedReplica(long tabletId, Replica replica) {
+        tabletDeleteInfoMap.putIfAbsent(tabletId, new TabletDeleteInfo(tabletId));
+        TabletDeleteInfo tDeleteInfo =  tabletDeleteInfoMap.get(tabletId);
+        synchronized (tDeleteInfo) {
 
 Review comment:
   No need to use `synchronized` here, I think you can just use a `ConcurrentSet` in `TabletDeleteInfo`

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405600226
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
 
 Review comment:
   Use a concurrent map is enough

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406808944
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
 
 Review comment:
   I've deleted it

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407067834
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+            // count total replica num
+            int totalReplicaNum = 0;
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                for (Tablet tablet : index.getTablets()) {
+                    totalReplicaNum += tablet.getReplicas().size();
+                }
+            }
+            countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        long backendId = replica.getBackendId();
+                        countDownLatch.addMark(backendId, tabletId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+                        pushTask.setCountDownLatch(countDownLatch);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteJob.addPushTask(pushTask);
+                            deleteJob.addTablet(tabletId);
+                        }
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeoutMs = deleteJob.getTimeout();
+        LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+        boolean ok = false;
+        try {
+            ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+        } catch (InterruptedException e) {
+            LOG.warn("InterruptedException: ", e);
+            ok = false;
+        }
+
+        if (ok) {
+            commitJob(deleteJob, db, timeoutMs);
+        } else {
+            deleteJob.checkQuorum();
+            if (deleteJob.getState() != DeleteState.UN_QUORUM) {
+                long nowQuorumTimeMs = System.currentTimeMillis();
+                long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                // if job's state is finished or stay in quorum_finished for long time, try to commit it.
+                try {
+                    while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                        deleteJob.checkQuorum();
+                        Thread.sleep(1000);
+                        nowQuorumTimeMs = System.currentTimeMillis();
+                    }
+                    commitJob(deleteJob, db, timeoutMs);
+                } catch (InterruptedException e) {
+                }
+            } else {
+                List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                // only show at most 5 results
+                List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                LOG.warn("delete job timeout: {}, {}", transactionId, errMsg);
+                cancelJob(deleteJob, "delete job timeout");
+                throw new DdlException("failed to delete replicas from job: {}, {}, transactionId, errMsg");
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeout) {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            boolean isSuccess = globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeout);
+            if (!isSuccess) {
+                cancelJob(job, "delete timeout when waiting transaction commit");
 
 Review comment:
   `!isSuccess` does not mean the transaction is failed, it means the transaction is committed and publish timeout (not VISIBLE yet). So you should not cancel the job, but should tell the user the delete is success but will be taking effect later.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407067217
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+            // count total replica num
+            int totalReplicaNum = 0;
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                for (Tablet tablet : index.getTablets()) {
+                    totalReplicaNum += tablet.getReplicas().size();
+                }
+            }
+            countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        long backendId = replica.getBackendId();
+                        countDownLatch.addMark(backendId, tabletId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+                        pushTask.setCountDownLatch(countDownLatch);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteJob.addPushTask(pushTask);
+                            deleteJob.addTablet(tabletId);
+                        }
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeoutMs = deleteJob.getTimeout();
+        LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+        boolean ok = false;
+        try {
+            ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+        } catch (InterruptedException e) {
+            LOG.warn("InterruptedException: ", e);
+            ok = false;
+        }
+
+        if (ok) {
+            commitJob(deleteJob, db, timeoutMs);
+        } else {
+            deleteJob.checkQuorum();
+            if (deleteJob.getState() != DeleteState.UN_QUORUM) {
+                long nowQuorumTimeMs = System.currentTimeMillis();
+                long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                // if job's state is finished or stay in quorum_finished for long time, try to commit it.
+                try {
+                    while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                        deleteJob.checkQuorum();
+                        Thread.sleep(1000);
+                        nowQuorumTimeMs = System.currentTimeMillis();
+                    }
+                    commitJob(deleteJob, db, timeoutMs);
+                } catch (InterruptedException e) {
 
 Review comment:
   You should handle this exception, for example, abort the transaction.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] kangpinghuang commented on issue #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
kangpinghuang commented on issue #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#issuecomment-604197398
 
 
   could you add some performance optimization result description?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408761807
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/persist/EditLog.java
 ##########
 @@ -378,8 +379,8 @@ public static void loadJournal(Catalog catalog, JournalEntity journal) {
                     break;
                 case OperationType.OP_FINISH_SYNC_DELETE: {
                     DeleteInfo info = (DeleteInfo) journal.getData();
-                    Load load = catalog.getLoadInstance();
-                    load.replayDelete(info, catalog);
+                    DeleteHandler deleteHandler = catalog.getDeleteHandler();
 
 Review comment:
   Now add a new operation OP_FINISH_DELETE

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408082635
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/qe/ShowExecutor.java
 ##########
 @@ -1053,8 +1054,8 @@ private void handleShowDelete() throws AnalysisException {
         }
         long dbId = db.getId();
 
-        Load load = catalog.getLoadInstance();
-        List<List<Comparable>> deleteInfos = load.getDeleteInfosByDb(dbId, true);
+        DeleteHandler deleteHandler = catalog.getDeleteHandler();
 
 Review comment:
   You should also show the delete info from `Load`, or after we upgrade the Doris, the old delete info can not be seen.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405610446
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
 
 Review comment:
   this logic is wrong.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407066403
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,170 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long transactionId, DeleteInfo deleteInfo) {
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    public void checkQuorum() throws DdlException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            LOG.warn("can not find database "+ dbId +" when commit delete");
+            return;
+        }
+
+        short replicaNum = -1;
+        db.readLock();
+        try {
+            OlapTable table = (OlapTable) db.getTable(tableId);
+            if (table == null) {
+                LOG.warn("can not find table "+ tableId +" when commit delete");
 
 Review comment:
   Same problem to `db==null`.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408761574
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+            // count total replica num
+            int totalReplicaNum = 0;
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                for (Tablet tablet : index.getTablets()) {
+                    totalReplicaNum += tablet.getReplicas().size();
+                }
+            }
+            countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        long backendId = replica.getBackendId();
+                        countDownLatch.addMark(backendId, tabletId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+                        pushTask.setCountDownLatch(countDownLatch);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteJob.addPushTask(pushTask);
+                            deleteJob.addTablet(tabletId);
+                        }
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeoutMs = deleteJob.getTimeout();
+        LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+        boolean ok = false;
+        try {
+            ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+        } catch (InterruptedException e) {
+            LOG.warn("InterruptedException: ", e);
+            ok = false;
+        }
+
+        if (ok) {
+            commitJob(deleteJob, db, timeoutMs);
+        } else {
+            deleteJob.checkQuorum();
+            if (deleteJob.getState() != DeleteState.UN_QUORUM) {
+                long nowQuorumTimeMs = System.currentTimeMillis();
+                long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                // if job's state is finished or stay in quorum_finished for long time, try to commit it.
+                try {
+                    while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                        deleteJob.checkQuorum();
+                        Thread.sleep(1000);
+                        nowQuorumTimeMs = System.currentTimeMillis();
+                    }
+                    commitJob(deleteJob, db, timeoutMs);
+                } catch (InterruptedException e) {
+                }
+            } else {
+                List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                // only show at most 5 results
+                List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                LOG.warn("delete job timeout: {}, {}", transactionId, errMsg);
+                cancelJob(deleteJob, "delete job timeout");
+                throw new DdlException("failed to delete replicas from job: {}, {}, transactionId, errMsg");
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeout) {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            boolean isSuccess = globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeout);
+            if (!isSuccess) {
+                cancelJob(job, "delete timeout when waiting transaction commit");
 
 Review comment:
   Done

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407065963
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,170 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long transactionId, DeleteInfo deleteInfo) {
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    public void checkQuorum() throws DdlException {
 
 Review comment:
   Add comment for this method.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406811099
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
+        } catch (UserException e) {
+            LOG.warn("errors while commit delete, transaction [{}], reason is {}",
+                    transactionState.getTransactionId(),  e);
+            cancelTask(task, transactionState.getReason());
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    public void removeTask(DeleteTask task) {
+        task.unJoin();
+        writeLock();
+        try {
+            long signature = task.getSignature();
+            if (idToDeleteTask.containsKey(signature)) {
+                idToDeleteTask.remove(signature);
+            }
+            for (PushTask pushTask : task.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            if (task.isQuorum()) {
+                DeleteInfo deleteInfo = task.getDeleteInfo();
+                long dbId = deleteInfo.getDbId();
+                if (dbToDeleteInfos.containsKey(dbId)) {
+                    dbToDeleteInfos.get(dbId).add(deleteInfo);
+                } else {
+                    List<DeleteInfo> deleteInfoList = Lists.newArrayList();
+                    deleteInfoList.add(deleteInfo);
+                    dbToDeleteInfos.put(dbId, deleteInfoList);
+                }
+                Catalog.getInstance().getEditLog().logFinishSyncDelete(deleteInfo);
 
 Review comment:
   I will put it in afterVisible() in interface AbstractTxnStateChangeCallback instead.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408084517
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,185 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    // jobId(listenerId). use in beginTransaction to callback function
+    private long id;
+    // transaction id.
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long id, long transactionId, DeleteInfo deleteInfo) {
+        this.id = id;
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    /**
+     * check and update if this job's state is QUORUM_FINISHED or FINISHED
+     * The meaning of state:
+     * QUORUM_FINISHED: For each tablet there are more than half of its replicas have been finished
+     * FINISHED: All replicas of this jobs have finished
+     */
+    public void checkAndUpdateQuorum() throws MetaNotFoundException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            throw new MetaNotFoundException("can not find database "+ dbId +" when commit delete");
+        }
+
+        short replicaNum = -1;
+        db.readLock();
+        try {
+            OlapTable table = (OlapTable) db.getTable(tableId);
+            if (table == null) {
+                throw new MetaNotFoundException("can not find table "+ tableId +" when commit delete");
+            }
+            replicaNum = table.getPartitionInfo().getReplicationNum(partitionId);
+        } finally {
+            db.readUnlock();
+        }
+
+        short quorumNum = (short) (replicaNum / 2 + 1);
+        for (TabletDeleteInfo tDeleteInfo : getTabletDeleteInfo()) {
+            if (tDeleteInfo.getFinishedReplicas().size() == replicaNum) {
+                finishedTablets.add(tDeleteInfo.getTabletId());
+            }
+            if (tDeleteInfo.getFinishedReplicas().size() >= quorumNum) {
+                quorumTablets.add(tDeleteInfo.getTabletId());
+            }
+        }
+        LOG.info("check delete job quorum, transaction id: {}, total tablets: {}, quorum tablets: {},",
+                signature, totalTablets.size(), quorumTablets.size());
+
+        if (finishedTablets.containsAll(totalTablets)) {
+            setState(DeleteState.FINISHED);
+        } else if (quorumTablets.containsAll(totalTablets)) {
+            setState(DeleteState.QUORUM_FINISHED);
+        }
+    }
+
+    public void setState(DeleteState state) {
+        this.state = state;
+    }
+
+    public DeleteState getState() {
+        return this.state;
+    }
+
+    public boolean addTablet(long tabletId) {
+        return totalTablets.add(tabletId);
+    }
+
+    public boolean addPushTask(PushTask pushTask) {
+        return pushTasks.add(pushTask);
+    }
+
+    public boolean addFinishedReplica(long tabletId, Replica replica) {
+        TabletDeleteInfo tDeleteInfo = tabletDeleteInfoMap.get(tabletId);
 
 Review comment:
   After changing `tabletDeleteInfoMap` to ConncurrentHashMap, you should use `putIfAbsent` to perform the atomic operation here.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408761900
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/qe/ShowExecutor.java
 ##########
 @@ -1053,8 +1054,8 @@ private void handleShowDelete() throws AnalysisException {
         }
         long dbId = db.getId();
 
-        Load load = catalog.getLoadInstance();
-        List<List<Comparable>> deleteInfos = load.getDeleteInfosByDb(dbId, true);
+        DeleteHandler deleteHandler = catalog.getDeleteHandler();
 
 Review comment:
   done

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405617415
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
+        } catch (UserException e) {
+            LOG.warn("errors while commit delete, transaction [{}], reason is {}",
+                    transactionState.getTransactionId(),  e);
+            cancelTask(task, transactionState.getReason());
+        } finally {
+            db.writeUnlock();
+        }
+    }
+
+    public void removeTask(DeleteTask task) {
+        task.unJoin();
+        writeLock();
+        try {
+            long signature = task.getSignature();
+            if (idToDeleteTask.containsKey(signature)) {
+                idToDeleteTask.remove(signature);
+            }
+            for (PushTask pushTask : task.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            if (task.isQuorum()) {
+                DeleteInfo deleteInfo = task.getDeleteInfo();
+                long dbId = deleteInfo.getDbId();
+                if (dbToDeleteInfos.containsKey(dbId)) {
+                    dbToDeleteInfos.get(dbId).add(deleteInfo);
+                } else {
+                    List<DeleteInfo> deleteInfoList = Lists.newArrayList();
+                    deleteInfoList.add(deleteInfo);
+                    dbToDeleteInfos.put(dbId, deleteInfoList);
+                }
+                Catalog.getInstance().getEditLog().logFinishSyncDelete(deleteInfo);
 
 Review comment:
   You can not write log here, because the transaction may commit fail

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408762387
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,627 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.FeConstants;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    private enum CancelType {
+        METADATA_MISSING,
+        TIMEOUT,
+        COMMIT_FAIL,
+        UNKNOWN
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        try {
+            MarkedCountDownLatch<Long, Long> countDownLatch;
+            long transactionId = -1;
+            db.readLock();
+            try {
+                Table table = db.getTable(tableName);
+                if (table == null) {
+                    throw new DdlException("Table does not exist. name: " + tableName);
+                }
+
+                if (table.getType() != Table.TableType.OLAP) {
+                    throw new DdlException("Not olap type table. type: " + table.getType().name());
+                }
+                OlapTable olapTable = (OlapTable) table;
+
+                if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                    throw new DdlException("Table's state is not normal: " + tableName);
+                }
+
+                if (partitionName == null) {
+                    if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                        throw new DdlException("This is a range partitioned table."
+                                + " You should specify partition in delete stmt");
+                    } else {
+                        // this is a unpartitioned table, use table name as partition name
+                        partitionName = olapTable.getName();
+                    }
+                }
+
+                Partition partition = olapTable.getPartition(partitionName);
+                if (partition == null) {
+                    throw new DdlException("Partition does not exist. name: " + partitionName);
+                }
+
+                List<String> deleteConditions = Lists.newArrayList();
+
+                // pre check
+                checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+                // generate label
+                String label = "delete_" + UUID.randomUUID();
+                //generate jobId
+                long jobId = Catalog.getCurrentCatalog().getNextId();
+                // begin txn here and generate txn id
+                transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                        Lists.newArrayList(table.getId()), label, null, "FE: " + FrontendOptions.getLocalHostAddress(),
+                        TransactionState.LoadJobSourceType.FRONTEND, jobId, Config.stream_load_default_timeout_second);
+
+                DeleteInfo deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                        partition.getId(), partitionName,
+                        -1, 0, deleteConditions);
+                deleteJob = new DeleteJob(jobId, transactionId, deleteInfo);
+                idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+
+                Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+                // task sent to be
+                AgentBatchTask batchTask = new AgentBatchTask();
+                // count total replica num
+                int totalReplicaNum = 0;
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    for (Tablet tablet : index.getTablets()) {
+                        totalReplicaNum += tablet.getReplicas().size();
+                    }
+                }
+                countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    long indexId = index.getId();
+                    int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                    for (Tablet tablet : index.getTablets()) {
+                        long tabletId = tablet.getId();
+
+                        // set push type
+                        TPushType type = TPushType.DELETE;
+
+                        for (Replica replica : tablet.getReplicas()) {
+                            long replicaId = replica.getId();
+                            long backendId = replica.getBackendId();
+                            countDownLatch.addMark(backendId, tabletId);
+
+                            // create push task for each replica
+                            PushTask pushTask = new PushTask(null,
+                                    replica.getBackendId(), db.getId(), olapTable.getId(),
+                                    partition.getId(), indexId,
+                                    tabletId, replicaId, schemaHash,
+                                    -1, 0, "", -1, 0,
+                                    -1, type, conditions,
+                                    true, TPriority.NORMAL,
+                                    TTaskType.REALTIME_PUSH,
+                                    transactionId,
+                                    Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                            pushTask.setIsSchemaChanging(false);
+                            pushTask.setCountDownLatch(countDownLatch);
+
+                            if (AgentTaskQueue.addTask(pushTask)) {
+                                batchTask.addTask(pushTask);
+                                deleteJob.addPushTask(pushTask);
+                                deleteJob.addTablet(tabletId);
+                            }
+                        }
+                    }
+                }
+
+                // submit push tasks
+                if (batchTask.getTaskNum() > 0) {
+                    AgentTaskExecutor.submit(batchTask);
+                }
+
+            } catch (Throwable t) {
+                LOG.warn("error occurred during delete process", t);
+                // if transaction has been begun, need to abort it
+                if (Catalog.getCurrentGlobalTransactionMgr().getTransactionState(transactionId) != null) {
+                    cancelJob(deleteJob, CancelType.UNKNOWN, t.getMessage());
+                }
+                throw new DdlException(t.getMessage(), t);
+            } finally {
+                db.readUnlock();
+            }
+
+            long timeoutMs = deleteJob.getTimeoutMs();
+            LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+            boolean ok = false;
+            try {
+                ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+            } catch (InterruptedException e) {
+                LOG.warn("InterruptedException: ", e);
+                ok = false;
+            }
+
+            if (!ok) {
+                try {
+                    deleteJob.checkAndUpdateQuorum();
+                } catch (MetaNotFoundException e) {
+                    cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                    throw new DdlException(e.getMessage(), e);
+                }
+                DeleteState state = deleteJob.getState();
+                switch (state) {
+                    case UN_QUORUM:
+                        List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                        // only show at most 5 results
+                        List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                        String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                        LOG.warn("delete job timeout: transactionId {}, {}", transactionId, errMsg);
+                        cancelJob(deleteJob, CancelType.TIMEOUT, "delete job timeout");
+                        throw new DdlException("failed to delete replicas from job: " + transactionId + ", " + errMsg);
+                    case QUORUM_FINISHED:
+                    case FINISHED:
+                        try {
+                            long nowQuorumTimeMs = System.currentTimeMillis();
+                            long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                            // if job's state is quorum_finished then wait for a period of time and commit it.
+                            while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                                deleteJob.checkAndUpdateQuorum();
+                                Thread.sleep(1000);
+                                nowQuorumTimeMs = System.currentTimeMillis();
+                            }
+                        } catch (MetaNotFoundException e) {
+                            cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        } catch (InterruptedException e) {
+                            cancelJob(deleteJob, CancelType.UNKNOWN, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        }
+                        commitJob(deleteJob, db, timeoutMs);
+                        break;
+                    default:
+                        Preconditions.checkState(false, "wrong delete job state: " + state.name());
+                        break;
+                }
+            } else {
+                commitJob(deleteJob, db, timeoutMs);
+            }
+        } finally {
+            if (!FeConstants.runningUnitTest) {
+                clearJob(deleteJob);
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeoutMs) throws DdlException {
+        TransactionStatus status = null;
+        try {
+            unprotectedCommitJob(job, db, timeoutMs);
+            status = Catalog.getCurrentGlobalTransactionMgr().
+                    getTransactionState(job.getTransactionId()).getTransactionStatus();
+        } catch (UserException e) {
+            cancelJob(job, CancelType.COMMIT_FAIL, e.getMessage());
+            throw new DdlException(e.getMessage(), e);
+        }
+
+        switch (status) {
+            case COMMITTED:
+                // Although publish is unfinished we should tell user that commit already success.
+                throw new DdlException("delete job is committed but may be taking effect later, transactionId: " + job.getTransactionId());
+            case VISIBLE:
+                break;
+            default:
+                Preconditions.checkState(false, "wrong transaction status: " + status.name());
+                break;
+        }
+    }
+
+    /**
+     * unprotected commit delete job
+     * return true when successfully commit and publish
+     * return false when successfully commit but publish unfinished.
+     * A UserException thrown if both commit and publish failed.
+     * @param job
+     * @param db
+     * @param timeoutMs
+     * @return
+     * @throws UserException
+     */
+    private boolean unprotectedCommitJob(DeleteJob job, Database db, long timeoutMs) throws UserException {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+        for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+            for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                // the inverted index contains rolling up replica
+                Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                if (tabletId == null) {
+                    LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                    continue;
+                }
+                tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+            }
+        }
+        return globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeoutMs);
+    }
+
+    /**
+     * This method should always be called in the end of the delete process to clean the job.
+     * Better put it in finally block.
+     * @param job
+     */
+    private void clearJob(DeleteJob job) {
+        if (job != null) {
+            long signature = job.getTransactionId();
+            if (idToDeleteJob.containsKey(signature)) {
+                idToDeleteJob.remove(signature);
+            }
+            for (PushTask pushTask : job.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().removeCallback(job.getId());
+        }
+    }
+
+    public void recordFinishedJob(DeleteJob job) {
+        if (job != null) {
+            long dbId = job.getDeleteInfo().getDbId();
+            LOG.info("record finished deleteJob, transactionId {}, dbId {}",
+                    job.getTransactionId(), dbId);
+            List<DeleteInfo> deleteInfoList = dbToDeleteInfos.get(dbId);
+            if (deleteInfoList == null) {
+                deleteInfoList = Lists.newArrayList();
+                dbToDeleteInfos.put(dbId, deleteInfoList);
+            }
+            deleteInfoList.add(job.getDeleteInfo());
+        }
+    }
+
+    public boolean cancelJob(DeleteJob job, CancelType cancelType, String reason) {
+        LOG.info("start to cancel delete job, transactionId: {}, cancelType: {}", job.getTransactionId(), cancelType.name());
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        try {
+            if (job != null) {
+                globalTransactionMgr.abortTransaction(job.getTransactionId(), reason);
+            }
+        } catch (Exception e) {
+            TransactionState state = globalTransactionMgr.getTransactionState(job.getTransactionId());
+            if (state == null) {
+                LOG.warn("cancel delete job failed because txn not found, transactionId: {}", job.getTransactionId());
+            } else if (state.getTransactionStatus() == TransactionStatus.COMMITTED || state.getTransactionStatus() == TransactionStatus.VISIBLE) {
+                LOG.warn("cancel delete job {} failed because it has been committed, transactionId: {}", job.getTransactionId());
+            } else {
+                LOG.warn("errors while abort transaction", e);
+            }
+            return false;
+        }
+        return true;
+    }
+
+    public DeleteJob getDeleteJob(long transactionId) {
+        return idToDeleteJob.get(transactionId);
+    }
+
+    private void checkDeleteV2(OlapTable table, Partition partition, List<Predicate> conditions, List<String> deleteConditions, boolean preCheck)
+            throws DdlException {
+
+        // check partition state
+        Partition.PartitionState state = partition.getState();
+        if (state != Partition.PartitionState.NORMAL) {
+            // ErrorReport.reportDdlException(ErrorCode.ERR_BAD_PARTITION_STATE, partition.getName(), state.name());
+            throw new DdlException("Partition[" + partition.getName() + "]' state is not NORMAL: " + state.name());
+        }
+
+        // check condition column is key column and condition value
+        Map<String, Column> nameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+        for (Column column : table.getBaseSchema()) {
+            nameToColumn.put(column.getName(), column);
+        }
+        for (Predicate condition : conditions) {
+            SlotRef slotRef = null;
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                slotRef = (SlotRef) binaryPredicate.getChild(0);
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                slotRef = (SlotRef) isNullPredicate.getChild(0);
+            }
+            String columnName = slotRef.getColumnName();
+            if (!nameToColumn.containsKey(columnName)) {
+                ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, table.getName());
+            }
+
+            Column column = nameToColumn.get(columnName);
+            if (!column.isKey()) {
+                // ErrorReport.reportDdlException(ErrorCode.ERR_NOT_KEY_COLUMN, columnName);
+                throw new DdlException("Column[" + columnName + "] is not key column");
+            }
+
+            if (condition instanceof BinaryPredicate) {
+                String value = null;
+                try {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    value = ((LiteralExpr) binaryPredicate.getChild(1)).getStringValue();
+                    LiteralExpr.create(value, Type.fromPrimitiveType(column.getDataType()));
+                } catch (AnalysisException e) {
+                    // ErrorReport.reportDdlException(ErrorCode.ERR_INVALID_VALUE, value);
+                    throw new DdlException("Invalid column value[" + value + "]");
+                }
+            }
+
+            // set schema column name
+            slotRef.setCol(column.getName());
+        }
+        Map<Long, List<Column>> indexIdToSchema = table.getIndexIdToSchema();
+        for (MaterializedIndex index : partition.getMaterializedIndices(MaterializedIndex.IndexExtState.VISIBLE)) {
+            // check table has condition column
+            Map<String, Column> indexColNameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+            for (Column column : indexIdToSchema.get(index.getId())) {
+                indexColNameToColumn.put(column.getName(), column);
+            }
+            String indexName = table.getIndexNameById(index.getId());
+            for (Predicate condition : conditions) {
+                String columnName = null;
+                if (condition instanceof BinaryPredicate) {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    columnName = ((SlotRef) binaryPredicate.getChild(0)).getColumnName();
+                } else if (condition instanceof IsNullPredicate) {
+                    IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                    columnName = ((SlotRef) isNullPredicate.getChild(0)).getColumnName();
+                }
+                Column column = indexColNameToColumn.get(columnName);
+                if (column == null) {
+                    ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, indexName);
+                }
+
+                if (table.getKeysType() == KeysType.DUP_KEYS && !column.isKey()) {
+                    throw new DdlException("Column[" + columnName + "] is not key column in index[" + indexName + "]");
+                }
+            }
+        }
+
+        if (deleteConditions == null) {
+            return;
+        }
+
+        // save delete conditions
+        for (Predicate condition : conditions) {
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                SlotRef slotRef = (SlotRef) binaryPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName).append(" ").append(binaryPredicate.getOp().name()).append(" \"")
+                        .append(((LiteralExpr) binaryPredicate.getChild(1)).getStringValue()).append("\"");
+                deleteConditions.add(sb.toString());
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                SlotRef slotRef = (SlotRef) isNullPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName);
+                if (isNullPredicate.isNotNull()) {
+                    sb.append(" IS NOT NULL");
+                } else {
+                    sb.append(" IS NULL");
+                }
+                deleteConditions.add(sb.toString());
+            }
+        }
+    }
+
+    // show delete stmt
+    public List<List<Comparable>> getDeleteInfosByDb(long dbId, boolean forUser) {
+        LinkedList<List<Comparable>> infos = new LinkedList<List<Comparable>>();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            return infos;
+        }
+
+        String dbName = db.getFullName();
+        List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+        if (deleteInfos == null) {
+            return infos;
+        }
+
+        for (DeleteInfo deleteInfo : deleteInfos) {
+
+            if (!Catalog.getCurrentCatalog().getAuth().checkTblPriv(ConnectContext.get(), dbName,
+                    deleteInfo.getTableName(),
+                    PrivPredicate.LOAD)) {
+                continue;
+            }
+
+
+            List<Comparable> info = Lists.newArrayList();
+            if (!forUser) {
+                info.add(-1L);
+                info.add(deleteInfo.getTableId());
+            }
+            info.add(deleteInfo.getTableName());
+            if (!forUser) {
+                info.add(deleteInfo.getPartitionId());
+            }
+            info.add(deleteInfo.getPartitionName());
+
+            info.add(TimeUtils.longToTimeString(deleteInfo.getCreateTimeMs()));
+            String conds = Joiner.on(", ").join(deleteInfo.getDeleteConditions());
+            info.add(conds);
+
+            if (!forUser) {
+                info.add(deleteInfo.getPartitionVersion());
+                info.add(deleteInfo.getPartitionVersionHash());
+            }
+            // for loading state, should not display loading, show deleting instead
+//                if (loadJob.getState() == LoadJob.JobState.LOADING) {
+//                    info.add("DELETING");
+//                } else {
+//                    info.add(loadJob.getState().name());
+//                }
+            info.add("FINISHED");
+            infos.add(info);
+        }
+        // sort by createTimeMs
+        int sortIndex;
+        if (!forUser) {
+            sortIndex = 5;
+        } else {
+            sortIndex = 2;
+        }
+        ListComparator<List<Comparable>> comparator = new ListComparator<List<Comparable>>(sortIndex);
+        Collections.sort(infos, comparator);
+        return infos;
+    }
+
+    public void replayDelete(DeleteInfo deleteInfo, Catalog catalog) {
+        Database db = catalog.getDb(deleteInfo.getDbId());
+        db.writeLock();
 
 Review comment:
   removed it

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408101529
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,627 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.FeConstants;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    private enum CancelType {
+        METADATA_MISSING,
+        TIMEOUT,
+        COMMIT_FAIL,
+        UNKNOWN
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        try {
+            MarkedCountDownLatch<Long, Long> countDownLatch;
+            long transactionId = -1;
+            db.readLock();
+            try {
+                Table table = db.getTable(tableName);
+                if (table == null) {
+                    throw new DdlException("Table does not exist. name: " + tableName);
+                }
+
+                if (table.getType() != Table.TableType.OLAP) {
+                    throw new DdlException("Not olap type table. type: " + table.getType().name());
+                }
+                OlapTable olapTable = (OlapTable) table;
+
+                if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                    throw new DdlException("Table's state is not normal: " + tableName);
+                }
+
+                if (partitionName == null) {
+                    if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                        throw new DdlException("This is a range partitioned table."
+                                + " You should specify partition in delete stmt");
+                    } else {
+                        // this is a unpartitioned table, use table name as partition name
+                        partitionName = olapTable.getName();
+                    }
+                }
+
+                Partition partition = olapTable.getPartition(partitionName);
+                if (partition == null) {
+                    throw new DdlException("Partition does not exist. name: " + partitionName);
+                }
+
+                List<String> deleteConditions = Lists.newArrayList();
+
+                // pre check
+                checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+                // generate label
+                String label = "delete_" + UUID.randomUUID();
+                //generate jobId
+                long jobId = Catalog.getCurrentCatalog().getNextId();
+                // begin txn here and generate txn id
+                transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                        Lists.newArrayList(table.getId()), label, null, "FE: " + FrontendOptions.getLocalHostAddress(),
+                        TransactionState.LoadJobSourceType.FRONTEND, jobId, Config.stream_load_default_timeout_second);
+
+                DeleteInfo deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                        partition.getId(), partitionName,
+                        -1, 0, deleteConditions);
+                deleteJob = new DeleteJob(jobId, transactionId, deleteInfo);
+                idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+
+                Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+                // task sent to be
+                AgentBatchTask batchTask = new AgentBatchTask();
+                // count total replica num
+                int totalReplicaNum = 0;
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    for (Tablet tablet : index.getTablets()) {
+                        totalReplicaNum += tablet.getReplicas().size();
+                    }
+                }
+                countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    long indexId = index.getId();
+                    int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                    for (Tablet tablet : index.getTablets()) {
+                        long tabletId = tablet.getId();
+
+                        // set push type
+                        TPushType type = TPushType.DELETE;
+
+                        for (Replica replica : tablet.getReplicas()) {
+                            long replicaId = replica.getId();
+                            long backendId = replica.getBackendId();
+                            countDownLatch.addMark(backendId, tabletId);
+
+                            // create push task for each replica
+                            PushTask pushTask = new PushTask(null,
+                                    replica.getBackendId(), db.getId(), olapTable.getId(),
+                                    partition.getId(), indexId,
+                                    tabletId, replicaId, schemaHash,
+                                    -1, 0, "", -1, 0,
+                                    -1, type, conditions,
+                                    true, TPriority.NORMAL,
+                                    TTaskType.REALTIME_PUSH,
+                                    transactionId,
+                                    Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                            pushTask.setIsSchemaChanging(false);
+                            pushTask.setCountDownLatch(countDownLatch);
+
+                            if (AgentTaskQueue.addTask(pushTask)) {
+                                batchTask.addTask(pushTask);
+                                deleteJob.addPushTask(pushTask);
+                                deleteJob.addTablet(tabletId);
+                            }
+                        }
+                    }
+                }
+
+                // submit push tasks
+                if (batchTask.getTaskNum() > 0) {
+                    AgentTaskExecutor.submit(batchTask);
+                }
+
+            } catch (Throwable t) {
+                LOG.warn("error occurred during delete process", t);
+                // if transaction has been begun, need to abort it
+                if (Catalog.getCurrentGlobalTransactionMgr().getTransactionState(transactionId) != null) {
+                    cancelJob(deleteJob, CancelType.UNKNOWN, t.getMessage());
+                }
+                throw new DdlException(t.getMessage(), t);
+            } finally {
+                db.readUnlock();
+            }
+
+            long timeoutMs = deleteJob.getTimeoutMs();
+            LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+            boolean ok = false;
+            try {
+                ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+            } catch (InterruptedException e) {
+                LOG.warn("InterruptedException: ", e);
+                ok = false;
+            }
+
+            if (!ok) {
+                try {
+                    deleteJob.checkAndUpdateQuorum();
+                } catch (MetaNotFoundException e) {
+                    cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                    throw new DdlException(e.getMessage(), e);
+                }
+                DeleteState state = deleteJob.getState();
+                switch (state) {
+                    case UN_QUORUM:
+                        List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                        // only show at most 5 results
+                        List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                        String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                        LOG.warn("delete job timeout: transactionId {}, {}", transactionId, errMsg);
+                        cancelJob(deleteJob, CancelType.TIMEOUT, "delete job timeout");
+                        throw new DdlException("failed to delete replicas from job: " + transactionId + ", " + errMsg);
+                    case QUORUM_FINISHED:
+                    case FINISHED:
+                        try {
+                            long nowQuorumTimeMs = System.currentTimeMillis();
+                            long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                            // if job's state is quorum_finished then wait for a period of time and commit it.
+                            while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                                deleteJob.checkAndUpdateQuorum();
+                                Thread.sleep(1000);
+                                nowQuorumTimeMs = System.currentTimeMillis();
+                            }
+                        } catch (MetaNotFoundException e) {
+                            cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        } catch (InterruptedException e) {
+                            cancelJob(deleteJob, CancelType.UNKNOWN, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        }
+                        commitJob(deleteJob, db, timeoutMs);
+                        break;
+                    default:
+                        Preconditions.checkState(false, "wrong delete job state: " + state.name());
+                        break;
+                }
+            } else {
+                commitJob(deleteJob, db, timeoutMs);
+            }
+        } finally {
+            if (!FeConstants.runningUnitTest) {
+                clearJob(deleteJob);
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeoutMs) throws DdlException {
+        TransactionStatus status = null;
+        try {
+            unprotectedCommitJob(job, db, timeoutMs);
+            status = Catalog.getCurrentGlobalTransactionMgr().
+                    getTransactionState(job.getTransactionId()).getTransactionStatus();
+        } catch (UserException e) {
+            cancelJob(job, CancelType.COMMIT_FAIL, e.getMessage());
+            throw new DdlException(e.getMessage(), e);
+        }
+
+        switch (status) {
+            case COMMITTED:
+                // Although publish is unfinished we should tell user that commit already success.
+                throw new DdlException("delete job is committed but may be taking effect later, transactionId: " + job.getTransactionId());
+            case VISIBLE:
+                break;
+            default:
+                Preconditions.checkState(false, "wrong transaction status: " + status.name());
+                break;
+        }
+    }
+
+    /**
+     * unprotected commit delete job
+     * return true when successfully commit and publish
+     * return false when successfully commit but publish unfinished.
+     * A UserException thrown if both commit and publish failed.
+     * @param job
+     * @param db
+     * @param timeoutMs
+     * @return
+     * @throws UserException
+     */
+    private boolean unprotectedCommitJob(DeleteJob job, Database db, long timeoutMs) throws UserException {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+        for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+            for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                // the inverted index contains rolling up replica
+                Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                if (tabletId == null) {
+                    LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                    continue;
+                }
+                tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+            }
+        }
+        return globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeoutMs);
+    }
+
+    /**
+     * This method should always be called in the end of the delete process to clean the job.
+     * Better put it in finally block.
+     * @param job
+     */
+    private void clearJob(DeleteJob job) {
+        if (job != null) {
+            long signature = job.getTransactionId();
+            if (idToDeleteJob.containsKey(signature)) {
+                idToDeleteJob.remove(signature);
+            }
+            for (PushTask pushTask : job.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().removeCallback(job.getId());
+        }
+    }
+
+    public void recordFinishedJob(DeleteJob job) {
+        if (job != null) {
+            long dbId = job.getDeleteInfo().getDbId();
+            LOG.info("record finished deleteJob, transactionId {}, dbId {}",
+                    job.getTransactionId(), dbId);
+            List<DeleteInfo> deleteInfoList = dbToDeleteInfos.get(dbId);
+            if (deleteInfoList == null) {
+                deleteInfoList = Lists.newArrayList();
+                dbToDeleteInfos.put(dbId, deleteInfoList);
+            }
+            deleteInfoList.add(job.getDeleteInfo());
+        }
+    }
+
+    public boolean cancelJob(DeleteJob job, CancelType cancelType, String reason) {
 
 Review comment:
   This method return boolean, but you never use it.
   I think return true means cancel succeed(txn failed), and return false means cancel failed(txn succeed).
   And the caller should use this return value to decide whether to return user success or failure.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406809237
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
 
 Review comment:
   done

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406809075
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
 
 Review comment:
   done

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r406810458
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
 
 Review comment:
   fixed

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408082066
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/persist/EditLog.java
 ##########
 @@ -378,8 +379,8 @@ public static void loadJournal(Catalog catalog, JournalEntity journal) {
                     break;
                 case OperationType.OP_FINISH_SYNC_DELETE: {
                     DeleteInfo info = (DeleteInfo) journal.getData();
-                    Load load = catalog.getLoadInstance();
-                    load.replayDelete(info, catalog);
+                    DeleteHandler deleteHandler = catalog.getDeleteHandler();
 
 Review comment:
   You can not use the origin `OP_FINISH_SYNC_DELETE`, because they are different operations.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408763091
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
 
 Review comment:
   added a new try finally block surround outside this logic

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r405606913
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,684 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteTask;
+import org.apache.doris.task.MasterTaskExecutor;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteTask
+    private Map<Long, DeleteTask> idToDeleteTask;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    private MasterTaskExecutor executor;
+
+    private BlockingQueue<DeleteTask> queue;
+
+    private DeleteTaskChecker checker;
+
+    private ReentrantReadWriteLock lock;
+
+    public void readLock() {
+        lock.readLock().lock();
+    }
+
+    public void readUnlock() {
+        lock.readLock().unlock();
+    }
+
+    private void writeLock() {
+        lock.writeLock().lock();
+    }
+
+    private void writeUnlock() {
+        lock.writeLock().unlock();
+    }
+
+
+    public DeleteHandler() {
+        idToDeleteTask = Maps.newHashMap();
+        dbToDeleteInfos = Maps.newHashMap();
+        executor = new MasterTaskExecutor(Config.delete_thread_num);
+        queue = new LinkedBlockingQueue(Config.delete_thread_num);
+        lock = new ReentrantReadWriteLock(true);
+        // start checker
+        checker = new DeleteTaskChecker(queue);
+        checker.start();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteTask deleteTask = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                    .getTransactionState(transactionId);
+            if (state == null) {
+                throw new DdlException("begin transaction failed, cancel delete");
+            }
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+
+            // task in fe
+            deleteTask = new DeleteTask(transactionId, deleteInfo);
+
+            writeLock();
+            try {
+                idToDeleteTask.put(transactionId, deleteTask);
+            } finally {
+                writeUnlock();
+            }
+
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    Set<Long> allReplicas = new HashSet<Long>();
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        allReplicas.add(replicaId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteTask.addTablet(tabletId);
+                            deleteTask.addPushTask(pushTask);
+                        }
+
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+                queue.put(deleteTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeout = deleteTask.getTimeout();
+        LOG.info("waiting delete task finish, signature: {}, timeout: {}", transactionId, timeout);
+        // wait until delete task finish or timeout
+        deleteTask.join(timeout);
+        if (deleteTask.isQuorum()) {
+            commitTask(deleteTask, db);
+        } else {
+            boolean isSuccess = cancelTask(deleteTask, "delete task timeout");
+            if (isSuccess) {
+                throw new DdlException("timeout when waiting delete");
+            }
+        }
+
+        // wait until transaction state become visible
+        afterCommit(deleteTask, db, timeout);
+    }
+
+    private void afterCommit(DeleteTask deleteTask, Database db, long leftTime) throws DdlException {
+        try {
+            long startDeleteTime = System.currentTimeMillis();
+            long transactionId = deleteTask.getSignature();
+            while (true) {
+                db.writeLock();
+                try {
+                    // check if the job is aborted in transaction manager
+                    TransactionState state = Catalog.getCurrentGlobalTransactionMgr()
+                            .getTransactionState(transactionId);
+                    if (state == null) {
+                        LOG.warn("cancel delete, transactionId {},  because could not find transaction state", transactionId);
+                        cancelTask(deleteTask,"transaction state lost");
+                        return;
+                    }
+                    TransactionStatus status = state.getTransactionStatus();
+                    switch (status) {
+                        case ABORTED:
+                            cancelTask(deleteTask,"delete transaction is aborted in transaction manager [" + state + "]");
+                            return;
+                        case COMMITTED:
+                            LOG.debug("delete task is already committed, just wait it to be visible, transactionId {}, transaction state {}", transactionId, state);
+                            return;
+                        case VISIBLE:
+                            LOG.debug("delete committed, transactionId: {}, transaction state {}", transactionId, state);
+                            removeTask(deleteTask);
+                            return;
+                    }
+                    if (leftTime < System.currentTimeMillis() - startDeleteTime) {
+                        cancelTask(deleteTask, "delete timeout when waiting transaction commit");
+                    }
+                } finally {
+                    db.writeUnlock();
+                }
+                Thread.sleep(1000);
+            }
+        } catch (Exception e) {
+            String failMsg = "delete unknown, " + e.getMessage();
+            LOG.warn(failMsg, e);
+            throw new DdlException(failMsg);
+        }
+    }
+
+    public class DeleteTaskChecker extends Thread {
+        private BlockingQueue<DeleteTask> queue;
+
+        public DeleteTaskChecker(BlockingQueue<DeleteTask> queue) {
+            this.queue = queue;
+        }
+
+        @Override
+        public void run() {
+            LOG.info("delete task checker start");
+            try {
+                loop();
+            } finally {
+                synchronized(queue) {
+                    queue.clear();
+                }
+            }
+        }
+
+        public void loop() {
+            while (true) {
+                try {
+                    DeleteTask task = queue.take();
+                    while (!task.isQuorum()) {
+                        long signature = task.getSignature();
+                        if (task.isCancel()) {
+                            break;
+                        }
+                        if (!executor.submit(task)) {
+                            Thread.sleep(1000);
+                            continue;
+                        }
+                        // re add to the tail
+                        queue.put(task);
+                    }
+                    // remove task isQuorum or isCanceled
+                    removeTask(task);
+                } catch (InterruptedException e) {
+                    // do nothing
+                }
+            }
+        }
+    }
+
+    private void commitTask(DeleteTask task, Database db) {
+        long transactionId = task.getSignature();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        TransactionState transactionState = globalTransactionMgr.getTransactionState(transactionId);
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        db.writeLock();
+        try {
+            TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+            for (TabletDeleteInfo tDeleteInfo : task.getTabletDeleteInfo()) {
+                for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                    // the inverted index contains rolling up replica
+                    Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                    if (tabletId == null) {
+                        LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                        continue;
+                    }
+                    tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+                }
+            }
+            globalTransactionMgr.commitTransaction(db.getId(), transactionId, tabletCommitInfos);
 
 Review comment:
   use `commitAndPublishTransaction()` instead, and then `afterCommit()` can be removed.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman merged pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
morningman merged pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191
 
 
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast

Posted by GitBox <gi...@apache.org>.
xy720 commented on a change in pull request #3191: [Optimize][Delete] Simplify the delete process to make it fast
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408762059
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,185 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    // jobId(listenerId). use in beginTransaction to callback function
+    private long id;
+    // transaction id.
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long id, long transactionId, DeleteInfo deleteInfo) {
+        this.id = id;
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    /**
+     * check and update if this job's state is QUORUM_FINISHED or FINISHED
+     * The meaning of state:
+     * QUORUM_FINISHED: For each tablet there are more than half of its replicas have been finished
+     * FINISHED: All replicas of this jobs have finished
+     */
+    public void checkAndUpdateQuorum() throws MetaNotFoundException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            throw new MetaNotFoundException("can not find database "+ dbId +" when commit delete");
+        }
+
+        short replicaNum = -1;
+        db.readLock();
+        try {
+            OlapTable table = (OlapTable) db.getTable(tableId);
+            if (table == null) {
+                throw new MetaNotFoundException("can not find table "+ tableId +" when commit delete");
+            }
+            replicaNum = table.getPartitionInfo().getReplicationNum(partitionId);
+        } finally {
+            db.readUnlock();
+        }
+
+        short quorumNum = (short) (replicaNum / 2 + 1);
+        for (TabletDeleteInfo tDeleteInfo : getTabletDeleteInfo()) {
+            if (tDeleteInfo.getFinishedReplicas().size() == replicaNum) {
+                finishedTablets.add(tDeleteInfo.getTabletId());
+            }
+            if (tDeleteInfo.getFinishedReplicas().size() >= quorumNum) {
+                quorumTablets.add(tDeleteInfo.getTabletId());
+            }
+        }
+        LOG.info("check delete job quorum, transaction id: {}, total tablets: {}, quorum tablets: {},",
+                signature, totalTablets.size(), quorumTablets.size());
+
+        if (finishedTablets.containsAll(totalTablets)) {
+            setState(DeleteState.FINISHED);
+        } else if (quorumTablets.containsAll(totalTablets)) {
+            setState(DeleteState.QUORUM_FINISHED);
+        }
+    }
+
+    public void setState(DeleteState state) {
+        this.state = state;
+    }
+
+    public DeleteState getState() {
+        return this.state;
+    }
+
+    public boolean addTablet(long tabletId) {
+        return totalTablets.add(tabletId);
+    }
+
+    public boolean addPushTask(PushTask pushTask) {
+        return pushTasks.add(pushTask);
+    }
+
+    public boolean addFinishedReplica(long tabletId, Replica replica) {
+        TabletDeleteInfo tDeleteInfo = tabletDeleteInfoMap.get(tabletId);
 
 Review comment:
   done

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408105126
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,627 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.FeConstants;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    private enum CancelType {
+        METADATA_MISSING,
+        TIMEOUT,
+        COMMIT_FAIL,
+        UNKNOWN
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        try {
+            MarkedCountDownLatch<Long, Long> countDownLatch;
+            long transactionId = -1;
+            db.readLock();
+            try {
+                Table table = db.getTable(tableName);
+                if (table == null) {
+                    throw new DdlException("Table does not exist. name: " + tableName);
+                }
+
+                if (table.getType() != Table.TableType.OLAP) {
+                    throw new DdlException("Not olap type table. type: " + table.getType().name());
+                }
+                OlapTable olapTable = (OlapTable) table;
+
+                if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                    throw new DdlException("Table's state is not normal: " + tableName);
+                }
+
+                if (partitionName == null) {
+                    if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                        throw new DdlException("This is a range partitioned table."
+                                + " You should specify partition in delete stmt");
+                    } else {
+                        // this is a unpartitioned table, use table name as partition name
+                        partitionName = olapTable.getName();
+                    }
+                }
+
+                Partition partition = olapTable.getPartition(partitionName);
+                if (partition == null) {
+                    throw new DdlException("Partition does not exist. name: " + partitionName);
+                }
+
+                List<String> deleteConditions = Lists.newArrayList();
+
+                // pre check
+                checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+                // generate label
+                String label = "delete_" + UUID.randomUUID();
+                //generate jobId
+                long jobId = Catalog.getCurrentCatalog().getNextId();
+                // begin txn here and generate txn id
+                transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                        Lists.newArrayList(table.getId()), label, null, "FE: " + FrontendOptions.getLocalHostAddress(),
+                        TransactionState.LoadJobSourceType.FRONTEND, jobId, Config.stream_load_default_timeout_second);
+
+                DeleteInfo deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                        partition.getId(), partitionName,
+                        -1, 0, deleteConditions);
+                deleteJob = new DeleteJob(jobId, transactionId, deleteInfo);
+                idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+
+                Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+                // task sent to be
+                AgentBatchTask batchTask = new AgentBatchTask();
+                // count total replica num
+                int totalReplicaNum = 0;
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    for (Tablet tablet : index.getTablets()) {
+                        totalReplicaNum += tablet.getReplicas().size();
+                    }
+                }
+                countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    long indexId = index.getId();
+                    int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                    for (Tablet tablet : index.getTablets()) {
+                        long tabletId = tablet.getId();
+
+                        // set push type
+                        TPushType type = TPushType.DELETE;
+
+                        for (Replica replica : tablet.getReplicas()) {
+                            long replicaId = replica.getId();
+                            long backendId = replica.getBackendId();
+                            countDownLatch.addMark(backendId, tabletId);
+
+                            // create push task for each replica
+                            PushTask pushTask = new PushTask(null,
+                                    replica.getBackendId(), db.getId(), olapTable.getId(),
+                                    partition.getId(), indexId,
+                                    tabletId, replicaId, schemaHash,
+                                    -1, 0, "", -1, 0,
+                                    -1, type, conditions,
+                                    true, TPriority.NORMAL,
+                                    TTaskType.REALTIME_PUSH,
+                                    transactionId,
+                                    Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                            pushTask.setIsSchemaChanging(false);
+                            pushTask.setCountDownLatch(countDownLatch);
+
+                            if (AgentTaskQueue.addTask(pushTask)) {
+                                batchTask.addTask(pushTask);
+                                deleteJob.addPushTask(pushTask);
+                                deleteJob.addTablet(tabletId);
+                            }
+                        }
+                    }
+                }
+
+                // submit push tasks
+                if (batchTask.getTaskNum() > 0) {
+                    AgentTaskExecutor.submit(batchTask);
+                }
+
+            } catch (Throwable t) {
+                LOG.warn("error occurred during delete process", t);
+                // if transaction has been begun, need to abort it
+                if (Catalog.getCurrentGlobalTransactionMgr().getTransactionState(transactionId) != null) {
+                    cancelJob(deleteJob, CancelType.UNKNOWN, t.getMessage());
+                }
+                throw new DdlException(t.getMessage(), t);
+            } finally {
+                db.readUnlock();
+            }
+
+            long timeoutMs = deleteJob.getTimeoutMs();
+            LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+            boolean ok = false;
+            try {
+                ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+            } catch (InterruptedException e) {
+                LOG.warn("InterruptedException: ", e);
+                ok = false;
+            }
+
+            if (!ok) {
+                try {
+                    deleteJob.checkAndUpdateQuorum();
+                } catch (MetaNotFoundException e) {
+                    cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                    throw new DdlException(e.getMessage(), e);
+                }
+                DeleteState state = deleteJob.getState();
+                switch (state) {
+                    case UN_QUORUM:
+                        List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                        // only show at most 5 results
+                        List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                        String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                        LOG.warn("delete job timeout: transactionId {}, {}", transactionId, errMsg);
+                        cancelJob(deleteJob, CancelType.TIMEOUT, "delete job timeout");
+                        throw new DdlException("failed to delete replicas from job: " + transactionId + ", " + errMsg);
+                    case QUORUM_FINISHED:
+                    case FINISHED:
+                        try {
+                            long nowQuorumTimeMs = System.currentTimeMillis();
+                            long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                            // if job's state is quorum_finished then wait for a period of time and commit it.
+                            while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                                deleteJob.checkAndUpdateQuorum();
+                                Thread.sleep(1000);
+                                nowQuorumTimeMs = System.currentTimeMillis();
+                            }
+                        } catch (MetaNotFoundException e) {
+                            cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        } catch (InterruptedException e) {
+                            cancelJob(deleteJob, CancelType.UNKNOWN, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        }
+                        commitJob(deleteJob, db, timeoutMs);
+                        break;
+                    default:
+                        Preconditions.checkState(false, "wrong delete job state: " + state.name());
+                        break;
+                }
+            } else {
+                commitJob(deleteJob, db, timeoutMs);
+            }
+        } finally {
+            if (!FeConstants.runningUnitTest) {
+                clearJob(deleteJob);
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeoutMs) throws DdlException {
+        TransactionStatus status = null;
+        try {
+            unprotectedCommitJob(job, db, timeoutMs);
+            status = Catalog.getCurrentGlobalTransactionMgr().
+                    getTransactionState(job.getTransactionId()).getTransactionStatus();
+        } catch (UserException e) {
+            cancelJob(job, CancelType.COMMIT_FAIL, e.getMessage());
+            throw new DdlException(e.getMessage(), e);
+        }
+
+        switch (status) {
+            case COMMITTED:
+                // Although publish is unfinished we should tell user that commit already success.
+                throw new DdlException("delete job is committed but may be taking effect later, transactionId: " + job.getTransactionId());
+            case VISIBLE:
+                break;
+            default:
+                Preconditions.checkState(false, "wrong transaction status: " + status.name());
+                break;
+        }
+    }
+
+    /**
+     * unprotected commit delete job
+     * return true when successfully commit and publish
+     * return false when successfully commit but publish unfinished.
+     * A UserException thrown if both commit and publish failed.
+     * @param job
+     * @param db
+     * @param timeoutMs
+     * @return
+     * @throws UserException
+     */
+    private boolean unprotectedCommitJob(DeleteJob job, Database db, long timeoutMs) throws UserException {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+        for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+            for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                // the inverted index contains rolling up replica
+                Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                if (tabletId == null) {
+                    LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                    continue;
+                }
+                tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+            }
+        }
+        return globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeoutMs);
+    }
+
+    /**
+     * This method should always be called in the end of the delete process to clean the job.
+     * Better put it in finally block.
+     * @param job
+     */
+    private void clearJob(DeleteJob job) {
+        if (job != null) {
+            long signature = job.getTransactionId();
+            if (idToDeleteJob.containsKey(signature)) {
+                idToDeleteJob.remove(signature);
+            }
+            for (PushTask pushTask : job.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().removeCallback(job.getId());
+        }
+    }
+
+    public void recordFinishedJob(DeleteJob job) {
+        if (job != null) {
+            long dbId = job.getDeleteInfo().getDbId();
+            LOG.info("record finished deleteJob, transactionId {}, dbId {}",
+                    job.getTransactionId(), dbId);
+            List<DeleteInfo> deleteInfoList = dbToDeleteInfos.get(dbId);
+            if (deleteInfoList == null) {
+                deleteInfoList = Lists.newArrayList();
+                dbToDeleteInfos.put(dbId, deleteInfoList);
+            }
+            deleteInfoList.add(job.getDeleteInfo());
+        }
+    }
+
+    public boolean cancelJob(DeleteJob job, CancelType cancelType, String reason) {
+        LOG.info("start to cancel delete job, transactionId: {}, cancelType: {}", job.getTransactionId(), cancelType.name());
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        try {
+            if (job != null) {
+                globalTransactionMgr.abortTransaction(job.getTransactionId(), reason);
+            }
+        } catch (Exception e) {
+            TransactionState state = globalTransactionMgr.getTransactionState(job.getTransactionId());
+            if (state == null) {
+                LOG.warn("cancel delete job failed because txn not found, transactionId: {}", job.getTransactionId());
+            } else if (state.getTransactionStatus() == TransactionStatus.COMMITTED || state.getTransactionStatus() == TransactionStatus.VISIBLE) {
+                LOG.warn("cancel delete job {} failed because it has been committed, transactionId: {}", job.getTransactionId());
+            } else {
+                LOG.warn("errors while abort transaction", e);
+            }
+            return false;
+        }
+        return true;
+    }
+
+    public DeleteJob getDeleteJob(long transactionId) {
+        return idToDeleteJob.get(transactionId);
+    }
+
+    private void checkDeleteV2(OlapTable table, Partition partition, List<Predicate> conditions, List<String> deleteConditions, boolean preCheck)
+            throws DdlException {
+
+        // check partition state
+        Partition.PartitionState state = partition.getState();
+        if (state != Partition.PartitionState.NORMAL) {
+            // ErrorReport.reportDdlException(ErrorCode.ERR_BAD_PARTITION_STATE, partition.getName(), state.name());
+            throw new DdlException("Partition[" + partition.getName() + "]' state is not NORMAL: " + state.name());
+        }
+
+        // check condition column is key column and condition value
+        Map<String, Column> nameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+        for (Column column : table.getBaseSchema()) {
+            nameToColumn.put(column.getName(), column);
+        }
+        for (Predicate condition : conditions) {
+            SlotRef slotRef = null;
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                slotRef = (SlotRef) binaryPredicate.getChild(0);
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                slotRef = (SlotRef) isNullPredicate.getChild(0);
+            }
+            String columnName = slotRef.getColumnName();
+            if (!nameToColumn.containsKey(columnName)) {
+                ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, table.getName());
+            }
+
+            Column column = nameToColumn.get(columnName);
+            if (!column.isKey()) {
+                // ErrorReport.reportDdlException(ErrorCode.ERR_NOT_KEY_COLUMN, columnName);
+                throw new DdlException("Column[" + columnName + "] is not key column");
+            }
+
+            if (condition instanceof BinaryPredicate) {
+                String value = null;
+                try {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    value = ((LiteralExpr) binaryPredicate.getChild(1)).getStringValue();
+                    LiteralExpr.create(value, Type.fromPrimitiveType(column.getDataType()));
+                } catch (AnalysisException e) {
+                    // ErrorReport.reportDdlException(ErrorCode.ERR_INVALID_VALUE, value);
+                    throw new DdlException("Invalid column value[" + value + "]");
+                }
+            }
+
+            // set schema column name
+            slotRef.setCol(column.getName());
+        }
+        Map<Long, List<Column>> indexIdToSchema = table.getIndexIdToSchema();
+        for (MaterializedIndex index : partition.getMaterializedIndices(MaterializedIndex.IndexExtState.VISIBLE)) {
+            // check table has condition column
+            Map<String, Column> indexColNameToColumn = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+            for (Column column : indexIdToSchema.get(index.getId())) {
+                indexColNameToColumn.put(column.getName(), column);
+            }
+            String indexName = table.getIndexNameById(index.getId());
+            for (Predicate condition : conditions) {
+                String columnName = null;
+                if (condition instanceof BinaryPredicate) {
+                    BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                    columnName = ((SlotRef) binaryPredicate.getChild(0)).getColumnName();
+                } else if (condition instanceof IsNullPredicate) {
+                    IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                    columnName = ((SlotRef) isNullPredicate.getChild(0)).getColumnName();
+                }
+                Column column = indexColNameToColumn.get(columnName);
+                if (column == null) {
+                    ErrorReport.reportDdlException(ErrorCode.ERR_BAD_FIELD_ERROR, columnName, indexName);
+                }
+
+                if (table.getKeysType() == KeysType.DUP_KEYS && !column.isKey()) {
+                    throw new DdlException("Column[" + columnName + "] is not key column in index[" + indexName + "]");
+                }
+            }
+        }
+
+        if (deleteConditions == null) {
+            return;
+        }
+
+        // save delete conditions
+        for (Predicate condition : conditions) {
+            if (condition instanceof BinaryPredicate) {
+                BinaryPredicate binaryPredicate = (BinaryPredicate) condition;
+                SlotRef slotRef = (SlotRef) binaryPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName).append(" ").append(binaryPredicate.getOp().name()).append(" \"")
+                        .append(((LiteralExpr) binaryPredicate.getChild(1)).getStringValue()).append("\"");
+                deleteConditions.add(sb.toString());
+            } else if (condition instanceof IsNullPredicate) {
+                IsNullPredicate isNullPredicate = (IsNullPredicate) condition;
+                SlotRef slotRef = (SlotRef) isNullPredicate.getChild(0);
+                String columnName = slotRef.getColumnName();
+                StringBuilder sb = new StringBuilder();
+                sb.append(columnName);
+                if (isNullPredicate.isNotNull()) {
+                    sb.append(" IS NOT NULL");
+                } else {
+                    sb.append(" IS NULL");
+                }
+                deleteConditions.add(sb.toString());
+            }
+        }
+    }
+
+    // show delete stmt
+    public List<List<Comparable>> getDeleteInfosByDb(long dbId, boolean forUser) {
+        LinkedList<List<Comparable>> infos = new LinkedList<List<Comparable>>();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            return infos;
+        }
+
+        String dbName = db.getFullName();
+        List<DeleteInfo> deleteInfos = dbToDeleteInfos.get(dbId);
+        if (deleteInfos == null) {
+            return infos;
+        }
+
+        for (DeleteInfo deleteInfo : deleteInfos) {
+
+            if (!Catalog.getCurrentCatalog().getAuth().checkTblPriv(ConnectContext.get(), dbName,
+                    deleteInfo.getTableName(),
+                    PrivPredicate.LOAD)) {
+                continue;
+            }
+
+
+            List<Comparable> info = Lists.newArrayList();
+            if (!forUser) {
+                info.add(-1L);
+                info.add(deleteInfo.getTableId());
+            }
+            info.add(deleteInfo.getTableName());
+            if (!forUser) {
+                info.add(deleteInfo.getPartitionId());
+            }
+            info.add(deleteInfo.getPartitionName());
+
+            info.add(TimeUtils.longToTimeString(deleteInfo.getCreateTimeMs()));
+            String conds = Joiner.on(", ").join(deleteInfo.getDeleteConditions());
+            info.add(conds);
+
+            if (!forUser) {
+                info.add(deleteInfo.getPartitionVersion());
+                info.add(deleteInfo.getPartitionVersionHash());
+            }
+            // for loading state, should not display loading, show deleting instead
+//                if (loadJob.getState() == LoadJob.JobState.LOADING) {
+//                    info.add("DELETING");
+//                } else {
+//                    info.add(loadJob.getState().name());
+//                }
+            info.add("FINISHED");
+            infos.add(info);
+        }
+        // sort by createTimeMs
+        int sortIndex;
+        if (!forUser) {
+            sortIndex = 5;
+        } else {
+            sortIndex = 2;
+        }
+        ListComparator<List<Comparable>> comparator = new ListComparator<List<Comparable>>(sortIndex);
+        Collections.sort(infos, comparator);
+        return infos;
+    }
+
+    public void replayDelete(DeleteInfo deleteInfo, Catalog catalog) {
+        Database db = catalog.getDb(deleteInfo.getDbId());
+        db.writeLock();
 
 Review comment:
   No need db lock here.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r408104284
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,627 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.FeConstants;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.doris.transaction.TransactionStatus;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    private enum CancelType {
+        METADATA_MISSING,
+        TIMEOUT,
+        COMMIT_FAIL,
+        UNKNOWN
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        try {
+            MarkedCountDownLatch<Long, Long> countDownLatch;
+            long transactionId = -1;
+            db.readLock();
+            try {
+                Table table = db.getTable(tableName);
+                if (table == null) {
+                    throw new DdlException("Table does not exist. name: " + tableName);
+                }
+
+                if (table.getType() != Table.TableType.OLAP) {
+                    throw new DdlException("Not olap type table. type: " + table.getType().name());
+                }
+                OlapTable olapTable = (OlapTable) table;
+
+                if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                    throw new DdlException("Table's state is not normal: " + tableName);
+                }
+
+                if (partitionName == null) {
+                    if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                        throw new DdlException("This is a range partitioned table."
+                                + " You should specify partition in delete stmt");
+                    } else {
+                        // this is a unpartitioned table, use table name as partition name
+                        partitionName = olapTable.getName();
+                    }
+                }
+
+                Partition partition = olapTable.getPartition(partitionName);
+                if (partition == null) {
+                    throw new DdlException("Partition does not exist. name: " + partitionName);
+                }
+
+                List<String> deleteConditions = Lists.newArrayList();
+
+                // pre check
+                checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+                // generate label
+                String label = "delete_" + UUID.randomUUID();
+                //generate jobId
+                long jobId = Catalog.getCurrentCatalog().getNextId();
+                // begin txn here and generate txn id
+                transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                        Lists.newArrayList(table.getId()), label, null, "FE: " + FrontendOptions.getLocalHostAddress(),
+                        TransactionState.LoadJobSourceType.FRONTEND, jobId, Config.stream_load_default_timeout_second);
+
+                DeleteInfo deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                        partition.getId(), partitionName,
+                        -1, 0, deleteConditions);
+                deleteJob = new DeleteJob(jobId, transactionId, deleteInfo);
+                idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+
+                Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+                // task sent to be
+                AgentBatchTask batchTask = new AgentBatchTask();
+                // count total replica num
+                int totalReplicaNum = 0;
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    for (Tablet tablet : index.getTablets()) {
+                        totalReplicaNum += tablet.getReplicas().size();
+                    }
+                }
+                countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+                for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                    long indexId = index.getId();
+                    int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                    for (Tablet tablet : index.getTablets()) {
+                        long tabletId = tablet.getId();
+
+                        // set push type
+                        TPushType type = TPushType.DELETE;
+
+                        for (Replica replica : tablet.getReplicas()) {
+                            long replicaId = replica.getId();
+                            long backendId = replica.getBackendId();
+                            countDownLatch.addMark(backendId, tabletId);
+
+                            // create push task for each replica
+                            PushTask pushTask = new PushTask(null,
+                                    replica.getBackendId(), db.getId(), olapTable.getId(),
+                                    partition.getId(), indexId,
+                                    tabletId, replicaId, schemaHash,
+                                    -1, 0, "", -1, 0,
+                                    -1, type, conditions,
+                                    true, TPriority.NORMAL,
+                                    TTaskType.REALTIME_PUSH,
+                                    transactionId,
+                                    Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                            pushTask.setIsSchemaChanging(false);
+                            pushTask.setCountDownLatch(countDownLatch);
+
+                            if (AgentTaskQueue.addTask(pushTask)) {
+                                batchTask.addTask(pushTask);
+                                deleteJob.addPushTask(pushTask);
+                                deleteJob.addTablet(tabletId);
+                            }
+                        }
+                    }
+                }
+
+                // submit push tasks
+                if (batchTask.getTaskNum() > 0) {
+                    AgentTaskExecutor.submit(batchTask);
+                }
+
+            } catch (Throwable t) {
+                LOG.warn("error occurred during delete process", t);
+                // if transaction has been begun, need to abort it
+                if (Catalog.getCurrentGlobalTransactionMgr().getTransactionState(transactionId) != null) {
+                    cancelJob(deleteJob, CancelType.UNKNOWN, t.getMessage());
+                }
+                throw new DdlException(t.getMessage(), t);
+            } finally {
+                db.readUnlock();
+            }
+
+            long timeoutMs = deleteJob.getTimeoutMs();
+            LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+            boolean ok = false;
+            try {
+                ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+            } catch (InterruptedException e) {
+                LOG.warn("InterruptedException: ", e);
+                ok = false;
+            }
+
+            if (!ok) {
+                try {
+                    deleteJob.checkAndUpdateQuorum();
+                } catch (MetaNotFoundException e) {
+                    cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                    throw new DdlException(e.getMessage(), e);
+                }
+                DeleteState state = deleteJob.getState();
+                switch (state) {
+                    case UN_QUORUM:
+                        List<Entry<Long, Long>> unfinishedMarks = countDownLatch.getLeftMarks();
+                        // only show at most 5 results
+                        List<Entry<Long, Long>> subList = unfinishedMarks.subList(0, Math.min(unfinishedMarks.size(), 5));
+                        String errMsg = "Unfinished replicas:" + Joiner.on(", ").join(subList);
+                        LOG.warn("delete job timeout: transactionId {}, {}", transactionId, errMsg);
+                        cancelJob(deleteJob, CancelType.TIMEOUT, "delete job timeout");
+                        throw new DdlException("failed to delete replicas from job: " + transactionId + ", " + errMsg);
+                    case QUORUM_FINISHED:
+                    case FINISHED:
+                        try {
+                            long nowQuorumTimeMs = System.currentTimeMillis();
+                            long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                            // if job's state is quorum_finished then wait for a period of time and commit it.
+                            while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                                deleteJob.checkAndUpdateQuorum();
+                                Thread.sleep(1000);
+                                nowQuorumTimeMs = System.currentTimeMillis();
+                            }
+                        } catch (MetaNotFoundException e) {
+                            cancelJob(deleteJob, CancelType.METADATA_MISSING, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        } catch (InterruptedException e) {
+                            cancelJob(deleteJob, CancelType.UNKNOWN, e.getMessage());
+                            throw new DdlException(e.getMessage(), e);
+                        }
+                        commitJob(deleteJob, db, timeoutMs);
+                        break;
+                    default:
+                        Preconditions.checkState(false, "wrong delete job state: " + state.name());
+                        break;
+                }
+            } else {
+                commitJob(deleteJob, db, timeoutMs);
+            }
+        } finally {
+            if (!FeConstants.runningUnitTest) {
+                clearJob(deleteJob);
+            }
+        }
+    }
+
+    private void commitJob(DeleteJob job, Database db, long timeoutMs) throws DdlException {
+        TransactionStatus status = null;
+        try {
+            unprotectedCommitJob(job, db, timeoutMs);
+            status = Catalog.getCurrentGlobalTransactionMgr().
+                    getTransactionState(job.getTransactionId()).getTransactionStatus();
+        } catch (UserException e) {
+            cancelJob(job, CancelType.COMMIT_FAIL, e.getMessage());
+            throw new DdlException(e.getMessage(), e);
+        }
+
+        switch (status) {
+            case COMMITTED:
+                // Although publish is unfinished we should tell user that commit already success.
+                throw new DdlException("delete job is committed but may be taking effect later, transactionId: " + job.getTransactionId());
+            case VISIBLE:
+                break;
+            default:
+                Preconditions.checkState(false, "wrong transaction status: " + status.name());
+                break;
+        }
+    }
+
+    /**
+     * unprotected commit delete job
+     * return true when successfully commit and publish
+     * return false when successfully commit but publish unfinished.
+     * A UserException thrown if both commit and publish failed.
+     * @param job
+     * @param db
+     * @param timeoutMs
+     * @return
+     * @throws UserException
+     */
+    private boolean unprotectedCommitJob(DeleteJob job, Database db, long timeoutMs) throws UserException {
+        long transactionId = job.getTransactionId();
+        GlobalTransactionMgr globalTransactionMgr = Catalog.getCurrentGlobalTransactionMgr();
+        List<TabletCommitInfo> tabletCommitInfos = new ArrayList<TabletCommitInfo>();
+        TabletInvertedIndex invertedIndex = Catalog.getCurrentInvertedIndex();
+        for (TabletDeleteInfo tDeleteInfo : job.getTabletDeleteInfo()) {
+            for (Replica replica : tDeleteInfo.getFinishedReplicas()) {
+                // the inverted index contains rolling up replica
+                Long tabletId = invertedIndex.getTabletIdByReplica(replica.getId());
+                if (tabletId == null) {
+                    LOG.warn("could not find tablet id for replica {}, the tablet maybe dropped", replica);
+                    continue;
+                }
+                tabletCommitInfos.add(new TabletCommitInfo(tabletId, replica.getBackendId()));
+            }
+        }
+        return globalTransactionMgr.commitAndPublishTransaction(db, transactionId, tabletCommitInfos, timeoutMs);
+    }
+
+    /**
+     * This method should always be called in the end of the delete process to clean the job.
+     * Better put it in finally block.
+     * @param job
+     */
+    private void clearJob(DeleteJob job) {
+        if (job != null) {
+            long signature = job.getTransactionId();
+            if (idToDeleteJob.containsKey(signature)) {
+                idToDeleteJob.remove(signature);
+            }
+            for (PushTask pushTask : job.getPushTasks()) {
+                AgentTaskQueue.removePushTask(pushTask.getBackendId(), pushTask.getSignature(),
+                        pushTask.getVersion(), pushTask.getVersionHash(),
+                        pushTask.getPushType(), pushTask.getTaskType());
+            }
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().removeCallback(job.getId());
+        }
+    }
+
+    public void recordFinishedJob(DeleteJob job) {
+        if (job != null) {
+            long dbId = job.getDeleteInfo().getDbId();
+            LOG.info("record finished deleteJob, transactionId {}, dbId {}",
+                    job.getTransactionId(), dbId);
+            List<DeleteInfo> deleteInfoList = dbToDeleteInfos.get(dbId);
+            if (deleteInfoList == null) {
+                deleteInfoList = Lists.newArrayList();
+                dbToDeleteInfos.put(dbId, deleteInfoList);
+            }
+            deleteInfoList.add(job.getDeleteInfo());
+        }
+    }
+
+    public boolean cancelJob(DeleteJob job, CancelType cancelType, String reason) {
 
 Review comment:
   And if transaction is COMMITTED but not VISIBLE, you should return a transaction id to user, so that user can use that it to check transaction's state.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407065663
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
 
 Review comment:
   If you put deleteInfo in `idToDeleteJob`, you need to make sure that the `deleteInfo` will be cleaned finally, even if any exception is thrown.
   So I think you should clear the deleteInfo in `finally` block.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407065902
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/task/DeleteJob.java
 ##########
 @@ -0,0 +1,170 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.task;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.load.DeleteInfo;
+import org.apache.doris.load.TabletDeleteInfo;
+import org.apache.doris.transaction.AbstractTxnStateChangeCallback;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+public class DeleteJob extends AbstractTxnStateChangeCallback {
+    private static final Logger LOG = LogManager.getLogger(DeleteJob.class);
+
+    public enum DeleteState {
+        UN_QUORUM,
+        QUORUM_FINISHED,
+        FINISHED
+    }
+
+    private DeleteState state;
+
+    private long signature;
+    private Set<Long> totalTablets;
+    private Set<Long> quorumTablets;
+    private Set<Long> finishedTablets;
+    Map<Long, TabletDeleteInfo> tabletDeleteInfoMap;
+    private Set<PushTask> pushTasks;
+    private DeleteInfo deleteInfo;
+
+    public DeleteJob(long transactionId, DeleteInfo deleteInfo) {
+        this.signature = transactionId;
+        this.deleteInfo = deleteInfo;
+        totalTablets = Sets.newHashSet();
+        finishedTablets = Sets.newHashSet();
+        quorumTablets = Sets.newHashSet();
+        tabletDeleteInfoMap = Maps.newConcurrentMap();
+        pushTasks = Sets.newHashSet();
+        state = DeleteState.UN_QUORUM;
+    }
+
+    public void checkQuorum() throws DdlException {
+        long dbId = deleteInfo.getDbId();
+        long tableId = deleteInfo.getTableId();
+        long partitionId = deleteInfo.getPartitionId();
+        Database db = Catalog.getInstance().getDb(dbId);
+        if (db == null) {
+            LOG.warn("can not find database "+ dbId +" when commit delete");
+            return;
+        }
+
+        short replicaNum = -1;
+        db.readLock();
+        try {
+            OlapTable table = (OlapTable) db.getTable(tableId);
+            if (table == null) {
+                LOG.warn("can not find table "+ tableId +" when commit delete");
+                return;
+            }
+
+            replicaNum = table.getPartitionInfo().getReplicationNum(partitionId);
+        } finally {
+            db.readUnlock();
+        }
+
+        short quorumNum = (short) (replicaNum / 2 + 1);
+        for (TabletDeleteInfo tDeleteInfo : getTabletDeleteInfo()) {
+            if (tDeleteInfo.getFinishedReplicas().size() == replicaNum) {
+                finishedTablets.add(tDeleteInfo.getTabletId());
+            }
+            if (tDeleteInfo.getFinishedReplicas().size() >= quorumNum) {
+                quorumTablets.add(tDeleteInfo.getTabletId());
+            }
+        }
+        LOG.info("check delete job quorum, transaction id: {}, total tablets: {}, quorum tablets: {},",
+                signature, totalTablets.size(), quorumTablets.size());
+
+        if (finishedTablets.containsAll(totalTablets)) {
+            setState(DeleteState.FINISHED);
+        } else if (quorumTablets.containsAll(totalTablets)) {
+            setState(DeleteState.QUORUM_FINISHED);
+        }
+    }
+
+    public void setState(DeleteState state) {
+        this.state = state;
+    }
+
+    public DeleteState getState() {
+        return this.state;
+    }
+
+    public boolean addTablet(long tabletId) {
+        return totalTablets.add(tabletId);
+    }
+
+    public boolean addPushTask(PushTask pushTask) {
+        return pushTasks.add(pushTask);
+    }
+
+    public boolean addFinishedReplica(long tabletId, Replica replica) {
+        TabletDeleteInfo tDeleteInfo = tabletDeleteInfoMap.get(tabletId);
+        if (tDeleteInfo == null) {
+            tDeleteInfo = new TabletDeleteInfo(tabletId);
+            tabletDeleteInfoMap.put(tabletId, tDeleteInfo);
+        }
+        return tDeleteInfo.addFinishedReplica(replica);
+    }
+
+    public DeleteInfo getDeleteInfo() {
+        return deleteInfo;
+    }
+
+    public Set<PushTask> getPushTasks() {
+        return pushTasks;
+    }
+
+    @Override
+    public long getId() {
+        return this.signature;
+    }
+
+    @Override
+    public void afterVisible(TransactionState txnState, boolean txnOperated) {
+        Catalog catalog = Catalog.getInstance();
+        catalog.getEditLog().logFinishSyncDelete(deleteInfo);
+        catalog.getDeleteHandler().recordFinishedJob(this);
+    }
+
+    public long getTransactionId() {
+        return this.signature;
+    }
+
+    public Collection<TabletDeleteInfo> getTabletDeleteInfo() {
+        return tabletDeleteInfoMap.values();
+    }
+
+    public long getTimeout() {
 
 Review comment:
   ```suggestion
       public long getTimeoutMs() {
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


[GitHub] [incubator-doris] morningman commented on a change in pull request #3191: DeleteV2

Posted by GitBox <gi...@apache.org>.
morningman commented on a change in pull request #3191: DeleteV2
URL: https://github.com/apache/incubator-doris/pull/3191#discussion_r407067181
 
 

 ##########
 File path: fe/src/main/java/org/apache/doris/load/DeleteHandler.java
 ##########
 @@ -0,0 +1,549 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.load;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.doris.analysis.BinaryPredicate;
+import org.apache.doris.analysis.DeleteStmt;
+import org.apache.doris.analysis.IsNullPredicate;
+import org.apache.doris.analysis.LiteralExpr;
+import org.apache.doris.analysis.Predicate;
+import org.apache.doris.analysis.SlotRef;
+import org.apache.doris.catalog.Catalog;
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.KeysType;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Table;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.catalog.TabletInvertedIndex;
+import org.apache.doris.catalog.Type;
+import org.apache.doris.common.AnalysisException;
+import org.apache.doris.common.Config;
+import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.ErrorReport;
+import org.apache.doris.common.MarkedCountDownLatch;
+import org.apache.doris.common.UserException;
+import org.apache.doris.common.io.Writable;
+import org.apache.doris.common.util.ListComparator;
+import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.service.FrontendOptions;
+import org.apache.doris.task.AgentBatchTask;
+import org.apache.doris.task.AgentTaskExecutor;
+import org.apache.doris.task.AgentTaskQueue;
+import org.apache.doris.task.DeleteJob;
+import org.apache.doris.task.DeleteJob.DeleteState;
+import org.apache.doris.task.PushTask;
+import org.apache.doris.thrift.TPriority;
+import org.apache.doris.thrift.TPushType;
+import org.apache.doris.thrift.TTaskType;
+import org.apache.doris.transaction.GlobalTransactionMgr;
+import org.apache.doris.transaction.TabletCommitInfo;
+import org.apache.doris.transaction.TransactionState;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+public class DeleteHandler implements Writable {
+    private static final Logger LOG = LogManager.getLogger(DeleteHandler.class);
+
+    // TransactionId -> DeleteJob
+    private Map<Long, DeleteJob> idToDeleteJob;
+
+    // Db -> DeleteInfo list
+    private Map<Long, List<DeleteInfo>> dbToDeleteInfos;
+
+    public DeleteHandler() {
+        idToDeleteJob = Maps.newConcurrentMap();
+        dbToDeleteInfos = Maps.newConcurrentMap();
+    }
+
+    public void process(DeleteStmt stmt) throws DdlException {
+        String dbName = stmt.getDbName();
+        String tableName = stmt.getTableName();
+        String partitionName = stmt.getPartitionName();
+        List<Predicate> conditions = stmt.getDeleteConditions();
+        Database db = Catalog.getInstance().getDb(dbName);
+        if (db == null) {
+            throw new DdlException("Db does not exist. name: " + dbName);
+        }
+
+        DeleteJob deleteJob = null;
+        DeleteInfo deleteInfo = null;
+        long transactionId;
+        MarkedCountDownLatch<Long, Long> countDownLatch;
+        db.readLock();
+        try {
+            Table table = db.getTable(tableName);
+            if (table == null) {
+                throw new DdlException("Table does not exist. name: " + tableName);
+            }
+
+            if (table.getType() != Table.TableType.OLAP) {
+                throw new DdlException("Not olap type table. type: " + table.getType().name());
+            }
+            OlapTable olapTable = (OlapTable) table;
+
+            if (olapTable.getState() != OlapTable.OlapTableState.NORMAL) {
+                throw new DdlException("Table's state is not normal: " + tableName);
+            }
+
+            if (partitionName == null) {
+                if (olapTable.getPartitionInfo().getType() == PartitionType.RANGE) {
+                    throw new DdlException("This is a range partitioned table."
+                            + " You should specify partition in delete stmt");
+                } else {
+                    // this is a unpartitioned table, use table name as partition name
+                    partitionName = olapTable.getName();
+                }
+            }
+
+            Partition partition = olapTable.getPartition(partitionName);
+            if (partition == null) {
+                throw new DdlException("Partition does not exist. name: " + partitionName);
+            }
+
+            List<String> deleteConditions = Lists.newArrayList();
+
+            // pre check
+            checkDeleteV2(olapTable, partition, conditions, deleteConditions, true);
+
+            // generate label
+            String label = "delete_" + UUID.randomUUID();
+
+            // begin txn here and generate txn id
+            transactionId = Catalog.getCurrentGlobalTransactionMgr().beginTransaction(db.getId(),
+                    Lists.newArrayList(table.getId()), label,"FE: " + FrontendOptions.getLocalHostAddress(),
+                    TransactionState.LoadJobSourceType.FRONTEND, Config.stream_load_default_timeout_second);
+
+            deleteInfo = new DeleteInfo(db.getId(), olapTable.getId(), tableName,
+                    partition.getId(), partitionName,
+                    -1, 0, deleteConditions);
+            deleteJob = new DeleteJob(transactionId, deleteInfo);
+            idToDeleteJob.put(deleteJob.getTransactionId(), deleteJob);
+            Catalog.getCurrentGlobalTransactionMgr().getCallbackFactory().addCallback(deleteJob);
+            // task sent to be
+            AgentBatchTask batchTask = new AgentBatchTask();
+            // count total replica num
+            int totalReplicaNum = 0;
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                for (Tablet tablet : index.getTablets()) {
+                    totalReplicaNum += tablet.getReplicas().size();
+                }
+            }
+            countDownLatch = new MarkedCountDownLatch<Long, Long>(totalReplicaNum);
+
+            for (MaterializedIndex index : partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                long indexId = index.getId();
+                int schemaHash = olapTable.getSchemaHashByIndexId(indexId);
+
+                for (Tablet tablet : index.getTablets()) {
+                    long tabletId = tablet.getId();
+
+                    // set push type
+                    TPushType type = TPushType.DELETE;
+
+                    for (Replica replica : tablet.getReplicas()) {
+                        long replicaId = replica.getId();
+                        long backendId = replica.getBackendId();
+                        countDownLatch.addMark(backendId, tabletId);
+
+                        // create push task for each replica
+                        PushTask pushTask = new PushTask(null,
+                                replica.getBackendId(), db.getId(), olapTable.getId(),
+                                partition.getId(), indexId,
+                                tabletId, replicaId, schemaHash,
+                                -1, 0, "", -1, 0,
+                                -1, type, conditions,
+                                true, TPriority.NORMAL,
+                                TTaskType.REALTIME_PUSH,
+                                transactionId,
+                                Catalog.getCurrentGlobalTransactionMgr().getTransactionIDGenerator().getNextTransactionId());
+                        pushTask.setIsSchemaChanging(true);
+                        pushTask.setCountDownLatch(countDownLatch);
+
+                        if (AgentTaskQueue.addTask(pushTask)) {
+                            batchTask.addTask(pushTask);
+                            deleteJob.addPushTask(pushTask);
+                            deleteJob.addTablet(tabletId);
+                        }
+                    }
+                }
+            }
+
+            // submit push tasks
+            if (batchTask.getTaskNum() > 0) {
+                AgentTaskExecutor.submit(batchTask);
+            }
+
+        } catch (Throwable t) {
+            LOG.warn("error occurred during delete process", t);
+            throw new DdlException(t.getMessage(), t);
+        } finally {
+            db.readUnlock();
+        }
+
+        long timeoutMs = deleteJob.getTimeout();
+        LOG.info("waiting delete Job finish, signature: {}, timeout: {}", transactionId, timeoutMs);
+        boolean ok = false;
+        try {
+            ok = countDownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
+        } catch (InterruptedException e) {
+            LOG.warn("InterruptedException: ", e);
+            ok = false;
+        }
+
+        if (ok) {
+            commitJob(deleteJob, db, timeoutMs);
+        } else {
+            deleteJob.checkQuorum();
+            if (deleteJob.getState() != DeleteState.UN_QUORUM) {
+                long nowQuorumTimeMs = System.currentTimeMillis();
+                long endQuorumTimeoutMs = nowQuorumTimeMs + timeoutMs / 2;
+                // if job's state is finished or stay in quorum_finished for long time, try to commit it.
+                try {
+                    while (deleteJob.getState() == DeleteState.QUORUM_FINISHED && endQuorumTimeoutMs > nowQuorumTimeMs) {
+                        deleteJob.checkQuorum();
+                        Thread.sleep(1000);
+                        nowQuorumTimeMs = System.currentTimeMillis();
+                    }
 
 Review comment:
   After the `while` loop, the job may still failed.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org