Posted to reviews@spark.apache.org by cloud-fan <gi...@git.apache.org> on 2018/09/10 13:53:04 UTC
[GitHub] spark pull request #21308: [SPARK-24253][SQL] Add DeleteSupport mix-in for D...
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21308#discussion_r216329544
--- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/DeleteSupport.java ---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.sources.v2;
+
+import org.apache.spark.sql.sources.Filter;
+
+/**
+ * A mix-in interface for {@link DataSourceV2} delete support. Data sources can implement this
+ * interface to provide the ability to delete data that matches filter expressions from a table.
+ * <p>
+ * Data sources must implement this interface to support logical operations that combine writing
+ * data with deleting data, like overwriting partitions.
+ */
+public interface DeleteSupport extends DataSourceV2 {
+ /**
+ * Delete data from a data source table that matches filter expressions.
+ * <p>
+ * Rows are deleted from the data source iff all of the filter expressions match. That is, the
+ * expressions must be interpreted as a set of filters that are ANDed together.
+ * <p>
+ * Implementations may reject a delete operation if the delete isn't possible without significant
+ * effort. For example, partitioned data sources may reject deletes that do not filter by
+ * partition columns because the filter may require rewriting files without deleted records.
+ * To reject a delete, implementations should throw {@link IllegalArgumentException} with a clear
+ * error message that identifies which expression was rejected.
+ *
+ * @param filters filter expressions, used to select rows to delete when all expressions match
+ * @throws IllegalArgumentException If the delete is rejected due to required effort
+ */
+ void deleteWhere(Filter[] filters);
--- End diff --
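For illustration only (not part of the PR): a minimal sketch of how a source might satisfy the contract documented above, treating the filters as ANDed together and rejecting a delete it cannot handle cheaply by throwing IllegalArgumentException. The PartitionedDeleteSource class, its partition columns, and the dropMatchingPartitions helper are hypothetical names.
```
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.spark.sql.sources.Filter;
import org.apache.spark.sql.sources.v2.DeleteSupport;

public class PartitionedDeleteSource implements DeleteSupport {
  // Hypothetical partition columns for this example table.
  private final Set<String> partitionColumns =
      new HashSet<>(Arrays.asList("date", "region"));

  @Override
  public void deleteWhere(Filter[] filters) {
    for (Filter filter : filters) {
      // Filter#references lists the column names a filter touches. Reject the
      // whole delete if any filter needs a non-partition column, because that
      // would mean rewriting data files instead of dropping partitions.
      boolean partitionOnly = Arrays.stream(filter.references())
          .allMatch(partitionColumns::contains);
      if (!partitionOnly) {
        throw new IllegalArgumentException(
            "Cannot delete by non-partition filter: " + filter);
      }
    }
    dropMatchingPartitions(filters);
  }

  // Placeholder for the source-specific work: evaluate the ANDed filters
  // against partition values and remove the matching partitions.
  private void dropMatchingPartitions(Filter[] filters) {
  }
}
```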
This seems different from what we discussed in the dev list about the new abstraction. I expect to see
```
Write newDeleteWrite(Filter[] filters);
```
Am I missing something?
---