Posted to commits@seatunnel.apache.org by GitBox <gi...@apache.org> on 2022/06/06 09:33:10 UTC

[GitHub] [incubator-seatunnel] BenJFan commented on a diff in pull request #1978: [Feature][Transform] data quality for null data rate

BenJFan commented on code in PR #1978:
URL: https://github.com/apache/incubator-seatunnel/pull/1978#discussion_r889835436


##########
docs/en/transform/nullRate.md:
##########
@@ -0,0 +1,67 @@
+# EncryptedPhone

Review Comment:
   The header is wrong; `EncryptedPhone` looks copied from another transform's doc.



##########
docs/en/transform/nullRate.md:
##########
@@ -0,0 +1,67 @@
+# EncryptedPhone
+
+## Description
+
+When the data volume is large, null values can significantly skew the final result, so detecting them early is particularly important. This transform exists for that purpose.
+
+:::tip
+
+This transform is **ONLY** supported by Spark.
+
+:::
+
+## Options
+
+| name                     | type   | required | default value |
+| -------------------------| ------ | -------- | ------------- |
+| fields                   | string | yes      | -             |
+| rates                    | double | yes      | -             |
+| throwException_enable    | boolean| no       | -             |
+| result_table_name        | string | no       | -             |
+
+
+
+### fields [string]
+
+The fields you want to monitor.
+
+### rates [double]

Review Comment:
   Should be `double_list`
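For reference, assuming the option names from the table above (with `fields` as a string list and `rates` as a double list, as suggested in these comments), a minimal config sketch might look like the following; the field names and thresholds are purely illustrative:

```
transform {
  NullRate {
    fields = ["name", "age"]
    rates = [10.0, 20.0]
    throwException_enable = true
  }
}
```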



##########
seatunnel-transforms/seatunnel-transforms-spark/seatunnel-transform-spark-null-rate/src/main/scala/org/apache/seatunnel/spark/transform/NullRate.scala:
##########
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.spark.transform
+
+import org.apache.seatunnel.apis.base.plugin.Plugin
+import org.apache.seatunnel.common.config.CheckConfigUtil._
+import org.apache.seatunnel.common.config.CheckResult
+import org.apache.seatunnel.spark.{BaseSparkTransform, SparkEnvironment}
+import org.apache.spark.sql.catalyst.encoders.RowEncoder
+import org.apache.spark.sql.types.{DataTypes, StructType}
+import org.apache.spark.sql.{Dataset, Row}
+
+import scala.collection.JavaConversions._
+
+class NullRate extends BaseSparkTransform {
+
+  override def process(df: Dataset[Row], env: SparkEnvironment): Dataset[Row] = {
+
+    val allCount = env.getSparkSession.sparkContext.longAccumulator("allCount")
+    val fieldsAndRates = config.getStringList(NullRateConfig.FIELDS).zip(config.getDoubleList(NullRateConfig.RATES)).filter(fl => df.schema.names.contains(fl._1)).toMap
+    val fieldsAndRatesAccumulator = fieldsAndRates.map(fl => {
+      fl._1 -> env.getSparkSession.sparkContext.longAccumulator(fl._1)
+    })
+
+    df.foreachPartition(iter => {
+      while (iter.hasNext) {
+        allCount.add(1L)
+        val row = iter.next()
+        fieldsAndRates.map(fl => fl._1).foreach(field => {
+          val accumulator = fieldsAndRatesAccumulator.get(field).get
+          if (row.get(row.fieldIndex(field)) == null) {
+            accumulator.add(1L)
+          } else {
+            accumulator.add(0L)
+          }
+        })
+      }
+    })
+
+    val allCountValue = allCount.value * 1.00d
+    val nullRateValue = fieldsAndRatesAccumulator.map(fl => {
+      (fl._1, fieldsAndRates.getOrDefault(fl._1, 100.00d), fl._2.value, (fl._2.value / allCountValue) * 100d)
+    })
+
+    if (config.hasPath(NullRateConfig.IS_THROWEXCEPTION) && config.getBoolean(NullRateConfig.IS_THROWEXCEPTION)) {
+      nullRateValue.foreach(fv => {
+        if (fv._4 > fv._2) {
+          throw new RuntimeException(s"the field(${fv._1}) null rate(${fv._4}) is larger than the setting(${fv._2})")
+        }
+      })
+    }
+
+    if (config.hasPath(Plugin.RESULT_TABLE_NAME)) {
+      val nullRateRows = nullRateValue.map(fv => {
+        Row(fv._1, fv._2, fv._3, fv._4)
+      }).toSeq
+
+      val schema = new StructType()
+        .add("field_name", DataTypes.StringType)
+        .add("setting_rate", DataTypes.LongType)
+        .add("null_rate", DataTypes.LongType)
+        .add("rate_percent", DataTypes.LongType)
+      env.getSparkSession.createDataset(nullRateRows)(RowEncoder(schema)).createOrReplaceTempView(config.getString(Plugin.RESULT_TABLE_NAME))

Review Comment:
   `RESULT_TABLE_NAME` shouldn't be used here. SeaTunnel automatically creates a temp view named `RESULT_TABLE_NAME` for the Dataset returned by this method, so other plugins can use it directly.
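Stripped of Spark accumulators and config plumbing, the per-field null-rate logic this transform implements can be sketched in plain Scala; the row representation and helper names here are illustrative, not part of the PR:

```scala
object NullRateSketch {
  // rows: each row maps field name -> optional value (None models a null cell)
  // Returns field name -> null percentage over all rows.
  def nullRates(rows: Seq[Map[String, Option[Any]]], fields: Seq[String]): Map[String, Double] = {
    val total = rows.size.toDouble
    fields.map { f =>
      val nulls = rows.count(r => r.get(f).flatten.isEmpty)
      f -> (nulls / total) * 100d
    }.toMap
  }

  // thresholds: field name -> maximum allowed null percentage; unknown fields
  // default to 100 (never fail), mirroring the getOrDefault(…, 100.00d) above.
  def check(rates: Map[String, Double], thresholds: Map[String, Double]): Unit =
    rates.foreach { case (f, r) =>
      val limit = thresholds.getOrElse(f, 100d)
      if (r > limit) {
        throw new RuntimeException(s"the field($f) null rate($r) is larger than the setting($limit)")
      }
    }
}
```

In the real transform the counting happens distributedly via `LongAccumulator` inside `foreachPartition`, but the rate and threshold arithmetic is the same.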



##########
docs/en/transform/nullRate.md:
##########
@@ -0,0 +1,67 @@
+# EncryptedPhone
+
+## Description
+
+When the data volume is large, null values can significantly skew the final result, so detecting them early is particularly important. This transform exists for that purpose.
+
+:::tip
+
+This transform is **ONLY** supported by Spark.
+
+:::
+
+## Options
+
+| name                     | type   | required | default value |
+| -------------------------| ------ | -------- | ------------- |
+| fields                   | string | yes      | -             |
+| rates                    | double | yes      | -             |
+| throwException_enable    | boolean| no       | -             |
+| result_table_name        | string | no       | -             |
+
+
+
+### fields [string]

Review Comment:
   Should be `string_list`



##########
seatunnel-transforms/seatunnel-transforms-spark/seatunnel-transform-spark-null-rate/src/main/scala/org/apache/seatunnel/spark/transform/NullRateConfig.scala:
##########
@@ -0,0 +1,26 @@
+
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.spark.transform
+
+object NullRateConfig {
+  val FIELDS = "fields"
+  val RATES = "rates"
+  val IS_THROWEXCEPTION = "throwException_enable"

Review Comment:
   `throwException_enable` should be `throw_exception_enable`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@seatunnel.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org