Posted to commits@seatunnel.apache.org by GitBox <gi...@apache.org> on 2022/05/21 03:28:37 UTC

[GitHub] [incubator-seatunnel] BenJFan commented on a diff in pull request #1936: [Feature] [transform] Use "skip_error_lines" transform sql key to control whether exit the job. Spark new feature

BenJFan commented on code in PR #1936:
URL: https://github.com/apache/incubator-seatunnel/pull/1936#discussion_r878636539


##########
seatunnel-transforms/seatunnel-transforms-spark/seatunnel-transform-spark-sql/src/main/scala/org/apache/seatunnel/spark/transform/Sql.scala:
##########
@@ -25,7 +25,18 @@ import org.apache.spark.sql.{Dataset, Row}
 class Sql extends BaseSparkTransform {
 
   override def process(data: Dataset[Row], env: SparkEnvironment): Dataset[Row] = {
-    env.getSparkSession.sql(config.getString("sql"))
+    try{
+      env.getSparkSession.sql(config.getString("sql"))
+    }catch {
+      case e:Exception =>

Review Comment:
   Does this approach actually work? In my view, this code will not catch exceptions thrown during the data transform; it will only catch errors in the SQL itself, which are raised before the job executes. Please let me know if I'm wrong. Also, returning `null` here will cause the sink to hit an NPE.
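   The lazy-evaluation point above can be sketched without Spark. This is a minimal plain-Scala analogy (the object and method names are hypothetical, and `Seq(...).view` stands in for the lazily-built `Dataset`): wrapping the *construction* of a lazy pipeline in try/catch does not catch per-element failures, because those only surface when the pipeline is forced, analogous to a Spark action.
   
   ```scala
   object LazyCatchDemo {
     // Build the lazy pipeline inside try/catch, like wrapping spark.sql(...).
     // No element is evaluated yet, so the division by zero is NOT caught here.
     def caughtAtBuild: Boolean =
       try { Seq(1, 0, 2).view.map(n => 10 / n); false }
       catch { case _: ArithmeticException => true }
   
     // Force the pipeline (analogous to a Spark action such as collect()).
     // Only now does 10 / 0 run, so the exception surfaces here.
     def caughtAtAction: Boolean =
       try { Seq(1, 0, 2).view.map(n => 10 / n).toList; false }
       catch { case _: ArithmeticException => true }
   }
   ```
   
   Under this analogy, `caughtAtBuild` is `false` and `caughtAtAction` is `true`, which is why a try/catch around `env.getSparkSession.sql(...)` alone cannot implement `skip_error_lines` for transform-time failures.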



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@seatunnel.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org