Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/17 06:08:46 UTC

[GitHub] [spark] Yaohua628 opened a new pull request, #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Yaohua628 opened a new pull request, #38683:
URL: https://github.com/apache/spark/pull/38683

   
   ### What changes were proposed in this pull request?
   In `FileSourceStrategy`, we add an `Alias` node to wrap the file metadata fields (e.g. `file_name`, `file_size`) in a `NamedStruct` ([here](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala#L279)). But `CreateNamedStruct` overrides `nullable` to `false` ([here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala#L443)), which differs from the `nullable` value of `true` on the `_metadata` struct attribute ([here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/namedExpressions.scala#L467)).
   
   This PR fixes this by passing `nullable` to `CreateNamedStruct`.
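   
   Not part of the PR diff - just a minimal sketch against the Catalyst expression API (run in a `spark-shell`, with a simplified `_metadata` schema) to make the mismatch concrete:
   
   ```scala
   import org.apache.spark.sql.catalyst.expressions.{AttributeReference, CreateNamedStruct, Literal}
   import org.apache.spark.sql.types.{StringType, StructField, StructType}
   
   // The analyzed _metadata attribute is declared nullable = true ...
   val metadataAttr = AttributeReference(
     "_metadata", StructType(Seq(StructField("file_name", StringType))), nullable = true)()
   
   // ... but the struct expression that replaces it during planning is not.
   val metadataStruct = CreateNamedStruct(Seq(Literal("file_name"), Literal("f.json")))
   
   assert(metadataAttr.nullable)    // true
   assert(!metadataStruct.nullable) // CreateNamedStruct.nullable is hard-coded to false
   ```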
   
   
   ### Why are the changes needed?
   For stateful streaming, we store the schema in the state store and [check consistency across batches](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateSchemaCompatibilityChecker.scala#L47). To avoid state schema compatibility mismatches, we should keep the `nullable` value of `_metadata` consistent.
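   
   To illustrate (hypothetical single-field schemas, plain `StructType` equality rather than the checker's exact logic): a nullability flip alone is enough to make two otherwise identical schemas unequal.
   
   ```scala
   import org.apache.spark.sql.types.{LongType, StructField, StructType}
   
   val schemaBatchN  = StructType(Seq(StructField("file_size", LongType, nullable = true)))
   val schemaBatchN1 = StructType(Seq(StructField("file_size", LongType, nullable = false)))
   
   // Case-class equality includes the nullable flag, so these schemas differ.
   assert(schemaBatchN != schemaBatchN1)
   ```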
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   New UT
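   
   A sketch of the shape such a test could take (the test name matches the diff under review below; the exact assertions are assumed, not copied from the PR):
   
   ```scala
   metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
     "between analyzed and executed", schema) { (df, _, _) =>
     val metadataDf = df.select("_metadata")
     val analyzedNullable = metadataDf.queryExecution.analyzed.schema.head.nullable
     val executedNullable = metadataDf.queryExecution.executedPlan.schema.head.nullable
     assert(analyzedNullable == executedNullable)
   }
   ```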
   




[GitHub] [spark] HeartSaVioR commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027504882


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   Looks like the indentation is consistent, at least in this test suite. Please check the other test cases using `metadataColumnsTest` - they all use 2 spaces. The second line is not a parameter but a continuation of the test name string.





[GitHub] [spark] HeartSaVioR commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319473783

   Maybe it's simpler to apply `KnownNullable` / `KnownNotNull` against `CreateStruct` to enforce the desired nullability? Please refer to the change in #35543.
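   
   Roughly, the suggestion amounts to something like the following in `FileSourceStrategy` (a sketch, not the merged change; `metadataStruct` and `structColumns` stand in for values built by the surrounding code, and `KnownNullable` is the counterpart expression this line of work introduces):
   
   ```scala
   import org.apache.spark.sql.catalyst.expressions.{Alias, CreateStruct, KnownNotNull, KnownNullable}
   
   // `metadataStruct` (AttributeReference) and `structColumns` (Seq[Expression]) are
   // stand-ins for values built earlier in the strategy. Enforce the attribute's
   // nullability on the struct expression instead of inheriting
   // CreateNamedStruct's hard-coded nullable = false.
   val structExpr = CreateStruct(structColumns)
   val constrained =
     if (metadataStruct.nullable) KnownNullable(structExpr) else KnownNotNull(structExpr)
   val metadataAlias = Alias(constrained, metadataStruct.name)(exprId = metadataStruct.exprId)
   ```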




[GitHub] [spark] Yaohua628 commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319636434

   > If it has been persisted before (like a table), then it's totally fine to write non-nullable data to a nullable column. The optimizer may also optimize a column from nullable to non-nullable, so this will happen from time to time.
   
   Got it, that makes sense! Updated




[GitHub] [spark] Yaohua628 commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1028407378


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##########
@@ -275,8 +275,13 @@ object FileSourceStrategy extends Strategy with PredicateHelper with Logging {
                 .get.withName(FileFormat.ROW_INDEX)
           }
         }
+        // SPARK-41151: metadata column is not nullable for file sources,
+        // [[CreateNamedStruct]] is also not nullable.

Review Comment:
   Make sense, thanks!





[GitHub] [spark] HeartSaVioR commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027509435


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -600,7 +600,7 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       val df2 = spark.read.format("json")
         .load(dir.getCanonicalPath + "/target/new-streaming-data-join")
       // Verify self-join results
-      assert(streamQuery2.lastProgress.numInputRows == 4L)
+      assert(streamQuery2.lastProgress.numInputRows == 2L)

Review Comment:
   Off-topic: this is very interesting. Looks like fixing this "enables" `ReusedExchange`, which somehow makes `ProgressReporter` pick up the metric from a single leaf node instead of two.
   
   > Before the fix
   
   ```
   == Parsed Logical Plan ==
   WriteToMicroBatchDataSourceV1 FileSink[/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/new-streaming-data-join], 77baa2ac-cc0b-4e01-94ff-ec20c98eb29b, [checkpointLocation=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/checkpoint_join, path=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/new-streaming-data-join], Append, 0
   +- Project [name#2339, age#2340, info#2341, _metadata#2345]
      +- Join Inner, ((((name#2339 = name#2504) AND (age#2340 = age#2505)) AND (info#2341 = info#2506)) AND (_metadata#2345 = _metadata#2507))
         :- Project [name#2339, age#2340, info#2341, _metadata#2345]
         :  +- Project [_metadata#2345, name#2339, age#2340, info#2341, _metadata#2345]
         :     +- Project [name#2517 AS name#2339, age#2518 AS age#2340, info#2519 AS info#2341, _metadata#2529 AS _metadata#2345]
         :        +- Relation [name#2517,age#2518,info#2519,_metadata#2529] json
         +- Project [name#2504, age#2505, info#2506, _metadata#2507]
            +- Project [_metadata#2507, name#2504, age#2505, info#2506, _metadata#2507]
               +- Project [name#2523 AS name#2504, age#2524 AS age#2505, info#2525 AS info#2506, _metadata#2530 AS _metadata#2507]
                  +- Relation [name#2523,age#2524,info#2525,_metadata#2530] json
   
   == Analyzed Logical Plan ==
   name: string, age: int, info: struct<id:bigint,university:string>, _metadata: struct<file_path:string,file_name:string,file_size:bigint,file_modification_time:timestamp>
   WriteToMicroBatchDataSourceV1 FileSink[/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/new-streaming-data-join], 77baa2ac-cc0b-4e01-94ff-ec20c98eb29b, [checkpointLocation=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/checkpoint_join, path=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/new-streaming-data-join], Append, 0
   +- Project [name#2339, age#2340, info#2341, _metadata#2345]
      +- Join Inner, ((((name#2339 = name#2504) AND (age#2340 = age#2505)) AND (info#2341 = info#2506)) AND (_metadata#2345 = _metadata#2507))
         :- Project [name#2339, age#2340, info#2341, _metadata#2345]
         :  +- Project [_metadata#2345, name#2339, age#2340, info#2341, _metadata#2345]
         :     +- Project [name#2517 AS name#2339, age#2518 AS age#2340, info#2519 AS info#2341, _metadata#2529 AS _metadata#2345]
         :        +- Relation [name#2517,age#2518,info#2519,_metadata#2529] json
         +- Project [name#2504, age#2505, info#2506, _metadata#2507]
            +- Project [_metadata#2507, name#2504, age#2505, info#2506, _metadata#2507]
               +- Project [name#2523 AS name#2504, age#2524 AS age#2505, info#2525 AS info#2506, _metadata#2530 AS _metadata#2507]
                  +- Relation [name#2523,age#2524,info#2525,_metadata#2530] json
   
   == Optimized Logical Plan ==
   Project [name#2517, age#2518, info#2519, _metadata#2529]
   +- Join Inner, ((((name#2517 = name#2523) AND (age#2518 = age#2524)) AND (info#2519 = info#2525)) AND (_metadata#2529 = _metadata#2530))
      :- Filter (((isnotnull(name#2517) AND isnotnull(age#2518)) AND isnotnull(info#2519)) AND isnotnull(_metadata#2529))
      :  +- Relation [name#2517,age#2518,info#2519,_metadata#2529] json
      +- Filter (((isnotnull(name#2523) AND isnotnull(age#2524)) AND isnotnull(info#2525)) AND isnotnull(_metadata#2530))
         +- Relation [name#2523,age#2524,info#2525,_metadata#2530] json
   
   == Physical Plan ==
   *(3) Project [name#2517, age#2518, info#2519, _metadata#2529]
   +- StreamingSymmetricHashJoin [name#2517, age#2518, info#2519, _metadata#2529], [name#2523, age#2524, info#2525, _metadata#2530], Inner, condition = [ leftOnly = null, rightOnly = null, both = null, full = null ], state info [ checkpoint = file:/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b56e426-a39c-4668-a51f-19bce04c2dd8/target/checkpoint_join/state, runId = b3233731-bee2-478f-9774-3322b2f88110, opId = 0, ver = 0, numPartitions = 5], 0, 0, state cleanup [ left = null, right = null ], 2
      :- Exchange hashpartitioning(name#2517, age#2518, info#2519, _metadata#2529, 5), ENSURE_REQUIREMENTS, [plan_id=2637]
      :  +- *(1) Filter (((isnotnull(name#2517) AND isnotnull(age#2518)) AND isnotnull(info#2519)) AND isnotnull(_metadata#2529))
      :     +- *(1) Project [name#2517, age#2518, info#2519, named_struct(file_path, file_path#2533, file_name, file_name#2534, file_size, file_size#2535L, file_modification_time, file_modification_time#2536) AS _metadata#2529]
      :        +- FileScan json [name#2517,age#2518,info#2519,file_path#2533,file_name#2534,file_size#2535L,file_modification_time#2536] Batched: false, DataFilters: [isnotnull(name#2517), isnotnull(age#2518), isnotnull(info#2519), isnotnull(_metadata#2529)], Format: JSON, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b..., PartitionFilters: [], PushedFilters: [IsNotNull(name), IsNotNull(age), IsNotNull(info)], ReadSchema: struct<name:string,age:int,info:struct<id:bigint,university:string>>
      +- Exchange hashpartitioning(name#2523, age#2524, info#2525, _metadata#2530, 5), ENSURE_REQUIREMENTS, [plan_id=2642]
         +- *(2) Filter (((isnotnull(name#2523) AND isnotnull(age#2524)) AND isnotnull(info#2525)) AND isnotnull(_metadata#2530))
            +- *(2) Project [name#2523, age#2524, info#2525, named_struct(file_path, file_path#2537, file_name, file_name#2538, file_size, file_size#2539L, file_modification_time, file_modification_time#2540) AS _metadata#2530]
               +- FileScan json [name#2523,age#2524,info#2525,file_path#2537,file_name#2538,file_size#2539L,file_modification_time#2540] Batched: false, DataFilters: [isnotnull(name#2523), isnotnull(age#2524), isnotnull(info#2525), isnotnull(_metadata#2530)], Format: JSON, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-3b..., PartitionFilters: [], PushedFilters: [IsNotNull(name), IsNotNull(age), IsNotNull(info)], ReadSchema: struct<name:string,age:int,info:struct<id:bigint,university:string>>
   
   ```
   
   > After the fix
   
   ```
   == Parsed Logical Plan ==
   WriteToMicroBatchDataSourceV1 FileSink[/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/new-streaming-data-join], d8c57232-267e-436b-ad82-4cf8b7f4849b, [checkpointLocation=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/checkpoint_join, path=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/new-streaming-data-join], Append, 0
   +- Project [name#2339, age#2340, info#2341, _metadata#2345]
      +- Join Inner, ((((name#2339 = name#2504) AND (age#2340 = age#2505)) AND (info#2341 = info#2506)) AND (_metadata#2345 = _metadata#2507))
         :- Project [name#2339, age#2340, info#2341, _metadata#2345]
         :  +- Project [_metadata#2345, name#2339, age#2340, info#2341, _metadata#2345]
         :     +- Project [name#2523 AS name#2339, age#2524 AS age#2340, info#2525 AS info#2341, _metadata#2529 AS _metadata#2345]
         :        +- Relation [name#2523,age#2524,info#2525,_metadata#2529] json
         +- Project [name#2504, age#2505, info#2506, _metadata#2507]
            +- Project [_metadata#2507, name#2504, age#2505, info#2506, _metadata#2507]
               +- Project [name#2517 AS name#2504, age#2518 AS age#2505, info#2519 AS info#2506, _metadata#2530 AS _metadata#2507]
                  +- Relation [name#2517,age#2518,info#2519,_metadata#2530] json
   
   == Analyzed Logical Plan ==
   name: string, age: int, info: struct<id:bigint,university:string>, _metadata: struct<file_path:string,file_name:string,file_size:bigint,file_modification_time:timestamp>
   WriteToMicroBatchDataSourceV1 FileSink[/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/new-streaming-data-join], d8c57232-267e-436b-ad82-4cf8b7f4849b, [checkpointLocation=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/checkpoint_join, path=/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/new-streaming-data-join], Append, 0
   +- Project [name#2339, age#2340, info#2341, _metadata#2345]
      +- Join Inner, ((((name#2339 = name#2504) AND (age#2340 = age#2505)) AND (info#2341 = info#2506)) AND (_metadata#2345 = _metadata#2507))
         :- Project [name#2339, age#2340, info#2341, _metadata#2345]
         :  +- Project [_metadata#2345, name#2339, age#2340, info#2341, _metadata#2345]
         :     +- Project [name#2523 AS name#2339, age#2524 AS age#2340, info#2525 AS info#2341, _metadata#2529 AS _metadata#2345]
         :        +- Relation [name#2523,age#2524,info#2525,_metadata#2529] json
         +- Project [name#2504, age#2505, info#2506, _metadata#2507]
            +- Project [_metadata#2507, name#2504, age#2505, info#2506, _metadata#2507]
               +- Project [name#2517 AS name#2504, age#2518 AS age#2505, info#2519 AS info#2506, _metadata#2530 AS _metadata#2507]
                  +- Relation [name#2517,age#2518,info#2519,_metadata#2530] json
   
   == Optimized Logical Plan ==
   Project [name#2523, age#2524, info#2525, _metadata#2529]
   +- Join Inner, ((((name#2523 = name#2517) AND (age#2524 = age#2518)) AND (info#2525 = info#2519)) AND (_metadata#2529 = _metadata#2530))
      :- Filter ((isnotnull(name#2523) AND isnotnull(age#2524)) AND isnotnull(info#2525))
      :  +- Relation [name#2523,age#2524,info#2525,_metadata#2529] json
      +- Filter ((isnotnull(name#2517) AND isnotnull(age#2518)) AND isnotnull(info#2519))
         +- Relation [name#2517,age#2518,info#2519,_metadata#2530] json
   
   == Physical Plan ==
   *(3) Project [name#2523, age#2524, info#2525, _metadata#2529]
   +- StreamingSymmetricHashJoin [name#2523, age#2524, info#2525, _metadata#2529], [name#2517, age#2518, info#2519, _metadata#2530], Inner, condition = [ leftOnly = null, rightOnly = null, both = null, full = null ], state info [ checkpoint = file:/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a5a5839-1a1c-4f13-9a1c-2bd8f65b16ce/target/checkpoint_join/state, runId = 649e748e-fc6d-42c0-9acd-babc7809c621, opId = 0, ver = 0, numPartitions = 5], 0, 0, state cleanup [ left = null, right = null ], 2
      :- Exchange hashpartitioning(name#2523, age#2524, info#2525, _metadata#2529, 5), ENSURE_REQUIREMENTS, [plan_id=2637]
      :  +- *(1) Filter ((isnotnull(name#2523) AND isnotnull(age#2524)) AND isnotnull(info#2525))
      :     +- *(1) Project [name#2523, age#2524, info#2525, knownnotnull(named_struct(file_path, file_path#2533, file_name, file_name#2534, file_size, file_size#2535L, file_modification_time, file_modification_time#2536)) AS _metadata#2529]
      :        +- FileScan json [name#2523,age#2524,info#2525,file_path#2533,file_name#2534,file_size#2535L,file_modification_time#2536] Batched: false, DataFilters: [isnotnull(name#2523), isnotnull(age#2524), isnotnull(info#2525)], Format: JSON, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/r0/34w92ww91n3_5htjqqx4_lrh0000gp/T/spark-1a..., PartitionFilters: [], PushedFilters: [IsNotNull(name), IsNotNull(age), IsNotNull(info)], ReadSchema: struct<name:string,age:int,info:struct<id:bigint,university:string>>
      +- ReusedExchange [name#2517, age#2518, info#2519, _metadata#2530], Exchange hashpartitioning(name#2523, age#2524, info#2525, _metadata#2529, 5), ENSURE_REQUIREMENTS, [plan_id=2637]
   ```
   
   This is definitely an "improvement", but it also shows that the way we collect metrics with DSv1 in microbatch can be affected by physical planning as well as by optimization. It has been sort of fragile.
   
   Anyway, even if this happens with DSv2, the number of input rows would have been counted once, so I'd consider this "correct".





[GitHub] [spark] HeartSaVioR commented on pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1322812564

   I see comments are addressed. Nice!




[GitHub] [spark] Yaohua628 commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319593235

   > shall we change `FileSourceMetadataAttribute`?
   
   I initially thought we could relax this field for some future cases. But yeah, you are right - it seems like it is always non-null for file sources.
   
   But do you think it will cause compatibility issues if this `nullable` value has been persisted somewhere?




[GitHub] [spark] Yaohua628 commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1028412866


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   👍 Thanks for all the helpful discussions. Confirmed: 2-space indentation is consistent in this suite.





[GitHub] [spark] HeartSaVioR commented on pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1322855177

   @Yaohua628 Looks like there is a conflict on the 3.3 branch. Could you please submit a new PR against 3.3? Thanks in advance!




[GitHub] [spark] HeartSaVioR commented on pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1322853879

   Thanks! Merging to master/3.3.




[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027504573


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##########
@@ -275,8 +275,13 @@ object FileSourceStrategy extends Strategy with PredicateHelper with Logging {
                 .get.withName(FileFormat.ROW_INDEX)
           }
         }
+        // SPARK-41151: metadata column is not nullable for file sources,
+        // [[CreateNamedStruct]] is also not nullable.

Review Comment:
   One more thing: This is misleading because this is not true in general.
   > [[CreateNamedStruct]] is also not nullable.
   
   To be clear in this context, it would be better to mention `CreateStruct(structColumns)` explicitly instead of saying `[[CreateNamedStruct]] is also not nullable`.
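   
   For example, one possible wording along those lines (illustrative only, not the committed comment):
   
   ```scala
   // SPARK-41151: the metadata struct is not nullable for file sources, and
   // CreateStruct(structColumns) used here produces a non-nullable value as well,
   // so the nullability must be enforced explicitly to stay consistent.
   ```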





[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027500899


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   Indentation? We need two more spaces.





[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027573603


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   That's true. We have been unclear about this.
   
   However, given that the general principle is to distinguish different sections for readability, and we already use `2-space indentation` for **the method body**, I believe what we need is to extend the existing rule by removing `when the parameters don't fit in two lines`.
   
   Mixing part of the test case name with the method body doesn't give us much readability. Worse still, it's not extensible, because it eventually leads us to use two-space indentation in Case 1 and four-space indentation in Case 2.
   **Case 1**
   ```
   metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
     "between analyzed and executed", schema) { (df, _, _) =>
   ```
   **Case 2**
   ```
   metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
       "long long long long long long long long long long long long long long " +
       "between analyzed and executed", schema) { (df, _, _) =>
   ```
   
   







[GitHub] [spark] HeartSaVioR commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027594911


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   I was also incorrect. This is actually very clear:
   
   https://github.com/databricks/scala-style-guide#indent
   
   > For method and class constructor invocations, use 2 space indentation for its parameters and put each in each line when the parameters don't fit in two lines.
   
   Technically, this is a method "invocation", not a method "definition".





[GitHub] [spark] Yaohua628 commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1318133748

   cc: @HeartSaVioR 






[GitHub] [spark] Yaohua628 commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319529815

   > Maybe simpler to apply KnownNullable / KnownNotNull against CreateStruct to enforce desired nullability? Please refer the change in https://github.com/apache/spark/pull/35543.
   
   Wow, good point, thanks!




[GitHub] [spark] HeartSaVioR commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027545869


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   https://github.com/databricks/scala-style-guide#indent
   
   For method declarations, use 4 space indentation for their parameters and put each in each line **when the parameters don't fit in two lines**. Return types can be either on the same line as the last parameter, or start a new line with 2 space indent.
   
   So the rule of 4-space indentation is for method definitions spanning 3+ lines. We don't specifically mention two-line method definitions. While it's not super clear since we don't have a strict guideline, there is a general rule of spacing, `Use 2-space indentation in general.`, which I think could apply to cases that don't fall under the exceptions.
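   
   To illustrate the two rules side by side (hypothetical signatures; `schema` is assumed defined as in the suite):
   
   ```scala
   import java.io.File
   import org.apache.spark.sql.DataFrame
   import org.apache.spark.sql.types.StructType
   
   // Method declaration: 4-space indentation for overflowing parameters.
   def metadataColumnsTest(
       testName: String,
       fileSchema: StructType)(f: (DataFrame, File, File) => Unit): Unit = { /* ... */ }
   
   // Method invocation: 2-space indentation for continuation lines.
   metadataColumnsTest(
     "some long test name",
     schema) { (df, _, _) =>
     // test body, also 2-space indented
   }
   ```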





[GitHub] [spark] AmplabJenkins commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
AmplabJenkins commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1320929881

   Can one of the admins verify this patch?




[GitHub] [spark] cloud-fan commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
cloud-fan commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319609369

   If it has been persisted before (like a table), then it's totally fine to write non-nullable data to a nullable column. The optimizer may also optimize a column from nullable to non-nullable, so this will happen from time to time.
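   
   A quick sketch of that point (assumes an active `spark` session and a scratch database):
   
   ```scala
   // The table column is declared nullable ...
   spark.sql("CREATE TABLE t (id BIGINT) USING parquet")
   
   // ... while spark.range produces a non-nullable id column. Appending still
   // works: non-nullable data is trivially valid for a nullable column.
   spark.range(5).write.mode("append").saveAsTable("t")
   ```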




[GitHub] [spark] cloud-fan commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
cloud-fan commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319573470

   shall we change `FileSourceMetadataAttribute`? I think the metadata column (at least for file sources) is always not nullable.




[GitHub] [spark] HeartSaVioR commented on pull request #38683: [SPARK-41151][SQL][3.3] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1319473845

   cc. @cloud-fan 




[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027524453


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   I'm curious why it's not a parameter. For me, the second line is a parameter because it is **a part of the first parameter**. And Apache Spark usually splits parameter definition sections from method definition sections, doesn't it?
   > The second line is not a parameter but a continuation of the test name string.
   
   BTW, one thing I agree with @HeartSaVioR on is that we respect the nearest style in the code in general. So I want to ask @Yaohua628 and @HeartSaVioR explicitly: do you want to make this an official Apache Spark coding style? What I'm asking about is `the indentation on test case name splitting`.







[GitHub] [spark] HeartSaVioR closed pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
HeartSaVioR closed pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent
URL: https://github.com/apache/spark/pull/38683




[GitHub] [spark] Yaohua628 commented on pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1322879946

   > @Yaohua628 Looks like there is a conflict on 3.3 branch. Could you please submit a new PR against 3.3? Thanks in advance!
   
   Thanks! Please find here: https://github.com/apache/spark/pull/38748




[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027503203


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/namedExpressions.scala:
##########
@@ -464,11 +464,13 @@ object FileSourceMetadataAttribute {
 
   val FILE_SOURCE_METADATA_COL_ATTR_KEY = "__file_source_metadata_col"
 
-  def apply(name: String, dataType: DataType, nullable: Boolean = true): AttributeReference =
-    AttributeReference(name, dataType, nullable,
+  def apply(name: String, dataType: DataType): AttributeReference = {
+    // Metadata column for file sources is always not nullable.
+    AttributeReference(name, dataType, nullable = false,
       new MetadataBuilder()
         .putBoolean(METADATA_COL_ATTR_KEY, value = true)
         .putBoolean(FILE_SOURCE_METADATA_COL_ATTR_KEY, value = true).build())()
+  }

Review Comment:
   Do we need to add this `{}`?
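   
   For reference, the single-expression form without braces (same body as the diff above) would be:
   
   ```scala
   def apply(name: String, dataType: DataType): AttributeReference =
     // Metadata column for file sources is always not nullable.
     AttributeReference(name, dataType, nullable = false,
       new MetadataBuilder()
         .putBoolean(METADATA_COL_ATTR_KEY, value = true)
         .putBoolean(FILE_SOURCE_METADATA_COL_ATTR_KEY, value = true).build())()
   ```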





[GitHub] [spark] dongjoon-hyun commented on a diff in pull request #38683: [SPARK-41151][SQL] Keep built-in file `_metadata` column nullable value consistent

Posted by GitBox <gi...@apache.org>.
dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027605394


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
       }
     }
   }
+
+  metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+    "between analyzed and executed", schema) { (df, _, _) =>

Review Comment:
   Ah, you are right. My bad.


