Posted to commits@hudi.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2021/08/13 09:45:00 UTC

[jira] [Commented] (HUDI-2307) Fix: do not require a primary key when using delete_partition with the Spark datasource

    [ https://issues.apache.org/jira/browse/HUDI-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17398551#comment-17398551 ] 

ASF GitHub Bot commented on HUDI-2307:
--------------------------------------

liujinhui1994 opened a new pull request #3469:
URL: https://github.com/apache/hudi/pull/3469


   ## *Tips*
   - *Thank you very much for contributing to Apache Hudi.*
   - *Please review https://hudi.apache.org/contribute/how-to-contribute before opening a pull request.*
   
   ## What is the purpose of the pull request
   
   When delete_partition is used through the Spark datasource, the write fails if the record key value is missing from the DataFrame. delete_partition should not depend on the primary key.
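   
   For context, a minimal sketch of the call pattern that currently fails (the table name, base path, and the `uuid`/`dt` column names are illustrative assumptions, not taken from this PR):
   
   ```scala
   import org.apache.spark.sql.{SaveMode, SparkSession}
   
   val spark = SparkSession.builder().appName("delete-partition-example").getOrCreate()
   import spark.implicits._
   
   // The rows only carry the partition value; the record key column ("uuid") is absent,
   // which is what currently makes the key generator throw HoodieKeyException.
   val partitionsToDrop = Seq("2021-08-01", "2021-08-02").toDF("dt")
   
   partitionsToDrop.write.format("hudi").
     option("hoodie.table.name", "my_table").
     option("hoodie.datasource.write.operation", "delete_partition").
     option("hoodie.datasource.write.recordkey.field", "uuid").
     option("hoodie.datasource.write.partitionpath.field", "dt").
     mode(SaveMode.Append).
     save("/tmp/hudi/my_table")
   ```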
   
   ## Brief change log
   
   *(for example:)*
     - *Modify AnnotationLocation checkstyle rule in checkstyle.xml*
   
   ## Verify this pull request
   
   *(Please pick either of the following options)*
   
   This pull request is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This pull request is already covered by existing tests, such as *(please describe tests)*.
   
   (or)
   
   This change added tests and can be verified as follows:
   
   *(example:)*
   
     - *Added integration tests for end-to-end.*
     - *Added HoodieClientWriteTest to verify the change.*
     - *Manually verified the change by running a job locally.*
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
    - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Fix: do not require a primary key when using delete_partition with the Spark datasource
> ----------------------------------------------------------------------------------------
>
>                 Key: HUDI-2307
>                 URL: https://issues.apache.org/jira/browse/HUDI-2307
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: liujinhui
>            Priority: Major
>             Fix For: 0.9.0
>
>
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyException: recordKey value: "null" for field: "uuid" cannot be null or empty.
>   at org.apache.hudi.keygen.KeyGenUtils.getRecordKey(KeyGenUtils.java:141)
>   at org.apache.hudi.keygen.SimpleAvroKeyGenerator.getRecordKey(SimpleAvroKeyGenerator.java:50)
>   at org.apache.hudi.keygen.SimpleKeyGenerator.getRecordKey(SimpleKeyGenerator.java:58)
>   at org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62)
>   at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$5.apply(HoodieSparkSqlWriter.scala:195)
>   at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$5.apply(HoodieSparkSqlWriter.scala:195)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
> {code}
>  
> When using delete_partition, the write should not rely on the primary key; only the partition paths matter.
>  
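> A rough sketch of the intended behaviour (illustrative only, not the actual patch): for delete_partition the writer only needs the partition paths of the incoming DataFrame, so record-key generation can be skipped entirely. Assuming a single partition column named "dt":
> {code:scala}
> import org.apache.spark.sql.DataFrame
> 
> // Derive the partition paths to drop without ever touching the record key field.
> // The "dt" column and the helper name are illustrative, not the real implementation.
> def partitionsToDelete(df: DataFrame): Seq[String] =
>   df.select("dt").distinct().collect().map(_.getString(0)).toSeq
> {code}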



--
This message was sent by Atlassian Jira
(v8.3.4#803005)