Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/01/30 08:00:00 UTC

[jira] [Work logged] (HIVE-25912) Drop external table throw NPE

     [ https://issues.apache.org/jira/browse/HIVE-25912?focusedWorklogId=717643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-717643 ]

ASF GitHub Bot logged work on HIVE-25912:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jan/22 07:59
            Start Date: 30/Jan/22 07:59
    Worklog Time Spent: 10m 
      Work Description: baifachuan opened a new pull request #2987:
URL: https://github.com/apache/hive/pull/2987


   ### What changes were proposed in this pull request?
   
   Modify the FileUtils.checkDeletePermission function to add this check:
   
   ```java
   if (path.getParent() == null) {
     // no file/dir to be deleted: the path is the root dir, and a table location should not be the root dir
     return;
   }
   ```
   If path.getParent() is null, we can be sure the location is the ROOT path, so there is no file/dir to be deleted.
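   
   For context, a minimal sketch (simplified, not copied verbatim from the patch) of where this guard sits relative to the existing sticky-bit check in FileUtils.checkDeletePermission:
   
   ```java
   // sketch of org.apache.hadoop.hive.common.FileUtils#checkDeletePermission
   if (path.getParent() == null) {
     // path is the filesystem root: there is no parent dir to inspect and nothing above it to delete
     return;
   }
   // existing logic: check if sticky bit is set on the parent dir
   FileStatus parStatus = fs.getFileStatus(path.getParent());
   if (!shims.hasStickyBit(parStatus.getPermission())) {
     // no sticky bit, so write permission on parent dir is sufficient
     return;
   }
   ```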
   
   
   ### Why are the changes needed?
   If I create an external table whose location is the ROOT path, the table is created successfully, but dropping it throws an NPE, so the table can never be dropped.
   
   This is not good behavior.
   
   ### Does this PR introduce _any_ user-facing change?
   no
   
   
   ### How was this patch tested?
   mvn test -Dtest=SomeTest -pl common
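   
   As an additional illustration (a hypothetical test class, not the one added by this PR), a JUnit sketch of the condition the guard relies on: only a root location has a null parent, so only that case short-circuits the permission check.
   
   ```java
   import static org.junit.Assert.assertNotNull;
   import static org.junit.Assert.assertNull;
   
   import org.apache.hadoop.fs.Path;
   import org.junit.Test;
   
   public class RootLocationParentTest {
     @Test
     public void rootLocationHasNoParent() {
       // root locations: getParent() is null, which previously led to the NPE
       assertNull(new Path("hdfs://emr-master-1:8020/").getParent());
       assertNull(new Path("s3a://bucketname/").getParent());
       // non-root location: getParent() is non-null, so the existing checks still run
       assertNotNull(new Path("hdfs://emr-master-1:8020/warehouse").getParent());
     }
   }
   ```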
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 717643)
    Remaining Estimate: 95h 50m  (was: 96h)
            Time Spent: 10m

> Drop external table throw NPE
> -----------------------------
>
>                 Key: HIVE-25912
>                 URL: https://issues.apache.org/jira/browse/HIVE-25912
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 3.1.2
>         Environment: Hive version: 3.1.2
>            Reporter: Fachuan Bai
>            Assignee: Fachuan Bai
>            Priority: Major
>              Labels: metastore
>         Attachments: hive bugs.png
>
>   Original Estimate: 96h
>          Time Spent: 10m
>  Remaining Estimate: 95h 50m
>
> I created the external Hive table using this command:
>  
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 'hdfs://emr-master-1:8020/';
> {code}
>  
> The table was created successfully, but when I drop the table it throws this NPE:
>  
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.NullPointerException) (state=08S01,code=1){code}
>  
> The same bug can be reproduced on other object storage file systems, such as S3 or TOS:
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 's3a://bucketname/'; // 'tos://bucketname/'{code}
>  
> Looking at the source code, I found the relevant check in
>  common/src/java/org/apache/hadoop/hive/common/FileUtils.java
> {code:java}
> // check if sticky bit is set on the parent dir
> FileStatus parStatus = fs.getFileStatus(path.getParent());
> if (!shims.hasStickyBit(parStatus.getPermission())) {
>   // no sticky bit, so write permission on parent dir is sufficient
>   // no further checks needed
>   return;
> }{code}
>  
> Because I set the table location to the HDFS root path (hdfs://emr-master-1:8020/), path.getParent() returns null, which causes the NPE.
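> A minimal sketch (for illustration only, not part of the patch) of the Hadoop Path behavior behind this: getParent() returns null exactly when the path is a filesystem root.
>  
> {code:java}
> import org.apache.hadoop.fs.Path;
> 
> public class RootParentDemo {
>   public static void main(String[] args) {
>     // the parent of a root location is null, so fs.getFileStatus(path.getParent()) hits an NPE
>     System.out.println(new Path("hdfs://emr-master-1:8020/").getParent());          // null
>     // a non-root location has a normal parent, so the sticky-bit check works as intended
>     System.out.println(new Path("hdfs://emr-master-1:8020/warehouse").getParent()); // hdfs://emr-master-1:8020/
>   }
> }
> {code}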
> I think there are four possible solutions to fix the bug:
>  # Modify the create table function: if the location is the root dir, fail the table creation.
>  # Modify the FileUtils.checkDeletePermission function: check path.getParent(), and if it is null, return so the drop succeeds.
>  # Modify the RangerHiveAuthorizer.checkPrivileges function of the Hive Ranger plugin (in the Ranger repo): if the location is the root dir, fail the table creation.
>  # Modify the HDFS Path object so that if the URI is the root dir, path.getParent() does not return null.
> I recommend the first or second method. Any suggestions? Thanks.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)