Posted to issues@hawq.apache.org by "Ruilong Huo (JIRA)" <ji...@apache.org> on 2019/06/02 04:02:00 UTC

[jira] [Updated] (HAWQ-1722) Core dump due to lock not released before reporting errors when exceeding MaxAORelSegFileStatus

     [ https://issues.apache.org/jira/browse/HAWQ-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ruilong Huo updated HAWQ-1722:
------------------------------
    Component/s: Core

> Core dump due to lock not released before reporting errors when exceeding MaxAORelSegFileStatus
> -----------------------------------------------------------------------------------------------
>
>                 Key: HAWQ-1722
>                 URL: https://issues.apache.org/jira/browse/HAWQ-1722
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 2.4.0.0
>            Reporter: Ruilong Huo
>            Assignee: Ruilong Huo
>            Priority: Major
>
> Since the lock is not released before reporting the error, the subsequent transaction abort acquires the lock again and panics: acquiring an RWLock twice within one process causes a panic.
> There are two occurrences of this bug in this function: one for the AO path and one for the Parquet path.
> ```
> 			if (id == NEXT_END_OF_LIST)
> 			{
> 				pfree(allfsinfoParquet);
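> 				/* BUG: the lock taken earlier in this function is still
> 				 * held here; ereport(ERROR) enters transaction abort,
> 				 * which acquires the same lock again and panics. */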
> 				ereport(ERROR, (errmsg("cannot open more than %d "
> 				      "append-only table segment "
> 				      "files concurrently",
> 				      MaxAORelSegFileStatus)));
> 				return false;
> 			}
> ```
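> A minimal sketch of the fix pattern (not the committed patch): release the lock before calling ereport(ERROR). The lock name AOSegFileLock is a hypothetical placeholder for whichever lock this function actually holds; LWLockRelease is the standard PostgreSQL/HAWQ lock-release primitive.
> ```
> 			if (id == NEXT_END_OF_LIST)
> 			{
> 				pfree(allfsinfoParquet);
> 				/* Release the lock first: ereport(ERROR) longjmps into
> 				 * transaction abort, which would otherwise try to take
> 				 * the same RWLock again and panic. */
> 				LWLockRelease(AOSegFileLock); /* hypothetical lock name */
> 				ereport(ERROR, (errmsg("cannot open more than %d "
> 				      "append-only table segment "
> 				      "files concurrently",
> 				      MaxAORelSegFileStatus)));
> 				return false; /* unreachable: ereport(ERROR) does not return */
> 			}
> ```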



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)