Posted to issues@hawq.apache.org by "Ruilong Huo (JIRA)" <ji...@apache.org> on 2019/06/02 04:01:00 UTC
[jira] [Assigned] (HAWQ-1722) Core dump due to lock is not released before reporting errors when exceeding MaxAORelSegFileStatus
[ https://issues.apache.org/jira/browse/HAWQ-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ruilong Huo reassigned HAWQ-1722:
---------------------------------
Assignee: Ruilong Huo (was: Radar Lei)
> Core dump due to lock is not released before reporting errors when exceeding MaxAORelSegFileStatus
> --------------------------------------------------------------------------------------------------
>
> Key: HAWQ-1722
> URL: https://issues.apache.org/jira/browse/HAWQ-1722
> Project: Apache HAWQ
> Issue Type: Bug
> Reporter: Ruilong Huo
> Assignee: Ruilong Huo
> Priority: Major
>
> Since the lock is not released before reporting the error, the transaction abort path tries to acquire it again, and acquiring a RWLock twice in one process leads to a panic.
> There are two occurrences of this bug in this function: one for AO tables and one for Parquet tables.
> ```
> if (id == NEXT_END_OF_LIST)
> {
>     pfree(allfsinfoParquet);
>     ereport(ERROR, (errmsg("cannot open more than %d "
>                            "append-only table segment "
>                            "files concurrently",
>                            MaxAORelSegFileStatus)));
>     return false;
> }
> ```
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)