Posted to issues@kudu.apache.org by "Hao Hao (JIRA)" <ji...@apache.org> on 2019/04/04 19:27:00 UTC

[jira] [Commented] (KUDU-2718) master_failover-itest when HMS is enabled is flaky

    [ https://issues.apache.org/jira/browse/KUDU-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16810218#comment-16810218 ] 

Hao Hao commented on KUDU-2718:
-------------------------------

DropTable in HMS is a synchronous call, so I think the drop should be reflected immediately once it succeeds. Could it be that DropTable had not yet taken place when CreateTable was retried? But I don't see how DropTable in HMS could take up to ~2 minutes.

I also looped the [test 2000 times|http://dist-test.cloudera.org/job?job_id=hao.hao.1554350086.94333] but failed to reproduce the reported error. Instead I hit this error: {noformat}/data/1/hao/kudu/src/kudu/integration-tests/master_failover-itest.cc:460: Failure
Failed
Bad status: Invalid argument: Error creating table default.table_0 on the master: not enough live tablet servers to create a table with the requested replication factor 3; 2 tablet servers are alive{noformat}
which looks like the issue described in KUDU-1358. Until KUDU-1358 is fixed, we can deflake this test by retrying on that error, e.g. as sketched below.
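
A minimal sketch of such a retry wrapper, assuming the standard Kudu C++ client API and test utilities; the helper name CreateTableWithRetry, the "key" partition column, the 60-second deadline, and the 500 ms backoff are made up for illustration and are not the actual test code:
{noformat}
#include <memory>
#include <string>
#include <vector>

#include "kudu/client/client.h"
#include "kudu/util/monotime.h"
#include "kudu/util/status.h"

using kudu::MonoDelta;
using kudu::MonoTime;
using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduSchema;
using kudu::client::KuduTableCreator;

// Hypothetical helper: retry CreateTable while the master transiently reports
// too few live tablet servers (e.g. right after a failover), up to a deadline.
Status CreateTableWithRetry(KuduClient* client,
                            const std::string& table_name,
                            const KuduSchema& schema) {
  const MonoTime deadline = MonoTime::Now() + MonoDelta::FromSeconds(60);
  while (true) {
    std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
    Status s = creator->table_name(table_name)
                      .schema(&schema)
                      .set_range_partition_columns({ "key" })  // "key" is a placeholder column
                      .num_replicas(3)
                      .Create();
    // Only the transient "not enough live tablet servers" error is retried;
    // success and all other errors are returned to the caller.
    if (s.ok() ||
        !s.IsInvalidArgument() ||
        s.ToString().find("not enough live tablet servers") == std::string::npos ||
        MonoTime::Now() > deadline) {
      return s;
    }
    kudu::SleepFor(MonoDelta::FromMilliseconds(500));
  }
}
{noformat}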

> master_failover-itest when HMS is enabled is flaky
> --------------------------------------------------
>
>                 Key: KUDU-2718
>                 URL: https://issues.apache.org/jira/browse/KUDU-2718
>             Project: Kudu
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 1.9.0
>            Reporter: Adar Dembo
>            Assignee: Hao Hao
>            Priority: Major
>         Attachments: master_failover-itest.1.txt
>
>
> This was a failure in HmsConfigurations/MasterFailoverTest.TestDeleteTableSync/1, where GetParam() = 2, but it's likely possible in every multi-master test with HMS integration enabled.
> It looks like there was a leader master election at the time that the client tried to create the table being tested. The master managed to create the table in HMS, but then there was a failure replicating in Raft because another master was elected leader. So the client retried the request on a different master, but the HMS piece of CreateTable failed because the HMS already knew about the table.
> Thing is, there's code to roll back the HMS table creation if this happens, so I don't see why the retried CreateTable failed at the HMS with "table already exists". Perhaps this is a case where even though we succeeded in dropping the table from HMS, it doesn't reflect that immediately?
> I'm attaching the full log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)