Posted to dev@flink.apache.org by Jingsong Li <ji...@gmail.com> on 2022/04/28 06:50:30 UTC

[VOTE] Apache Flink Table Store 0.1.0, release candidate #1

Hi everyone,

Please review and vote on release candidate #1 for version 0.1.0 of
Apache Flink Table Store, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

**Release Overview**

As an overview, the release consists of the following:
a) Table Store canonical source distribution, to be deployed to the
release repository at dist.apache.org
b) Maven artifacts to be deployed to the Maven Central Repository

**Staging Areas to Review**

The staging areas containing the above-mentioned artifacts are as follows,
for your review (see also the release verification guide [1]):
* All artifacts for a) can be found in the corresponding dev
repository at dist.apache.org [2]
* All artifacts for b) can be found at the Apache Nexus Repository [3]
* The pre-bundled binaries jar [4] works with the quick start guide [5];
a minimal sanity check is sketched below
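
For a quick sanity check of [4] against the quick start [5], something along
the following lines can be run in the SQL client; the table name, path, and
options here are only illustrative:

-- Create a simple table store table, write a few rows, and read them back.
create table if not exists smoke_test (
  word STRING,
  cnt BIGINT
) with (
  'path' = 'file:///tmp/table-store-smoke',
  'bucket' = '1'
);

insert into smoke_test values ('hello', 1), ('flink', 2);

select * from smoke_test;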

All artifacts are signed with the key
2C2B6A653B07086B65E4369F7C76245E0A318150 [6]

Other links for your review:
* JIRA release notes [7]
* source code tag "release-0.1.0-rc1" [8]
* PR to update the website Downloads page to include Table Store
links [9]

**Vote Duration**

The vote will be open for at least 72 hours.
The release is adopted by majority approval, with at least 3 affirmative PMC votes.

Best,
Jingsong Lee

[1] https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Table+Store+Release
[2] https://dist.apache.org/repos/dist/dev/flink/flink-table-store-0.1.0-rc1/
[3] https://repository.apache.org/content/repositories/orgapacheflink-1501/
[4] https://repository.apache.org/content/repositories/orgapacheflink-1501/org/apache/flink/flink-table-store-dist/0.1.0/flink-table-store-dist-0.1.0.jar
[5] https://nightlies.apache.org/flink/flink-table-store-docs-release-0.1/docs/try-table-store/quick-start/
[6] https://dist.apache.org/repos/dist/release/flink/KEYS
[7] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351234
[8] https://github.com/apache/flink-table-store/tree/release-0.1.0-rc1
[9] https://github.com/apache/flink-web/pull/531

Re: [VOTE] Apache Flink Table Store 0.1.0, release candidate #1

Posted by Jingsong Li <ji...@gmail.com>.
Thanks Caizhi for the quick validation.

Let's cancel this RC and wait for your fix.

Best,
Jingsong


Re: [VOTE] Apache Flink Table Store 0.1.0, release candidate #1

Posted by Caizhi Weng <ts...@gmail.com>.
Hi all!

-1 for this release. Currently we cannot write array and map types into the
table store due to this commit
<https://github.com/apache/flink-table-store/commit/3bf8cde932b0f8512b348c86eb412a483039804c>.
Running the following SQL in the SQL client throws an exception:

create table if not exists tstore2 (
  a ARRAY<STRING>,
  b MAP<BIGINT, STRING>
) with (
  'path' = 'hdfs:///tstore2',
  'bucket' = '4'
);

insert into tstore2 values
  (array['hi', 'hello', cast(null as string), 'test'],
   map[1, 'A', 2, 'BB', 100, 'CCC']),
  (cast(null as array<string>), cast(null as map<bigint, string>));

The exception stack is:

Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
    at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
Caused by: java.lang.NoClassDefFoundError: org/apache/flink/table/planner/plan/utils/SortUtil$
    at org.apache.flink.table.planner.codegen.GenerateUtils$.generateCompare(GenerateUtils.scala:675)
    at org.apache.flink.table.planner.codegen.GenerateUtils$.$anonfun$generateRowCompare$1(GenerateUtils.scala:828)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:194)
    at org.apache.flink.table.planner.codegen.GenerateUtils$.generateRowCompare(GenerateUtils.scala:802)
    at org.apache.flink.table.planner.codegen.sort.ComparatorCodeGenerator$.gen(ComparatorCodeGenerator.scala:53)
    at org.apache.flink.table.planner.codegen.sort.ComparatorCodeGenerator.gen(ComparatorCodeGenerator.scala)
    at org.apache.flink.table.store.codegen.CodeGeneratorImpl.generateRecordComparator(CodeGeneratorImpl.java:60)
    at org.apache.flink.table.store.codegen.CodeGenUtils.generateRecordComparator(CodeGenUtils.java:67)
    at org.apache.flink.table.store.file.FileStoreImpl.<init>(FileStoreImpl.java:67)
    at org.apache.flink.table.store.connector.TableStore.buildFileStore(TableStore.java:220)
    at org.apache.flink.table.store.connector.TableStore.access$100(TableStore.java:81)
    at org.apache.flink.table.store.connector.TableStore$SinkBuilder.build(TableStore.java:415)
    at org.apache.flink.table.store.connector.sink.TableStoreSink.lambda$getSinkRuntimeProvider$0(TableStoreSink.java:143)
    at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.applySinkProvider(CommonExecSink.java:446)
    at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:192)
    at org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:67)
    at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
    at org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:82)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:81)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:181)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1656)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:782)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeModifyOperations$4(LocalExecutor.java:222)
    at org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.executeModifyOperations(LocalExecutor.java:222)
    at org.apache.flink.table.client.cli.CliClient.callInserts(CliClient.java:600)
    at org.apache.flink.table.client.cli.CliClient.callInsert(CliClient.java:589)
    at org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:443)
    at org.apache.flink.table.client.cli.CliClient.executeOperation(CliClient.java:373)
    at org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:330)
    at org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:281)
    at org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:229)
    at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
    at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
    ... 1 more
Caused by: java.lang.ClassNotFoundException: org.apache.flink.table.planner.plan.utils.SortUtil$
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at org.apache.flink.core.classloading.ComponentClassLoader.loadClassFromOwnerOnly(ComponentClassLoader.java:163)
    at org.apache.flink.core.classloading.ComponentClassLoader.loadClassFromComponentFirst(ComponentClassLoader.java:157)
    at org.apache.flink.core.classloading.ComponentClassLoader.loadClass(ComponentClassLoader.java:103)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 48 more

Sorry for introducing this bug. I plan to fix it very soon, and we also need
an e2e test case covering all supported data types.
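
As a rough sketch only (the table name, path, and exact type list are
illustrative, not the final test), such a case could create a table spanning
primitive and nested types, insert rows with both values and NULLs, and read
everything back:

-- Illustrative coverage table; the real e2e test should enumerate every
-- type the table store supports.
create table if not exists type_coverage (
  f_boolean BOOLEAN,
  f_int INT,
  f_bigint BIGINT,
  f_double DOUBLE,
  f_decimal DECIMAL(10, 2),
  f_string STRING,
  f_timestamp TIMESTAMP(3),
  f_array ARRAY<STRING>,
  f_map MAP<BIGINT, STRING>
) with (
  'path' = 'hdfs:///type_coverage',
  'bucket' = '1'
);

-- One row of non-null values (including a null array element) and one row
-- of NULLs for every column.
insert into type_coverage values
  (true, 1, 10, 1.5, cast(12.34 as decimal(10, 2)), 'hi',
   timestamp '2022-04-28 00:00:00',
   array['hi', cast(null as string)], map[1, 'A', 2, 'BB']),
  (cast(null as boolean), cast(null as int), cast(null as bigint),
   cast(null as double), cast(null as decimal(10, 2)), cast(null as string),
   cast(null as timestamp(3)),
   cast(null as array<string>), cast(null as map<bigint, string>));

-- Read back and verify the rows round-trip unchanged.
select * from type_coverage;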

