Posted to dev@pinot.apache.org by Pinot Slack Email Digest <sn...@apache.org> on 2021/03/12 02:00:14 UTC

Apache Pinot Daily Email Digest (2021-03-11)

### _#general_

  
 **@rrepaka123:** @rrepaka123 has joined the channel  
 **@jainendra1607tarun:** @jainendra1607tarun has joined the channel  
 **@ken:** If we want to get the total number of groups for a `group by`, I
assume currently we have to do a separate `distinctcount` or
`distinctcounthll`, right? But if the group by uses multiple columns, what’s
the best approach to getting this total group count?  
**@ken:** We can do `select distinctcount(concat(col1, col2, '|'))` and that
works, but isn’t very fast.  
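For illustration, a minimal sketch of that workaround, assuming a hypothetical table `myTable` with grouping columns `col1` and `col2` (the `'|'` separator only keeps different column pairs from colliding after concatenation):
```
-- Exact distinct group count (works, but slower):
SELECT DISTINCTCOUNT(CONCAT(col1, col2, '|')) FROM myTable

-- Approximate distinct group count via HyperLogLog (faster):
SELECT DISTINCTCOUNTHLL(CONCAT(col1, col2, '|')) FROM myTable
```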
**@jackie.jxt:** Is the total number of groups very large? If not, you can set
a relatively high `LIMIT` for the `GROUP BY` query or `DISTINCT` query to get
all the groups  
**@ken:** It can be large (e.g. > 16M for a test query I just tried)  
**@jackie.jxt:** Do you need accurate result or approximate is fine?  
**@ken:** approximate is OK  
**@jackie.jxt:** Currently the `distinctCount` family only supports a single
column; we can probably extend it to support multiple columns  
**@jackie.jxt:** `distinctcounthll(concat(col1, col2, '|'))` should be faster  
**@ken:** Yes, that’s what I’ve been using. But I would expect built-in
support for multiple columns to be faster still, yes?  
**@ken:** And there’s no standard SQL support for returning the total number
of groups from the original aggregation query, right?  
**@jackie.jxt:** In SQL, I think you still need to use `count(distinct col1,
col2)` to get the distinct count  
**@jackie.jxt:** Which is mostly equivalent to `DISTINCT(col1, col2)` with
high limit in Pinot  
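As a sketch of that equivalence (table and column names are hypothetical; the `LIMIT` must be set higher than the expected number of groups, and the client counts the returned rows):
```
-- In Pinot today: fetch all distinct pairs with a high limit, count rows client-side
SELECT DISTINCT col1, col2 FROM myTable LIMIT 20000000

-- The count(distinct ...) form mentioned above, as supported in some SQL dialects
SELECT COUNT(DISTINCT col1, col2) FROM myTable
```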
**@g.kishore:** `distinctcounthll(concat(col1, col2, '|'))` will suffer from
the overhead of the concat scalar UDF  
**@g.kishore:** It may be better to enhance distinctCountHLL to take multiple
columns  
**@ken:** Yes, that would be more performant. Should I file an issue?  
**@g.kishore:** Yes  
**@g.kishore:** And you can give it a shot, it’s not hard  

###  _#random_

  
 **@rrepaka123:** @rrepaka123 has joined the channel  
 **@jainendra1607tarun:** @jainendra1607tarun has joined the channel  

###  _#troubleshooting_

  
 **@humengyuk18:** Hi team, I’m using Superset to query a Pinot table, but the
Explore button is disabled for the Pinot table; other data sources work fine.
Anyone know the reason?  
**@fx19880617:** Hmm, does Run work?  
**@fx19880617:** Do you know what call is triggered for the other data sources
when you click on Explore?  
**@humengyuk18:** Run works, and it can return the right result.  
**@humengyuk18:** I don’t know, maybe it’s because I’m using a pre-1.0 version
of Superset? What version of Superset officially supports Pinot?  
**@fx19880617:** it’s been supported from long back, 1.0 should have the support  
**@fx19880617:** as long as you installed the pinotdb lib  
 **@rrepaka123:** @rrepaka123 has joined the channel  
 **@jainendra1607tarun:** @jainendra1607tarun has joined the channel  
 **@falexvr:** Guys, is it possible to have realtime tables in pinot streaming
data from two different kafka clusters at the same time?  
 **@falexvr:** I mean, one table per kafka cluster  
 **@falexvr:** Not consuming from two different sources into the same table  
 **@dlavoie:** Yes, kafka configuration is defined per table  
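A minimal sketch of what "per table" means here: each realtime table's `streamConfigs` carries its own broker list (and, for the Confluent decoder, its own schema registry URL), so two tables can point at two different Kafka clusters. Topic, host, and registry names below are hypothetical:
```
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.topic.name": "events_cluster_b",
  "stream.kafka.broker.list": "kafka-cluster-b:9092",
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
  "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",
  "stream.kafka.decoder.prop.schema.registry.rest.url": "https://schema-registry-b:8081"
}
```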
 **@falexvr:** Does it use a cached schema registry? For some reason our
second table is not able to reach the second Kafka cluster's schema
registry  
 **@dlavoie:** Good question to which I don’t have the answer :confused:  
 **@dlavoie:** What error are you observing?  
 **@1705ayush:** I am trying to run Presto using the starburst-presto
Docker image and connect it to an existing Pinot cluster. I start Presto with:
```
docker run \
  --network pinot-demo \
  --name=presto-starburst \
  -p 8000:8080 \
  -d starburstdata/presto:350-e.3
```
Then I tried adding the following pinot.properties file at each of these locations,
one by one: `/etc/presto/catalog/`, `data/presto/etc/catalog/`, `/usr/lib/presto/etc/catalog/`
```
# pinot.properties
connector.name=pinot
pinot.controller-urls=pinot-controller:9000
pinot.controller-rest-service=pinot-controller:9000
pinot.limit-large-for-segment=1
pinot.allow-multiple-aggregations=true
pinot.use-date-trunc=true
pinot.infer-date-type-in-schema=true
pinot.infer-timestamp-type-in-schema=true
```
I get the following errors each time:
```
6 errors
io.airlift.bootstrap.ApplicationConfigurationException: Configuration errors:

1) Error: Configuration property 'pinot.allow-multiple-aggregations' was not used
2) Error: Configuration property 'pinot.controller-rest-service' was not used
3) Error: Configuration property 'pinot.infer-date-type-in-schema' was not used
4) Error: Configuration property 'pinot.infer-timestamp-type-in-schema' was not used
5) Error: Configuration property 'pinot.limit-large-for-segment' was not used
6) Error: Configuration property 'pinot.use-date-trunc' was not used

6 errors
    at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:239)
    at io.prestosql.pinot.PinotConnectorFactory.create(PinotConnectorFactory.java:72)
    at io.prestosql.connector.ConnectorManager.createConnector(ConnectorManager.java:354)
    at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:211)
    at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:203)
    at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:189)
    at io.prestosql.metadata.StaticCatalogStore.loadCatalog(StaticCatalogStore.java:88)
    at io.prestosql.metadata.StaticCatalogStore.loadCatalogs(StaticCatalogStore.java:68)
    at io.prestosql.server.Server.doStart(Server.java:119)
    at io.prestosql.server.Server.lambda$start$0(Server.java:73)
    at io.prestosql.$gen.Presto_350_e_3____20210311_155257_1.run(Unknown Source)
    at io.prestosql.server.Server.start(Server.java:73)
    at com.starburstdata.presto.StarburstPresto.main(StarburstPresto.java:48)
```
Any suggestions on what I could be doing wrong?  
 **@g.kishore:** @elon.azoulay might be able to help you  
**@g.kishore:** Some of those configs are old and probably specific to prestodb,
not Trino  
**@elon.azoulay:** Hi, yes, for information on the Starburst Pinot connector I
would ask in the Trino #troubleshooting Slack. Those properties
are not in the Trino Pinot connector.  
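For comparison, a minimal catalog file that the Trino/Starburst Pinot connector does accept; it keeps only the connector name and controller URLs, and anything beyond that should be checked against the Trino docs for your version:
```
# etc/catalog/pinot.properties
connector.name=pinot
pinot.controller-urls=pinot-controller:9000
```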
 **@falexvr:** @dlavoie, yeah... it's a cached schema registry, though I'm not
sure it's the source of our error  
**@dlavoie:** when you say: `second table is not being able to reach the
second kafka cluster's schema registry`. What’s the error you are observing?  
**@falexvr:** ```ERROR [LLRealtimeSegmentDataManager_mls_streams_stream_video_data_updated_v1__0__2__20210311T1512Z] [mls_streams_stream_video_data_updated_v1__0__2__20210311T1512Z] Exception while in work
org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 514
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:1.8.0_282]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:324) ~[?:1.8.0_282]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) ~[?:1.8.0_282]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:262) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:1.8.0_282]
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377) ~[?:1.8.0_282]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:1.8.0_282]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422) ~[?:1.8.0_282]
    at sun.security.ssl.TransportContext.dispatch(TransportContext.java:182) ~[?:1.8.0_282]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:149) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1143) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1054) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) ~[?:1.8.0_282]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1570) ~[?:1.8.0_282]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498) ~[?:1.8.0_282]
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352) ~[?:1.8.0_282]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:212) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:486) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:479) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:177) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:235) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:107) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:79) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:114) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:120) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:53) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:471) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:402) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:538) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:456) ~[?:1.8.0_282]
    at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323) ~[?:1.8.0_282]
    at sun.security.validator.Validator.validate(Validator.java:271) ~[?:1.8.0_282]
    at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315) ~[?:1.8.0_282]
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:223) ~[?:1.8.0_282]
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:129) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:638) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473) ~[?:1.8.0_282]
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369) ~[?:1.8.0_282]
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377) ~[?:1.8.0_282]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:1.8.0_282]
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422) ~[?:1.8.0_282]
    at sun.security.ssl.TransportContext.dispatch(TransportContext.java:182) ~[?:1.8.0_282]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:149) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1143) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1054) ~[?:1.8.0_282]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) ~[?:1.8.0_282]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1570) ~[?:1.8.0_282]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498) ~[?:1.8.0_282]
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480) ~[?:1.8.0_282]
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352) ~[?:1.8.0_282]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:212) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:486) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:479) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:177) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:235) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:107) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:79) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:114) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:120) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:53) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:471) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e]```  
**@falexvr:**  
**@dlavoie:** ```sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target```  
**@falexvr:** Yep, but all the certs are in place  
**@dlavoie:** You need to add the schema registry root CA to the JKS truststore of
your JVM  
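A sketch of that step, assuming you have a PEM copy of the registry's root CA and are importing into the JVM's default cacerts truststore (file name, alias, and truststore path are hypothetical; `changeit` is the default cacerts password):
```
keytool -importcert \
  -alias schema-registry-b-ca \
  -file schema-registry-b-ca.pem \
  -keystore $JAVA_HOME/jre/lib/security/cacerts \
  -storepass changeit -noprompt
```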
**@falexvr:** And the table's schema is pointing to the right truststore  
**@dlavoie:** It’s going to be verbose but you can activate the SSL debug logs
so you can confirm what is effectively loaded from the truststore.  
**@dlavoie:** If they are getting mixed, you’ll have a trace  
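A sketch of turning that on for the Pinot server JVM (a standard JDK debug flag for the Java 8 runtime shown in the trace; how you pass JVM options depends on how the server is launched):
```
-Djavax.net.debug=ssl:handshake:trustmanager
```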
**@falexvr:** Nice, all right  
 **@dlavoie:** Unless the decoder is used by all tables, each one should have
its own cached schema registry client  

###  _#segment-write-api_

  
 **@chinmay.cerebro:** @chinmay.cerebro has joined the channel  

###  _#releases_

  
 **@jiatao:** @jiatao has joined the channel  
 **@nachiket.kate:** @nachiket.kate has joined the channel  
 **@mohammedgalalen056:** @mohammedgalalen056 has joined the channel  
 **@nguyenhoanglam1990:** @nguyenhoanglam1990 has joined the channel  
 **@yash.agarwal:** @yash.agarwal has joined the channel  
 **@krishna080:** @krishna080 has joined the channel  
 **@calvin.mwenda:** @calvin.mwenda has joined the channel  