Posted to dev@pinot.apache.org by Pinot Slack Email Digest <sn...@apache.org> on 2021/01/11 02:00:12 UTC

Apache Pinot Daily Email Digest (2021-01-10)

### _#general_

  
 **@romualdo.gobbo:** @romualdo.gobbo has joined the channel  

###  _#random_

  
 **@romualdo.gobbo:** @romualdo.gobbo has joined the channel  

###  _#troubleshooting_

  
 **@gamparohit:** data ingestion stops in a realtime table after the segment
flush threshold time. When I check, no new segments are created and the status
of the one and only segment is shown as CONSUMING.  
**@gamparohit:** this is the table

```json
{
  "tableName": "eventhandler_pinot_REALTIME",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "createdOn",
    "replicasPerPartition": "1",
    "schemaName": "eventhandler_pinot",
    "replication": "1",
    "segmentPushType": "APPEND",
    "segmentPushFrequency": "HOURLY"
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant",
    "tagOverrideConfig": {
      "realtimeConsuming": "DefaultTenant_REALTIME",
      "realtimeCompleted": "DefaultTenant_OFFLINE"
    }
  },
  "tableIndexConfig": {
    "invertedIndexColumns": [],
    "rangeIndexColumns": [],
    "autoGeneratedInvertedIndex": false,
    "createInvertedIndexDuringSegmentGeneration": false,
    "bloomFilterColumns": [],
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "eventhandler_pinot",
      "stream.kafka.broker.list": "**.***.**.***:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "realtime.segment.flush.threshold.rows": "0",
      "realtime.segment.flush.threshold.time": "24h",
      "realtime.segment.flush.segment.size": "100M"
    },
    "noDictionaryColumns": [],
    "onHeapDictionaryColumns": [],
    "varLengthDictionaryColumns": [],
    "enableDefaultStarTree": false,
    "sortedColumn": [],
    "enableDynamicStarTreeCreation": false,
    "aggregateMetrics": false,
    "nullHandlingEnabled": false
  },
  "metadata": {},
  "quota": {},
  "routing": {},
  "query": {},
  "ingestionConfig": {}
}
```
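[Digest note] For context on the three flush settings in the table above: per the Pinot stream-ingestion docs, setting `realtime.segment.flush.threshold.rows` to `0` switches the flush decision to the desired segment size, while `realtime.segment.flush.threshold.time` still forces a commit after the given interval regardless of size. The exact key names and behavior vary by Pinot version, so treat this as a rough sketch of the relevant fragment, not an authoritative reference:

```json
"streamConfigs": {
  "realtime.segment.flush.threshold.rows": "0",
  "realtime.segment.flush.threshold.time": "24h",
  "realtime.segment.flush.segment.size": "100M"
}
```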
**@g.kishore:** Check the logs, there might be some exception while flushing
the segment  
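[Digest note] One way to follow this suggestion is to scan the server log around the expected flush time for errors mentioning the table's segments. A minimal sketch follows; the log path and the two sample log lines are fabricated for illustration (real Pinot messages and file locations depend on your deployment), so adjust both before use:

```shell
# Hypothetical log path; adjust for your deployment (assumption).
LOG="${PINOT_LOG:-pinotServer.log}"

# Fabricated demo lines so this sketch runs standalone;
# in a real cluster the server writes this file itself.
cat > "$LOG" <<'EOF'
2021/01/10 01:00:00 INFO  consuming segment eventhandler_pinot__0__0__20210110T0000Z
2021/01/10 02:00:00 ERROR exception while committing segment eventhandler_pinot__0__0__20210110T0000Z
EOF

# Look for errors or exceptions mentioning the table's segments.
grep -iE "error|exception" "$LOG" | grep "eventhandler_pinot"
```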
 **@romualdo.gobbo:** @romualdo.gobbo has joined the channel  

###  _#pinot-s3_

  
 **@pabraham.usa:** Also, is it a good idea to use deep storage for failover
purposes? I assume there will be a little bit of delay in pulling segments
from, say, S3 and starting the server up?  
 **@g.kishore:** What do you mean by failover purposes?  
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@pinot.apache.org
For additional commands, e-mail: dev-help@pinot.apache.org