Posted to users@pulsar.apache.org by Apache Pulsar Slack <ap...@gmail.com> on 2020/02/02 09:11:03 UTC

Slack digest for #general - 2020-02-02

2020-02-01 20:09:48 UTC - Guilherme Perinazzo: How does the client deal with nacks and batched messages currently? Does it redeliver every message in the batch if you nack it? Since individual messages in a batch don't have a unique ID
----
2020-02-01 23:58:48 UTC - Eugen: Suppose I have a fixed number of _x_ machines that have dedicated lines through which they receive UDP packets that I would like to feed into Pulsar. Does it make sense to use the Pulsar Connector framework (using the Netty source connector) in this case? It would probably be necessary to use function workers that run separately from brokers. Asked differently: what's the advantage of connectors vs. just running standalone producer apps?
----
2020-02-02 03:43:55 UTC - Sijie Guo: Individual messages have a unique id (ledger id, entry id, batch index)
----
2020-02-02 03:44:46 UTC - Sijie Guo: But currently NACK is done at the entry level. It doesn't fully leverage the batch index yet. There is an outstanding improvement for that.
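A minimal Java sketch of the behavior discussed above, using the Pulsar client API (service URL, topic, and subscription names are hypothetical). The message id of a batched message carries (ledger id, entry id, batch index), but because the negative ack is tracked per entry, sibling messages from the same batch may be redelivered along with the nacked one:

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

public class NackExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")            // hypothetical topic
                .subscriptionName("my-sub")   // hypothetical subscription
                .negativeAckRedeliveryDelay(1, TimeUnit.SECONDS)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        // Nack is recorded against the entry the message belongs to, so if
        // msg came from a producer-side batch, the whole batch is redelivered.
        consumer.negativeAcknowledge(msg);

        consumer.close();
        client.close();
    }
}
```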
----
2020-02-02 03:47:10 UTC - Sijie Guo: You don't need to write any code most of the time when using connectors. Even if you develop your own connector, you just focus on the business logic: you don't need to set up consumers and producers, and load balancing and fault tolerance are handled by the connector framework.
----
2020-02-02 04:09:11 UTC - Joe Francis: Basically, in one case you are running/managing a producer application that receives some data and then publishes it to a Pulsar topic. In the other case, you provide an interface implementation that fetches the data, and Pulsar runs that application for you (reading from the source and running a producer to publish to Pulsar), leveraging Pulsar Functions.
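A minimal sketch of the interface implementation Joe describes, assuming the `pulsar-io-core` `Source` API: the framework calls `open()` once, then repeatedly calls `read()` and publishes each returned `Record` to the configured topic; the producer, load balancing, and fault tolerance are handled for you. The UDP wiring here is only outlined in comments, and all names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Source;
import org.apache.pulsar.io.core.SourceContext;

// Hypothetical UDP source: the connector framework runs this class and
// publishes every Record returned by read() to the configured topic.
public class UdpSource implements Source<byte[]> {
    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);

    @Override
    public void open(Map<String, Object> config, SourceContext context) {
        // Start a receiver thread here that reads UDP packets from the
        // dedicated line and enqueues their payloads into `queue`.
    }

    @Override
    public Record<byte[]> read() throws Exception {
        byte[] payload = queue.take(); // blocks until a packet is available
        return () -> payload;          // Record.getValue()
    }

    @Override
    public void close() {
        // Stop the receiver thread and release the socket.
    }
}
```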
----
2020-02-02 04:37:01 UTC - Phat Loc: @Phat Loc has joined the channel
----
2020-02-02 07:25:25 UTC - Alex Yaroslavsky: Has anyone seen this Bookkeeper error on Kubernetes before? The node has 32GB of memory and the config is:

  PULSAR_MEM: "\"-Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest -Xms28g -Xmx28g -XX:MaxDirectMemorySize=28g\""
  dbStorage_writeCacheMaxSizeMb: "2048" # Write cache size (direct memory)
  dbStorage_readAheadCacheMaxSizeMb: "2048" # Read cache size (direct memory)
  dbStorage_rocksDB_blockCacheSize: "4294967296"
  journalMaxSizeMB: "2048"

07:11:48.931 [main] INFO  org.apache.bookkeeper.bookie.Bookie - Using ledger storage: org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage
07:11:48.933 [main] INFO  org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage - Started Db Ledger Storage
07:11:48.933 [main] INFO  org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage -  - Number of directories: 1
07:11:48.933 [main] INFO  org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage -  - Write cache size: 2048 MB
07:11:48.933 [main] INFO  org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage -  - Read Cache: 2048 MB
07:11:48.934 [main] INFO  org.apache.bookkeeper.proto.BookieNettyServer - Shutting down BookieNettyServer
07:11:48.938 [main] ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Read and write cache sizes exceed the configured max direct memory size
----
2020-02-02 07:42:39 UTC - Alex Yaroslavsky: Don't know if it is relevant, but I also see these lines in the log:
07:39:46.614 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=2007MB
07:39:46.614 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2048MB
07:39:46.615 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=2048MB
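Those lines likely explain the error: os.memory.max=2048MB suggests this JVM fell back to its ~2 GB default heap, i.e. the PULSAR_MEM flags above were not applied to this process, and when -XX:MaxDirectMemorySize is unset it defaults to the max heap size. A hedged sketch of the startup check DbLedgerStorage appears to perform, using the sizes from the posted config (method name hypothetical):

```java
// Sketch of the bookie's startup sanity check: the combined write and read
// caches (both allocated from direct memory) must fit inside the JVM's
// max direct memory.
public class CacheCheck {
    static boolean cachesFit(long writeCacheMb, long readCacheMb, long maxDirectMemoryMb) {
        return writeCacheMb + readCacheMb <= maxDirectMemoryMb;
    }

    public static void main(String[] args) {
        // Posted config: 2048 MB write cache + 2048 MB read cache.
        // With the defaulted ~2048 MB direct memory limit, the check fails:
        System.out.println(cachesFit(2048, 2048, 2048));     // prints false
        // With the intended -XX:MaxDirectMemorySize=28g it would pass:
        System.out.println(cachesFit(2048, 2048, 28 * 1024)); // prints true
    }
}
```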
----