Posted to issues@openwhisk.apache.org by GitBox <gi...@apache.org> on 2017/11/29 03:44:42 UTC

[GitHub] lee212 opened a new issue #96: Unable to connect to zookeeper.openwhisk:2181

URL: https://github.com/apache/incubator-openwhisk-deploy-kube/issues/96
 
 
   Hi,
   
   It seems ZooKeeper is deployed and reachable by IP (the broker in the log below connects to 10.254.144.28:2181 and starts), but the Kafka pod keeps crash-looping because the health-topic creation step cannot resolve zookeeper.openwhisk.
   
   ```
   $ kubectl get pods,services --all-namespaces=true -o wide
   NAMESPACE   NAME                                   READY     STATUS             RESTARTS   AGE       IP           NODE
   default     po/kube-apiserver-127.0.0.1            1/1       Running            0          9h        127.0.0.1    127.0.0.1
   default     po/kube-controller-manager-127.0.0.1   1/1       Running            0          9h        127.0.0.1    127.0.0.1
   default     po/kube-scheduler-127.0.0.1            1/1       Running            0          9h        127.0.0.1    127.0.0.1
   openwhisk   po/apigateway-3311085443-mbc4c         1/1       Running            0          3h        172.17.0.4   127.0.0.1
   openwhisk   po/controller-0                        0/1       CrashLoopBackOff   79         3h        172.17.0.7   127.0.0.1
   openwhisk   po/controller-1                        0/1       CrashLoopBackOff   79         3h        172.17.0.8   127.0.0.1
   openwhisk   po/couchdb-710020075-gfqdf             1/1       Running            0          8h        172.17.0.2   127.0.0.1
   openwhisk   po/kafka-1473765933-v4p3c              0/1       CrashLoopBackOff   9          25m       172.17.0.6   127.0.0.1
   openwhisk   po/redis-1106165648-zgxcs              1/1       Running            0          3h        172.17.0.3   127.0.0.1
   openwhisk   po/zookeeper-1304892743-0zw8h          1/1       Running            0          29m       172.17.0.5   127.0.0.1
   
   NAMESPACE   NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
   default     svc/kubernetes   10.254.0.1      <none>        443/TCP                      9h        <none>
   openwhisk   svc/apigateway   10.254.90.227   <none>        8080/TCP,9000/TCP            3h        name=apigateway
   openwhisk   svc/controller   None            <none>        8080/TCP                     3h        name=controller
   openwhisk   svc/couchdb      10.254.249.61   <none>        5984/TCP                     8h        name=couchdb
   openwhisk   svc/kafka        10.254.67.71    <none>        9092/TCP                     28m       name=kafka
   openwhisk   svc/redis        10.254.10.167   <none>        6379/TCP                     3h        name=redis
   openwhisk   svc/zookeeper    10.254.144.28   <none>        2181/TCP,2888/TCP,3888/TCP   3h        name=zookeeper
   ```
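
   One detail worth flagging in the listing above: no kube-dns / CoreDNS pod shows up in any namespace, which would explain a service name failing to resolve inside pods. A quick check could look like the following (hypothetical helper; it assumes kubectl can reach the cluster, and nslookup may not exist in the kafka image, in which case getent hosts zookeeper.openwhisk is an alternative):

   ```shell
   # Hypothetical diagnostic helper -- assumes a working kubectl context.
   check_cluster_dns() {
     # 1. Is a DNS add-on deployed at all?
     kubectl get pods --all-namespaces | grep -Ei 'kube-dns|coredns'
     # 2. Can the kafka pod resolve the zookeeper service name?
     kubectl exec -n openwhisk kafka-1473765933-v4p3c -- \
       nslookup zookeeper.openwhisk
   }
   ```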
   
   The log of kafka:
   ```
   waiting for kafka to be available
   + '[' -n 10.254.144.28 ']'
   + ZOOKEEPER_IP=10.254.144.28
   + '[' -n 2181 ']'
   + ZOOKEEPER_PORT=2181
   ++ grep '\skafka-1473765933-v4p3c$' /etc/hosts
   ++ head -n 1
   ++ awk '{print $1}'
   + IP=172.17.0.6
   + '[' -z '' ']'
   + ZOOKEEPER_CONNECTION_STRING=10.254.144.28:2181
   + cat /kafka/config/server.properties.template
   + sed -e 's|{{KAFKA_ADVERTISED_HOST_NAME}}|kafka.openwhisk|g' -e 's|{{KAFKA_ADVERTISED_PORT}}|9092|g' -e 's|{{KAFKA_AUTO_CREATE_TOPICS_ENABLE}}|true|g' -e 's|{{KAFKA_BROKER_ID}}|0|g' -e 's|{{KAFKA_DEFAULT_REPLICATION_FACTOR}}|1|g' -e 's|{{KAFKA_DELETE_TOPIC_ENABLE}}|false|g' -e 's|{{KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS}}|300000|g' -e 's|{{KAFKA_INTER_BROKER_PROTOCOL_VERSION}}|0.10.2.1|g' -e 's|{{KAFKA_LOG_MESSAGE_FORMAT_VERSION}}|0.10.2.1|g' -e 's|{{KAFKA_LOG_RETENTION_HOURS}}|168|g' -e 's|{{KAFKA_NUM_PARTITIONS}}|1|g' -e 's|{{KAFKA_PORT}}|9092|g' -e 's|{{ZOOKEEPER_CHROOT}}||g' -e 's|{{ZOOKEEPER_CONNECTION_STRING}}|10.254.144.28:2181|g' -e 's|{{ZOOKEEPER_CONNECTION_TIMEOUT_MS}}|10000|g' -e 's|{{ZOOKEEPER_SESSION_TIMEOUT_MS}}|10000|g'
   + '[' -z ']'
   + KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=true
   + KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false'
   + KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false'
   + KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7203'
   + KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7203 -Djava.rmi.server.hostname=kafka.openwhisk '
   + export KAFKA_JMX_OPTS
   + echo 'Starting kafka'
   + exec /kafka/bin/kafka-server-start.sh /kafka/config/server.properties
   Starting kafka
   waiting for kafka to be available
   waiting for kafka to be available
   waiting for kafka to be available
   waiting for kafka to be available
   [2017-11-29 03:25:42,564] INFO KafkaConfig values:
           advertised.host.name = kafka.openwhisk
           advertised.listeners = null
           advertised.port = 9092
           authorizer.class.name =
           auto.create.topics.enable = true
           auto.leader.rebalance.enable = true
           background.threads = 10
           broker.id = 0
           broker.id.generation.enable = true
           broker.rack = null
           compression.type = producer
           connections.max.idle.ms = 600000
           controlled.shutdown.enable = true
           controlled.shutdown.max.retries = 3
           controlled.shutdown.retry.backoff.ms = 5000
           controller.socket.timeout.ms = 30000
           create.topic.policy.class.name = null
           default.replication.factor = 1
           delete.topic.enable = false
           fetch.purgatory.purge.interval.requests = 1000
           group.max.session.timeout.ms = 300000
           group.min.session.timeout.ms = 6000
           host.name =
           inter.broker.listener.name = null
           inter.broker.protocol.version = 0.10.2.1
           leader.imbalance.check.interval.seconds = 300
           leader.imbalance.per.broker.percentage = 10
           listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
           listeners = null
           log.cleaner.backoff.ms = 15000
           log.cleaner.dedupe.buffer.size = 134217728
           log.cleaner.delete.retention.ms = 86400000
           log.cleaner.enable = true
           log.cleaner.io.buffer.load.factor = 0.9
           log.cleaner.io.buffer.size = 524288
           log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
           log.cleaner.min.cleanable.ratio = 0.5
           log.cleaner.min.compaction.lag.ms = 0
           log.cleaner.threads = 1
           log.cleanup.policy = [delete]
           log.dir = /data
           log.dirs = /data
           log.flush.interval.messages = 9223372036854775807
           log.flush.interval.ms = null
           log.flush.offset.checkpoint.interval.ms = 60000
           log.flush.scheduler.interval.ms = 9223372036854775807
           log.index.interval.bytes = 4096
           log.index.size.max.bytes = 10485760
           log.message.format.version = 0.10.2.1
           log.message.timestamp.difference.max.ms = 9223372036854775807
           log.message.timestamp.type = CreateTime
           log.preallocate = false
           log.retention.bytes = -1
           log.retention.check.interval.ms = 300000
           log.retention.hours = 168
           log.retention.minutes = null
           log.retention.ms = null
           log.roll.hours = 168
           log.roll.jitter.hours = 0
           log.roll.jitter.ms = null
           log.roll.ms = null
           log.segment.bytes = 1073741824
           log.segment.delete.delay.ms = 60000
           max.connections.per.ip = 2147483647
           max.connections.per.ip.overrides =
           message.max.bytes = 1000012
           metric.reporters = []
           metrics.num.samples = 2
           metrics.recording.level = INFO
           metrics.sample.window.ms = 30000
           min.insync.replicas = 1
           num.io.threads = 8
           num.network.threads = 3
           num.partitions = 1
           num.recovery.threads.per.data.dir = 1
           num.replica.fetchers = 1
           offset.metadata.max.bytes = 4096
           offsets.commit.required.acks = -1
           offsets.commit.timeout.ms = 5000
           offsets.load.buffer.size = 5242880
           offsets.retention.check.interval.ms = 600000
           offsets.retention.minutes = 1440
           offsets.topic.compression.codec = 0
           offsets.topic.num.partitions = 50
           offsets.topic.replication.factor = 3
           offsets.topic.segment.bytes = 104857600
           port = 9092
           principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
           producer.purgatory.purge.interval.requests = 1000
           queued.max.requests = 500
           quota.consumer.default = 9223372036854775807
           quota.producer.default = 9223372036854775807
           quota.window.num = 11
           quota.window.size.seconds = 1
           replica.fetch.backoff.ms = 1000
           replica.fetch.max.bytes = 1048576
           replica.fetch.min.bytes = 1
           replica.fetch.response.max.bytes = 10485760
           replica.fetch.wait.max.ms = 500
           replica.high.watermark.checkpoint.interval.ms = 5000
           replica.lag.time.max.ms = 10000
           replica.socket.receive.buffer.bytes = 65536
           replica.socket.timeout.ms = 30000
           replication.quota.window.num = 11
           replication.quota.window.size.seconds = 1
           request.timeout.ms = 30000
           reserved.broker.max.id = 1000
           sasl.enabled.mechanisms = [GSSAPI]
           sasl.kerberos.kinit.cmd = /usr/bin/kinit
           sasl.kerberos.min.time.before.relogin = 60000
           sasl.kerberos.principal.to.local.rules = [DEFAULT]
           sasl.kerberos.service.name = null
           sasl.kerberos.ticket.renew.jitter = 0.05
           sasl.kerberos.ticket.renew.window.factor = 0.8
           sasl.mechanism.inter.broker.protocol = GSSAPI
           security.inter.broker.protocol = PLAINTEXT
           socket.receive.buffer.bytes = 102400
           socket.request.max.bytes = 104857600
           socket.send.buffer.bytes = 102400
           ssl.cipher.suites = null
           ssl.client.auth = none
           ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
           ssl.endpoint.identification.algorithm = null
           ssl.key.password = null
           ssl.keymanager.algorithm = SunX509
           ssl.keystore.location = null
           ssl.keystore.password = null
           ssl.keystore.type = JKS
           ssl.protocol = TLS
           ssl.provider = null
           ssl.secure.random.implementation = null
           ssl.trustmanager.algorithm = PKIX
           ssl.truststore.location = null
           ssl.truststore.password = null
           ssl.truststore.type = JKS
           unclean.leader.election.enable = true
           zookeeper.connect = 10.254.144.28:2181
           zookeeper.connection.timeout.ms = 10000
           zookeeper.session.timeout.ms = 10000
           zookeeper.set.acl = false
           zookeeper.sync.time.ms = 2000
    (kafka.server.KafkaConfig)
   [2017-11-29 03:25:42,621] INFO starting (kafka.server.KafkaServer)
   [2017-11-29 03:25:42,623] INFO Connecting to zookeeper on 10.254.144.28:2181 (kafka.server.KafkaServer)
   [2017-11-29 03:25:42,635] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
   [2017-11-29 03:25:42,641] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,641] INFO Client environment:host.name=kafka-1473765933-v4p3c (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,641] INFO Client environment:java.version=1.8.0_60 (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,641] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,641] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,641] INFO Client environment:java.class.path=:/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/kafka/bin/../libs/argparse4j-0.7.0.jar:/kafka/bin/../libs/connect-api-0.10.2.1.jar:/kafka/bin/../libs/connect-file-0.10.2.1.jar:/kafka/bin/../libs/connect-json-0.10.2.1.jar:/kafka/bin/../libs/connect-runtime-0.10.2.1.jar:/kafka/bin/../libs/connect-transforms-0.10.2.1.jar:/kafka/bin/../libs/guava-18.0.jar:/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/kafka/bin/../libs/jackson-core-2.8.5.jar:/kafka/bin/../libs/jackson-databind-2.8.5.jar:/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/kafka/bin/../libs/javassist-3.20.0-GA.jar:/kafka/bin/../libs/javax.annotation-api-1.2.jar:/kafka/bin/../libs/javax.inject-1.jar:/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/kafka/bin/../libs/jersey-client-2.24.jar:/kafka/bin/../libs/jersey-common-2.24.jar:/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/kafka/bin/../libs/jersey-guava-2.24.jar:/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/kafka/bin/../libs/jersey-server-2.24.jar:/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/kafka/bin/../libs/jopt-simple-5.0.3.jar:/kafka/bin/../libs/kafka-clients-0.10.2.1.jar:/kafka/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/kafka/bin/../libs/kafka-streams-0.10.2.1.jar:/kafka/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/kafka/bin/../libs/kafka-tools-0.10.2.1.jar:/kafka/bin/../libs/kafka_2.12-0.10.2.1-sources.jar:/kafka/bin/../libs/kafka_2.12-0.10.2.1-test-sources.jar:/kafka/bin/../libs/kafka_2.12-0.10.2.1.jar:/kafka/bin/../libs/log4j-1.2.17.jar:/kafka/bin/../libs/lz4-1.3.0.jar:/kafka/bin/../libs/metrics-core-2.2.0.jar:/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/kafka/bin/../libs/reflections-0.9.10.jar:/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/kafka/bin/../libs/scala-library-2.12.1.jar:/kafka/bin/../libs/scala-parser-combinators_2.12-1.0.4.jar:/kafka/bin/../libs/slf4j-api-1.7.21.jar:/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/kafka/bin/../libs/zkclient-0.10.jar:/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,642] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,642] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
   waiting for kafka to be available
   [2017-11-29 03:25:42,648] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:os.version=3.10.0-693.2.2.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:user.home=/kafka (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,648] INFO Client environment:user.dir=/kafka (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,649] INFO Initiating client connection, connectString=10.254.144.28:2181 sessionTimeout=10000 watcher=org.I0Itec.zkclient.ZkClient@1fe20588 (org.apache.zookeeper.ZooKeeper)
   [2017-11-29 03:25:42,662] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
   [2017-11-29 03:25:42,664] INFO Opening socket connection to server 10.254.144.28/10.254.144.28:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
   [2017-11-29 03:25:42,673] INFO Socket connection established to 10.254.144.28/10.254.144.28:2181, initiating session (org.apache.zookeeper.ClientCnxn)
   [2017-11-29 03:25:42,686] INFO Session establishment complete on server 10.254.144.28/10.254.144.28:2181, sessionid = 0x10121a8f0f0000c, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn)
   [2017-11-29 03:25:42,687] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
   [2017-11-29 03:25:42,804] INFO Cluster ID = v198P3b6SfiQqA-bxeU0KA (kafka.server.KafkaServer)
   [2017-11-29 03:25:42,805] WARN No meta.properties file under dir /data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
   waiting for kafka to be available
   [2017-11-29 03:25:42,884] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
   [2017-11-29 03:25:42,885] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
   [2017-11-29 03:25:42,919] INFO Loading logs. (kafka.log.LogManager)
   [2017-11-29 03:25:42,925] INFO Logs loading complete in 6 ms. (kafka.log.LogManager)
   [2017-11-29 03:25:43,008] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
   [2017-11-29 03:25:43,012] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
   waiting for kafka to be available
   [2017-11-29 03:25:43,049] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
   kafka is up and running
   Create health topic
   + echo 'Create health topic'
   ++ kafka-topics.sh --create --topic health --replication-factor 1 --partitions 1 --zookeeper zookeeper.openwhisk:2181 --config retention.bytes=536870912 --config retention.ms=1073741824 --config segment.bytes=3600000
   [2017-11-29 03:25:43,051] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
   [2017-11-29 03:25:43,117] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
   [2017-11-29 03:25:43,119] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
   [2017-11-29 03:25:43,156] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
   [2017-11-29 03:25:43,170] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
   [2017-11-29 03:25:43,170] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
   [2017-11-29 03:25:43,360] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
   [2017-11-29 03:25:43,363] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
   [2017-11-29 03:25:43,364] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
   [2017-11-29 03:25:43,366] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
   [2017-11-29 03:25:43,375] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.GroupCoordinator)
   [2017-11-29 03:25:43,376] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.GroupCoordinator)
   [2017-11-29 03:25:43,378] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.GroupMetadataManager)
   [2017-11-29 03:25:43,436] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
   [2017-11-29 03:25:43,455] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
   [2017-11-29 03:25:43,474] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
   [2017-11-29 03:25:43,475] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka.openwhisk,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
   [2017-11-29 03:25:43,476] WARN No meta.properties file under dir /data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
   [2017-11-29 03:25:43,506] INFO Kafka version : 0.10.2.1 (org.apache.kafka.common.utils.AppInfoParser)
   [2017-11-29 03:25:43,506] INFO Kafka commitId : e89bffd6b2eff799 (org.apache.kafka.common.utils.AppInfoParser)
   [2017-11-29 03:25:43,506] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
   Exception in thread "main" org.I0Itec.zkclient.exception.ZkException: Unable to connect to zookeeper.openwhisk:2181
           at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
           at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
           at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
           at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
           at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
           at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
           at kafka.admin.TopicCommand$.main(TopicCommand.scala:56)
           at kafka.admin.TopicCommand.main(TopicCommand.scala)
   Caused by: java.net.UnknownHostException: zookeeper.openwhisk: unknown error
           at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
           at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
           at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
           at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
           at java.net.InetAddress.getAllByName(InetAddress.java:1192)
           at java.net.InetAddress.getAllByName(InetAddress.java:1126)
           at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
           at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
           at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
           at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
           ... 7 more
   + OUTPUT=
   + [[ '' == *\a\l\r\e\a\d\y\ \e\x\i\s\t\s* ]]
   + [[ '' == *\C\r\e\a\t\e\d\ \t\o\p\i\c* ]]
   Failed to create heath topic
   + echo 'Failed to create heath topic'
   + exit 1
   ```
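
   Tracing the entrypoint output above: the broker itself never uses the DNS name. It builds its connection string from the service environment variables that Kubernetes injects, which is why the broker starts cleanly while the later kafka-topics.sh call (which is passed zookeeper.openwhisk:2181) dies with UnknownHostException. A minimal sketch of that fallback, reconstructed from the trace (values copied from the log; the exact script in the image may differ):

   ```shell
   # Values below mirror the ones visible in the trace; in the pod they come
   # from the Kubernetes-injected ZOOKEEPER_SERVICE_HOST / _PORT variables.
   ZOOKEEPER_SERVICE_HOST=10.254.144.28
   ZOOKEEPER_SERVICE_PORT=2181

   [ -n "$ZOOKEEPER_SERVICE_HOST" ] && ZOOKEEPER_IP=$ZOOKEEPER_SERVICE_HOST
   [ -n "$ZOOKEEPER_SERVICE_PORT" ] && ZOOKEEPER_PORT=$ZOOKEEPER_SERVICE_PORT

   # The broker falls back to plain IP:PORT when no explicit connection
   # string is set, so it needs no DNS at all...
   if [ -z "$ZOOKEEPER_CONNECTION_STRING" ]; then
     ZOOKEEPER_CONNECTION_STRING="${ZOOKEEPER_IP}:${ZOOKEEPER_PORT}"
   fi
   echo "broker uses:       $ZOOKEEPER_CONNECTION_STRING"
   # ...while the health-topic step is hard-wired to the service DNS name:
   echo "topic script uses: zookeeper.openwhisk:2181"
   ```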
   
   I was following the documentation and did not make any changes to the deployment YAML files.
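
   In case it helps triage: the stack trace is a pure name-resolution failure (UnknownHostException), not a connectivity one, since the broker reaches the very same service by ClusterIP. ZooKeeper itself can be confirmed healthy without DNS along these lines (hypothetical helper; pod name and IP are copied from the listing above, and nc may not be present in the image):

   ```shell
   # Hypothetical check -- assumes kubectl access and that nc exists in the image.
   zk_ping_by_ip() {
     # ZooKeeper answers the four-letter command "ruok" with "imok" when healthy.
     kubectl exec -n openwhisk kafka-1473765933-v4p3c -- \
       sh -c 'echo ruok | nc 10.254.144.28 2181'
   }
   ```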
   
   Thank you,
   Hyungro

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services