Posted to dev@kafka.apache.org by "Manikumar (JIRA)" <ji...@apache.org> on 2018/06/15 18:05:00 UTC

[jira] [Resolved] (KAFKA-2584) SecurityProtocol enum validation should be removed or relaxed for non-config usages

     [ https://issues.apache.org/jira/browse/KAFKA-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikumar resolved KAFKA-2584.
------------------------------
    Resolution: Auto Closed

Closing this as the Scala clients are deprecated and will be removed in the 2.0.0 release. Please reopen if the issue still exists.

> SecurityProtocol enum validation should be removed or relaxed for non-config usages
> -----------------------------------------------------------------------------------
>
>                 Key: KAFKA-2584
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2584
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.0
>            Reporter: Joel Koshy
>            Priority: Major
>
> While deploying SSL to our clusters, we had to roll back due to another compatibility issue similar to what we have mentioned in passing in other threads/KIP hangouts: picking up jars between official releases. Fortunately, there is an easy server-side hot-fix we can apply internally to work around it. However, I would classify the issue below as a bug, since there is little point in doing endpoint-type validation (except for config validation).
> What happened here is that some (old) consumers (that do not care about SSL) picked up a Kafka jar that understood multiple endpoints but did not have the SSL feature. The rebalance fails because while creating the Broker objects we are forced to validate all the endpoints.
> Yes, the old consumer is going away, but this would affect tools as well. The same issue could also happen on the brokers if we were to upgrade them to include (say) a Kerberos endpoint: the old brokers would not be able to read the registration of the newly upgraded brokers. You could get around that with two rounds of deployment (one to pick up the new code, and another to expose the Kerberos endpoint), but that is inconvenient and, I think, unnecessary. Although validation makes sense for configs, the current validate-everywhere approach is overkill (i.e., an old consumer, tool, or broker should not complain just because another broker can speak more protocols).
> {noformat}
> kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"jmx_port":-1,"timestamp":"1442952770627","endpoints":["PLAINTEXT://<host>:<plaintextport>","SSL://<host>:<sslport>"],"host":"<host>","version":2,"port":<port>}
>         at kafka.cluster.Broker$.createBroker(Broker.scala:61)
>         at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:520)
>         at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:518)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at kafka.utils.ZkUtils$.getCluster(ZkUtils.scala:518)
>         at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener
> ...
> Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.kafka.common.protocol.SecurityProtocol.SSL
>         at java.lang.Enum.valueOf(Enum.java:238)
>         at org.apache.kafka.common.protocol.SecurityProtocol.valueOf(SecurityProtocol.java:24)
>         at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:48)
>         at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:74)
>         at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:73)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at kafka.cluster.Broker$.createBroker(Broker.scala:73)
>         ... 70 more
> {noformat}
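The failure mode in the trace above can be reproduced in miniature: Java's `Enum.valueOf` throws `IllegalArgumentException` for any constant the running jar does not define, so strictly parsing every advertised endpoint fails the whole broker registration, while a relaxed parse can simply skip protocols the client does not understand. A minimal sketch of the two approaches (`OldSecurityProtocol`, `parseStrict`, and `parseRelaxed` are hypothetical names for illustration, not Kafka APIs):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class EndpointParseDemo {
    // Hypothetical stand-in for an older SecurityProtocol enum that
    // predates SSL support: only PLAINTEXT is a known constant.
    enum OldSecurityProtocol { PLAINTEXT }

    // Strict parsing, as in the report: Enum.valueOf throws
    // IllegalArgumentException for any unknown constant name.
    static OldSecurityProtocol parseStrict(String name) {
        return OldSecurityProtocol.valueOf(name);
    }

    // Relaxed parsing: an unknown protocol yields an empty Optional
    // instead of failing the whole endpoint list.
    static Optional<OldSecurityProtocol> parseRelaxed(String name) {
        try {
            return Optional.of(OldSecurityProtocol.valueOf(name));
        } catch (IllegalArgumentException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // Protocols advertised by a newer broker in ZooKeeper.
        List<String> advertised = Arrays.asList("PLAINTEXT", "SSL");

        // Relaxed: keep only the endpoints this client understands.
        List<OldSecurityProtocol> usable = advertised.stream()
                .map(EndpointParseDemo::parseRelaxed)
                .filter(Optional::isPresent)
                .map(Optional::get)
                .collect(Collectors.toList());
        System.out.println(usable);

        // Strict: parsing the SSL endpoint fails outright, as in
        // the stack trace above.
        try {
            parseStrict("SSL");
        } catch (IllegalArgumentException e) {
            System.out.println("strict parse failed: " + e.getMessage());
        }
    }
}
```

Under the relaxed scheme, a client that only knows PLAINTEXT would ignore the SSL endpoint rather than aborting the rebalance.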



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)