Posted to dev@plc4x.apache.org by GitBox <gi...@apache.org> on 2020/11/22 18:42:24 UTC

[GitHub] [plc4x] chrisdutz commented on a change in pull request #202: Feature/kafkasink - Add a kafka sink

chrisdutz commented on a change in pull request #202:
URL: https://github.com/apache/plc4x/pull/202#discussion_r528386416



##########
File path: plc4j/integrations/apache-kafka/README.md
##########
@@ -60,7 +66,7 @@ In order to start a Kafka Connect system the following steps have to be performe
         
         bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
 
-### Start a Kafka Connect Worker (Standalone)
+### Start a Kafka Source Connect Worker (Standalone)

Review comment:
       The product is called "Kafka Connect" ... as this is a source for Kafka Connect, I'd probably name it "Kafka Connect Source Worker"

##########
File path: plc4j/drivers/simulated/src/main/java/org/apache/plc4x/java/simulated/connection/SimulatedDevice.java
##########
@@ -97,12 +97,12 @@ public void set(SimulatedField field, PlcValue value) {
                         break;
                     default:
                         try {
-                            DataItemIO.staticSerialize(value, field.getPlcDataType(), 1, false);
+                            DataItemIO.staticSerialize(value, field.getPlcDataType(), field.getNumberOfElements(), false);
                         } catch (ParseException e) {
                             System.out.printf("Write failed");
                         }
                 }
-                System.out.printf("TEST PLC RANDOM [%s]: %s%n", field.getName(), value.getString());
+                System.out.printf("TEST PLC RANDOM [%s]: %s%n", field.getName(), value.toString());

Review comment:
       We probably shouldn't be using System.out.printf stuff here
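
A rough sketch of what this hunk could look like with an SLF4J logger instead of System.out (the logger name, log levels and messages are illustrative only, and the surrounding class is trimmed to the lines touched here; this is not the change actually proposed in the PR):

    // Imports from the existing class (SimulatedField, PlcValue, DataItemIO, ParseException) omitted.
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SimulatedDevice {

        private static final Logger logger = LoggerFactory.getLogger(SimulatedDevice.class);

        public void set(SimulatedField field, PlcValue value) {
            // ... existing switch over the field type ...
            try {
                DataItemIO.staticSerialize(value, field.getPlcDataType(), field.getNumberOfElements(), false);
            } catch (ParseException e) {
                // Keep the exception instead of swallowing it on stdout
                logger.warn("Write to simulated field [{}] failed", field.getName(), e);
            }
            // Parameterized logging instead of System.out.printf
            logger.debug("TEST PLC RANDOM [{}]: {}", field.getName(), value);
        }
    }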

##########
File path: plc4j/integrations/apache-kafka/README.md
##########
@@ -76,7 +82,7 @@ If you want to debug the connector, be sure to set some environment variables be
 
 In this case the startup will suspend till an IDE is connected via a remote-debugging session.
 
-### Start Kafka Connect Worker (Distributed Mode)
+### Start Kafka Source Connect Worker (Distributed Mode)

Review comment:
       Same as above

##########
File path: plc4j/integrations/apache-kafka/README.md
##########
@@ -95,3 +101,52 @@ The configuration of the Connectors is then provided via REST interface:
     curl -X POST -H "Content-Type: application/json" --data '{"name": "plc-source-test", "config": {"connector.class":"org.apache.plc4x.kafka.Plc4xSourceConnector", 
     // TODO: Continue here ...
     "tasks.max":"1", "file":"test.sink.txt", "topics":"connect-test" }}' http://localhost:8083/connectors
+
+
+### Start a Kafka Sink Connect Worker (Standalone)

Review comment:
       Here, the section should be named "Kafka Connect Sink Worker"

##########
File path: plc4j/integrations/apache-kafka/pom.xml
##########
@@ -33,8 +33,17 @@
 
   <properties>
     <kafka.version>2.5.0</kafka.version>
+    <kafka.connect.maven.plugin.version>0.11.3</kafka.connect.maven.plugin.version>
   </properties>
 
+  <repositories>
+    <repository>

Review comment:
       As this repository is used to download the plugin, we also need to specify a "pluginRepository" (just a duplicate with a different element name) ... otherwise the build will fail for people who haven't downloaded the plugin before.
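
To make that concrete, a pluginRepositories block mirroring the repository entry could look roughly like the following; the id and URL are assumptions based on where the kafka-connect-maven-plugin is normally hosted, not copied from this PR:

    <!-- Sketch only: duplicate the repository as a pluginRepository so Maven can
         resolve the kafka-connect-maven-plugin on a machine that has never
         downloaded it before. The id/url are assumed (Confluent's public repo). -->
    <pluginRepositories>
      <pluginRepository>
        <id>confluent</id>
        <url>https://packages.confluent.io/maven/</url>
      </pluginRepository>
    </pluginRepositories>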




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org