Posted to commits@streampipes.apache.org by ri...@apache.org on 2022/03/06 22:20:40 UTC

[incubator-streampipes-website] 02/03: Update 'Extend' section

This is an automated email from the ASF dual-hosted git repository.

riemer pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-streampipes-website.git

commit 38172b4b616eeab59d7c199a2b68455cae2038b2
Author: Dominik Riemer <do...@gmail.com>
AuthorDate: Sun Mar 6 22:52:37 2022 +0100

    Update 'Extend' section
---
 documentation/docs/06_extend-archetypes.md         |  78 +--------
 documentation/docs/06_extend-cli.md                |   4 +
 documentation/docs/06_extend-first-processor.md    |  57 +++++++
 documentation/docs/06_extend-setup.md              |   2 +-
 .../docs/06_extend-tutorial-data-sources.md        | 188 +++++++--------------
 documentation/website/i18n/en.json                 |   4 +
 documentation/website/sidebars.json                |   1 +
 .../static/img/archetype/project_structure.png     | Bin 80662 -> 80662 bytes
 8 files changed, 132 insertions(+), 202 deletions(-)

diff --git a/documentation/docs/06_extend-archetypes.md b/documentation/docs/06_extend-archetypes.md
index f235718..6a9aec7 100644
--- a/documentation/docs/06_extend-archetypes.md
+++ b/documentation/docs/06_extend-archetypes.md
@@ -9,38 +9,27 @@ We use IntelliJ in this tutorial, but it works with any IDE of your choice.
 
 ## Prerequisites
 You need to have Maven installed; in addition, you need an up and running StreamPipes installation on your development computer.
-To ease the configuration of environment variables, we use the IntelliJ [env Plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
-Install this in IntelliJ. The development also works without the plugin, then you have to set the environment variables manually instead of using the env configuration file.
 
 ## Create Project
 To create a new project, we provide multiple Maven archetypes.
-Currently, we have archetypes for the JVM and Flink wrappers, each for processors and sinks.
+Currently, we provide archetypes for standalone Java-based microservices and archetypes for the experimental Flink wrapper.
 The commands required to create a new pipeline element project can be found below. Make sure that you select a version compatible with your StreamPipes installation.
 Copy the command into your terminal to create a new project.
 The project will be created in the current folder.
 First, the ``groupId`` of the resulting Maven artifact must be set.
 We use ``groupId``: ``org.example`` and ``artifactId``: ``ExampleProcessor``.
 You can keep the default values for the other settings; confirm them by hitting enter.
-Now, a new folder with the name ``ExampleProcessor`` is generated.
 
 The current {sp.version} is 0.69.0 (for a pre-release version, use the SNAPSHOT suffix, e.g. 0.69.0-SNAPSHOT).
 
 ```bash
 mvn archetype:generate                              	 	     \
   -DarchetypeGroupId=org.apache.streampipes          			         \
-  -DarchetypeArtifactId=streampipes-archetype-pe-processors-jvm  \
+  -DarchetypeArtifactId=streampipes-archetype-extensions-jvm  \
   -DarchetypeVersion={sp.version}
 ```
 <details class="info">
-    <summary>Select: [Processors / Sinks] [JVM / Flink]</summary>
-
-## Processors JVM
-```bash
-mvn archetype:generate                              	 	     \
-  -DarchetypeGroupId=org.apache.streampipes          			         \
-  -DarchetypeArtifactId=streampipes-archetype-pe-processors-jvm  \
-  -DarchetypeVersion={sp.version}
-```
+    <summary>Other archetypes</summary>
 
 ## Processors Flink
 ```bash
@@ -50,14 +39,6 @@ mvn archetype:generate                              	 	     \
   -DarchetypeVersion={sp.version}
 ```
 
-## Sinks JVM
-```bash
-mvn archetype:generate                              	 	     \
-  -DarchetypeGroupId=org.apache.streampipes          			         \
-  -DarchetypeArtifactId=streampipes-archetype-pe-sinks-jvm  \
-  -DarchetypeVersion={sp.version}
-```
-
 ## Sinks Flink
 ```bash
 mvn archetype:generate                              	 	     \
@@ -68,61 +49,16 @@ mvn archetype:generate                              	 	     \
 </details>
 
 
-## Edit Processor
+## Project structure
 Open the project in your IDE.
 If everything worked, the structure should look similar to the following image.
-The *config* package contains all the configuration parameters of your processors / sinks.
-In the *main* package, it is defined which processors / sinks you want to activate and the *pe.processor.example* package contains three classes with the application logic.
+The *main* package defines which processors / sinks you want to activate, and the *pe.example* package contains two skeletons for creating a data processor and a data sink.
 For details, have a look at the other parts of the Developer Guide, where these classes are explained in more depth.
 
 <img src="/docs/img/archetype/project_structure.png" width="30%" alt="Project Structure">
 
-Open the class *Example* and edit the ``onEvent`` method to print the incoming event, log it to the console and send it to the next component without changing it.
-
-```java
-@Override
-public void onEvent(Event event, SpOutputCollector collector) {
-    // Print the incoming event on the console
-    System.out.println(event);
-
-    // Hand the incoming event to the output collector without changing it.
-    collector.collect(event);
-}
-```
-
-## Start Processor
-Starting from StreamPipes 0.69.0, the IP address of an extensions service (processor, adapter or sink) will be auto-discovered upon start.
-The auto-discovery is done by the StreamPipes service discovery mechanism and should work for most setups.
-Once you start an extensions service, you will see the chosen IP in printed in the console. Make sure that this IP does not point to localhost (127.0.0.1).
-If you see such an IP or the extensions service complains that it cannot resolve the IP, you can manually set the IP address of the extensions service. You can do so by providing an <code>SP_HOST</code> environment variable.
-
-
-To check if the service is up and running, open the browser on *'localhost:6666'* (or the port defined in the service definition). The machine-readable description of the processor should be visible as shown below.
-
-<img src="/docs/img/archetype/endpoint.png" width="90%" alt="Project Structure">
-
-
-<div class="admonition error">
-<div class="admonition-title">Common Problems</div>
-<p>
-If the service description is not shown on 'localhost:6666', you might have to change the port address.
-This needs to be done in the configuration of your service, further explained in the configurations part of the developer guide.
-
-If the service does not show up in the StreamPipes installation menu, click on 'MANAGE ENDPOINTS' and add 'http://<span></span>YOUR_IP_OR_DNS_NAME:6666'.
-Use the IP or DNS name you provided in the env file.
-After adding the endpoint, a new processor with the name *Example* should show up.
-</p>
-</div>
-
-Now you can go to StreamPipes.
-Your new processor *'Example'* should now show up in the installation menu.
-Install it, then switch to the pipeline view and create a simple pipeline that makes use of your newly created processor.
-In case you opened the StreamPipes installation for the first time, it should have been automatically installed during the setup process.
+## Next steps
 
-<img src="/docs/img/archetype/example_pipeline.png" width="80%" alt="Project Structure">
+Read the [next section](06_extend-first-processor.md) to learn how to create your first data processor.
 
-Start this pipeline.
-Now you should see logging messages in your console and, once you've created a visualization, you can also see the resulting events of your component in StreamPipes.
 
-Congratulations, you have just created your first processor!
-From here on you can start experimenting and implement your own algorithms.
diff --git a/documentation/docs/06_extend-cli.md b/documentation/docs/06_extend-cli.md
index 546df8d..20ccfbe 100644
--- a/documentation/docs/06_extend-cli.md
+++ b/documentation/docs/06_extend-cli.md
@@ -9,6 +9,10 @@ The StreamPipes command-line interface (CLI) is focused on developers in order t
 * new extensions such as **connect adapters, processors, sinks** or,
 * new core features for **backend** and **ui**.
 
+The main difference from the standard Docker/K8s installation is improved communication between services running as containers and services running locally during development.
+
+The CLI can be found in the [main repository](https://github.com/apache/incubator-streampipes/tree/master/installer/cli) or in the ``installer/cli`` folder of the downloaded source code.
+
 ## TL;DR
 
 ```bash
diff --git a/documentation/docs/06_extend-first-processor.md b/documentation/docs/06_extend-first-processor.md
new file mode 100644
index 0000000..9c4fef9
--- /dev/null
+++ b/documentation/docs/06_extend-first-processor.md
@@ -0,0 +1,57 @@
+---
+id: extend-first-processor
+title: Your first data processor
+sidebar_label: Your first data processor
+---
+
+In this section, we will explain how to start a pipeline element service and install it using the StreamPipes UI.
+
+Open the class *ExampleDataProcessor* and edit the ``onEvent`` method so that it logs each incoming event to the console and forwards it to the next component without changing it.
+
+```java
+@Override
+public void onEvent(Event event, SpOutputCollector collector) {
+    // Print the incoming event on the console
+    System.out.println(event);
+
+    // Hand the incoming event to the output collector without changing it.
+    collector.collect(event);
+}
+```
+
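+If you want to go one step further, you can also modify the event before forwarding it. Here is a minimal sketch, assuming the ``Event`` runtime API with its ``addField`` method (the field name is chosen for illustration):
+
+```java
+@Override
+public void onEvent(Event event, SpOutputCollector collector) {
+    // Enrich the event with a processing timestamp (illustrative field name)
+    event.addField("processingTimestamp", System.currentTimeMillis());
+
+    // Forward the enriched event to the next pipeline element
+    collector.collect(event);
+}
+```
+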
+## Start Processor
+Starting from StreamPipes 0.69.0, the IP address of an extensions service (processor, adapter or sink) will be auto-discovered upon start.
+The auto-discovery is done by the StreamPipes service discovery mechanism and should work for most setups.
+Once you start an extensions service, you will see the chosen IP printed in the console. Make sure that this IP does not point to localhost (127.0.0.1).
+If you see such an IP or the extensions service complains that it cannot resolve the IP, you can manually set the IP address of the extensions service. You can do so by providing an <code>SP_HOST</code> environment variable.
+
+
+To check if the service is up and running, open the browser on *'localhost:8090'* (or the port defined in the service definition). The machine-readable description of the processor should be visible as shown below.
+
+<img src="/docs/img/archetype/endpoint.png" width="90%" alt="Project Structure">
+
+
+<div class="admonition error">
+<div class="admonition-title">Common Problems</div>
+<p>
+If the service description is not shown on 'localhost:8090', you might have to change the port.
+This needs to be done in the configuration of your service, further explained in the configurations part of the developer guide.
+
+If the service does not show up in the StreamPipes installation menu, click on 'MANAGE ENDPOINTS' and add 'http://<span></span>YOUR_IP_OR_DNS_NAME:8090'.
+Use the IP or DNS name you provided as the SP_HOST variable or the IP (if resolvable) found by the auto-discovery service printed in the console.
+After adding the endpoint, a new processor with the name *Example* should show up.
+</p>
+</div>
+
+Now you can go to StreamPipes.
+Your new processor *'Example'* should now show up in the installation menu ("Install Pipeline Elements" in the left navigation bar).
+Install it, then switch to the pipeline view and create a simple pipeline that makes use of your newly created processor.
+If you started this StreamPipes installation for the first time, the processor should have been installed automatically during the setup process.
+
+<img src="/docs/img/archetype/example_pipeline.png" width="80%" alt="Project Structure">
+
+Start this pipeline.
+Now you should see logging messages in your console and, once you've created a visualization, you can also see the resulting events of your component in StreamPipes.
+
+Congratulations, you have just created your first processor!
+From here on you can start experimenting and implement your own algorithms.
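+
+For example, a simple filter that only forwards recent events could look like the following sketch (the field selector ``s0::timestamp`` and the accessor methods follow ``Event`` API conventions and may need to be adapted to your stream):
+
+```java
+@Override
+public void onEvent(Event event, SpOutputCollector collector) {
+    long timestamp = event.getFieldBySelector("s0::timestamp")
+            .getAsPrimitive()
+            .getAsLong();
+
+    // Only forward events that are at most one minute old
+    if (System.currentTimeMillis() - timestamp < 60_000) {
+        collector.collect(event);
+    }
+}
+```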
diff --git a/documentation/docs/06_extend-setup.md b/documentation/docs/06_extend-setup.md
index 8630ad8..e2c471a 100644
--- a/documentation/docs/06_extend-setup.md
+++ b/documentation/docs/06_extend-setup.md
@@ -24,7 +24,7 @@ Instead of starting from scratch, we recommend using our provided maven archetyp
 
 ### Maven archetypes
 
-Create the Maven archetype as described in the [Getting Started](06_extend-archetypes.md) guide.
+Create the Maven archetype as described in the [Maven Archetypes](06_extend-archetypes.md) guide.
 
 ### Examples
 
diff --git a/documentation/docs/06_extend-tutorial-data-sources.md b/documentation/docs/06_extend-tutorial-data-sources.md
index 13a6cdf..1c01df9 100644
--- a/documentation/docs/06_extend-tutorial-data-sources.md
+++ b/documentation/docs/06_extend-tutorial-data-sources.md
@@ -26,25 +26,17 @@ In the following section, we show how to describe this stream in a form that all
 
 ## Project setup
 
-Instead of creating a new project from scratch, we recommend to use the Maven archetype to create a new project skeleton.
+Instead of creating a new project from scratch, we recommend using the Maven archetype ``streampipes-archetype-extensions-jvm`` to create a new project skeleton.
 Enter the following command in a command line of your choice (Apache Maven needs to be installed):
 
 ```
 mvn archetype:generate \
--DarchetypeGroupId=org.apache.streampipes -DarchetypeArtifactId=streampipes-archetype-pe-sources \
--DarchetypeVersion=0.68.0 -DgroupId=my.groupId \
+-DarchetypeGroupId=org.apache.streampipes -DarchetypeArtifactId=streampipes-archetype-extensions-jvm \
+-DarchetypeVersion=0.69.0 -DgroupId=my.groupId \
 -DartifactId=my-source -DclassNamePrefix=MySource -DpackageName=mypackagename
 ```
 
-Configure the variables ``artifactId`` (which will be the Maven artifactId), ``classNamePrefix`` (which will be the class name of your data stream) and ``packageName``.
-
-For this tutorial, use ``Vehicle`` as ``classNamePrefix``.
-
-Your project will look as follows:
-
-<img src="/docs/img/tutorial-sources/project-structure.PNG" alt="Project Structure">
-
-That's it, go to the next section to learn how to create your first data stream!
+You will see a project structure similar to the structure shown in the [archetypes](06_extend-archetypes.md) section.
 
 <div class="admonition tip">
 <div class="admonition-title">Tip</div>
@@ -55,33 +47,20 @@ That's it, go to the next section to learn how to create your first data stream!
 ## Adding a data stream description
 
 Now we will add a new data stream definition.
-First, open the class `VehicleStream` which should look as follows:
+First, create a new class `MyVehicleStream`, which should look as follows:
 
 ```java
 
-package my.groupId.pe.mypackagename;
+package org.apache.streampipes.pe.example;
 
-import org.streampipes.model.SpDataStream;
-import org.streampipes.model.graph.DataSourceDescription;
-import org.streampipes.sdk.builder.DataStreamBuilder;
-import org.streampipes.sdk.helpers.EpProperties;
-import org.streampipes.sdk.helpers.Formats;
-import org.streampipes.sdk.helpers.Protocols;
-import org.streampipes.sources.AbstractAdapterIncludedStream;
+import org.apache.streampipes.model.SpDataStream;
+import org.apache.streampipes.sources.AbstractAdapterIncludedStream;
 
-
-public class MySourceStream extends AbstractAdapterIncludedStream {
+public class MyVehicleStream extends AbstractAdapterIncludedStream {
 
   @Override
-  public SpDataStream declareModel(DataSourceDescription sep) {
-    return DataStreamBuilder.create("my.groupId-mypackagename", "MySource", "")
-            .property(EpProperties.timestampProperty("timestamp"))
-
-            // configure your stream here
-
-            .format(Formats.jsonFormat())
-            .protocol(Protocols.kafka("localhost", 9092, "TOPIC_SHOULD_BE_CHANGED"))
-            .build();
+  public SpDataStream declareModel() {
+    // The stream definition will be created in the next steps of this tutorial
+    return null;
   }
 
   @Override
@@ -122,17 +101,17 @@ These four _event properties_ compose our _event schema_. An event property must
 
 In order to complete the minimum required specification of an event stream, we need to provide information on the transport format and protocol of the data stream at runtime.
 
-This can be achieved by extending the builder with the respective properties (which should already have been auto-generated):
+This can be achieved by extending the builder with the respective properties:
 ```java
 .format(Formats.jsonFormat())
-.protocol(Protocols.kafka("localhost", 9092, "TOPIC_SHOULD_BE_CHANGED"))
+.protocol(Protocols.kafka("localhost", 9094, "TOPIC_SHOULD_BE_CHANGED"))
 .build();
 ```
 
 Set ``org.streampipes.tutorial.vehicle`` as your new topic by replacing the term ``TOPIC_SHOULD_BE_CHANGED``.
 
 In this example, we defined that the data stream consists of events in a JSON format and that Kafka is used as a message broker to transmit events.
-The last build() method call triggers the construction of the RDF-based data stream definition.
+The final ``build()`` method call triggers the construction of the data stream definition.
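+
+Putting everything together, a possible ``declareModel`` implementation for the vehicle stream could look as follows. This is a sketch: the label texts and vocabulary choices are ours, while the four properties follow the event schema described above (helpers come from the SDK's ``sdk.builder``, ``sdk.helpers`` and ``vocabulary`` packages).
+
+```java
+@Override
+public SpDataStream declareModel() {
+  return DataStreamBuilder.create("org.streampipes.tutorial.vehicle", "Vehicle Stream",
+          "A stream of vehicle positions")
+          // The timestamp of the measurement
+          .property(EpProperties.timestampProperty("timestamp"))
+          // The vehicle's plate number
+          .property(EpProperties.stringEp(Labels.from("plate-number", "Plate Number", ""),
+                  "plateNumber", "http://my.company/plateNumber"))
+          // Latitude and longitude of the vehicle's current position
+          .property(EpProperties.doubleEp(Labels.from("latitude", "Latitude", ""), "latitude", Geo.lat))
+          .property(EpProperties.doubleEp(Labels.from("longitude", "Longitude", ""), "longitude", Geo.lng))
+          .format(Formats.jsonFormat())
+          .protocol(Protocols.kafka("localhost", 9094, "org.streampipes.tutorial.vehicle"))
+          .build();
+}
+```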
 
 That's it! In the next section, we will produce some example events and inspect the generated stream description.
 
@@ -144,27 +123,24 @@ Let's assume our stream should produce some random values that are sent to Strea
 @Override
   public void executeStream() {
 
-    SpKafkaProducer producer = new SpKafkaProducer("localhost:9092", "TOPIC_SHOULD_BE_CHANGED");
+    SpKafkaProducer producer = new SpKafkaProducer("localhost:9094", "my-topic", Collections.emptyList());
     Random random = new Random();
-    Runnable runnable = new Runnable() {
-      @Override
-      public void run() {
-        for (;;) {
-          JsonObject jsonObject = new JsonObject();
-          jsonObject.addProperty("timestamp", System.currentTimeMillis());
-          jsonObject.addProperty("plateNumber", "KA-FZ 1");
-          jsonObject.addProperty("latitude", random.nextDouble());
-          jsonObject.addProperty("longitude", random.nextDouble());
-
-          producer.publish(jsonObject.toString());
-
-          try {
-            Thread.sleep(1000);
-          } catch (InterruptedException e) {
-            e.printStackTrace();
-          }
-
+    Runnable runnable = () -> {
+      for (;;) {
+        JsonObject jsonObject = new JsonObject();
+        jsonObject.addProperty("timestamp", System.currentTimeMillis());
+        jsonObject.addProperty("plateNumber", "KA-FZ 1");
+        jsonObject.addProperty("latitude", random.nextDouble());
+        jsonObject.addProperty("longitude", random.nextDouble());
+
+        producer.publish(jsonObject.toString());
+
+        try {
+          TimeUnit.SECONDS.sleep(1);
+        } catch (InterruptedException e) {
+          e.printStackTrace();
         }
+
       }
     };
 
@@ -174,108 +150,60 @@ Let's assume our stream should produce some random values that are sent to Strea
 
 Change the topic and the URL of your Kafka broker to match your setup.
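+
+The snippet above also needs a few imports. Assuming Gson's ``JsonObject`` is used to build the JSON payload, they should look roughly like this:
+
+```java
+import org.apache.streampipes.messaging.kafka.SpKafkaProducer;
+
+import com.google.gson.JsonObject;
+
+import java.util.Collections;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+```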
 
-## Adding a source description
+## Registering the data stream
 
-A data source can be seen like a container for a set of data streams. Usually, a data source includes events that are logically or physically connected.
-For instance, in our example we would add other streams produced by vehicle sensors (such as fuel consumption) to the same data source description.
+Next, you need to register the stream in the service definition. Open the ``Init`` class and register ``MyVehicleStream``:
 
-Open the class `DataSource` which should look as follows:
 ```java
 
-package my.groupId.pe.mypackagename;
-
-import org.streampipes.container.declarer.DataStreamDeclarer;
-import org.streampipes.container.declarer.SemanticEventProducerDeclarer;
-import org.streampipes.model.graph.DataSourceDescription;
-import org.streampipes.sdk.builder.DataSourceBuilder;
-
-import java.util.Arrays;
-import java.util.List;
-
-
-public class DataSource implements SemanticEventProducerDeclarer {
-
-  public DataSourceDescription declareModel() {
-    return DataSourceBuilder.create("my.groupId.mypackagename.source", "MySource " +
-        "Source", "")
+  @Override
+  public SpServiceDefinition provideServiceDefinition() {
+    return SpServiceDefinitionBuilder.create("org.apache.streampipes",
+                    "human-readable service name",
+                    "human-readable service description", 8090)
+            .registerPipelineElement(new ExampleDataProcessor())
+            .registerPipelineElement(new ExampleDataSink())
+            .registerPipelineElement(new MyVehicleStream())
+            .registerMessagingFormats(
+                    new JsonDataFormatFactory(),
+                    new CborDataFormatFactory(),
+                    new SmileDataFormatFactory(),
+                    new FstDataFormatFactory())
+            .registerMessagingProtocols(
+                    new SpKafkaProtocolFactory(),
+                    new SpJmsProtocolFactory(),
+                    new SpMqttProtocolFactory())
             .build();
   }
 
-  public List<DataStreamDeclarer> getEventStreams() {
-    return Arrays.asList(new MySourceStream());
-  }
-}
-```
-First, we need to define the source. Similar to data streams, a source consists of an id, a human-readable name and a description.
-Replace the content defined in the `declareModel` method with the following code:
-```java
-return DataSourceBuilder.create("org.streampipes.tutorial.source.vehicle", "Vehicle Source", "A data source that " +
-    "holds event streams produced by vehicles.")
-    .build();
 ```
 
-## Preparing the container
-
-The final step is to define the deployment type of our new data source. In this tutorial, we will create a so-called `StandaloneModelSubmitter`.
-This client will start an embedded web server that provides the description of our data source.
-
-Go to the class `Init` that implements `StandaloneModelSubmitter`, which should look as follows:
-```java
-package my.groupId.main;
-
-import org.streampipes.container.init.DeclarersSingleton;
-import org.streampipes.container.standalone.init.StandaloneModelSubmitter;
-import my.groupId.config.Config;
-import my.groupId.pe.mypackagename.DataSource;
-
-public class Init extends StandaloneModelSubmitter {
-
-  public static void main(String[] args) throws Exception {
-    DeclarersSingleton.getInstance()
-            .add(new DataSource());
-
-    new Init().init(Config.INSTANCE);
-
-  }
-}
-```
-This code adds the `VehicleSource`. Finally, the `init` method is called
-which triggers the generation of the corresponding RDF description and startup of the web server.
-
-<div class="admonition info">
-<div class="admonition-title">Info</div>
-<p>In the example above, we make use of a class `Config`.
-       This class contains both mandatory and additional configuration parameters required by a pipeline element container.
-       These values are stored in the Consul-based key-value store of your StreamPipes installation.
-       The SDK guide contains a detailed manual on managing container configurations.</p>
-</div>
+You can remove the other two example classes if you want.
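+
+The service definition is picked up when the service starts. The archetype's ``Init`` class typically boots the service from a standard ``main`` method; a condensed sketch (based on the 0.69 archetype, details may vary across versions) looks like this:
+
+```java
+public class Init extends StandaloneModelSubmitter {
+
+  public static void main(String[] args) {
+    // Starts the extensions service and registers it with the StreamPipes core
+    new Init().init();
+  }
+
+  // provideServiceDefinition() as shown above
+}
+```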
 
-## Starting the container
+## Starting the service
 
 <div class="admonition tip">
 <div class="admonition-title">Tip</div>
-<p>By default, the container registers itself using the hostname later used by the Docker container, leading to a 404 error when you try to access an RDF description.
-       For local development, we provide an environment file in the ``development`` folder. You can add your hostname here, which will override settings from the Config class.
-       For instance, use the IntelliJ ``EnvFile`` plugin to automatically provide the environment variables upon start.
+<p>Once you start the service, it registers itself in StreamPipes using an auto-discovered hostname, which should work out-of-the-box.
+In some cases, the detected hostname is not resolvable from within a container (where the core is running); if so, provide an SP_HOST environment variable to override the auto-discovery.
 </p>
 </div>
 
 Now we are ready to start our first service!
 
-Execute the main method in the class `Main` we've just created, open a web browser and navigate to http://localhost:8090, or change the port according to the value of the ``SP_PORT`` variable in the env file.
+Execute the main method in the class `Init`, open a web browser and navigate to http://localhost:8090 (or the port configured in the ``SP_PORT`` environment variable).
 
 You should see something as follows:
 
 <img src="/docs/img/tutorial-sources/pe-overview.PNG" alt="Pipeline Element Container Overview">
 
-Click on the link of the data source to see the RDF description of the pipeline element.
+Click on the link of the data source to see the generated description of the pipeline element.
 
-<img src="/docs/img/tutorial-sources/pe-rdf.PNG" alt="Pipeline Element RDF description">
+<img src="/docs/img/tutorial-sources/pe-rdf.PNG" alt="Pipeline Element description">
 
-The container automatically registers itself in the Consul installation of StreamPipes.
+The container automatically registers itself in StreamPipes.
 
-To install the just created element, open the StreamPipes UI and follow the manual provided in the [user guide](../user
--guide-introduction).
+To install the newly created element, open the StreamPipes UI and install the source via the ``Install Pipeline Elements`` section.
 
 ## Read more
 
diff --git a/documentation/website/i18n/en.json b/documentation/website/i18n/en.json
index 135f044..818a70a 100644
--- a/documentation/website/i18n/en.json
+++ b/documentation/website/i18n/en.json
@@ -93,6 +93,10 @@
         "title": "StreamPipes CLI",
         "sidebar_label": "StreamPipes CLI"
       },
+      "extend-first-processor": {
+        "title": "Your first data processor",
+        "sidebar_label": "Your first data processor"
+      },
       "extend-sdk-event-model": {
         "title": "SDK Guide: Event Model",
         "sidebar_label": "SDK: Event Model"
diff --git a/documentation/website/sidebars.json b/documentation/website/sidebars.json
index e58a000..15dc995 100644
--- a/documentation/website/sidebars.json
+++ b/documentation/website/sidebars.json
@@ -183,6 +183,7 @@
       "extend-setup",
       "extend-cli",
       "extend-archetypes",
+      "extend-first-processor",
       "extend-tutorial-data-sources",
       "extend-tutorial-data-processors",
       "extend-tutorial-data-sinks",
diff --git a/documentation/website/static/img/archetype/project_structure.png b/documentation/website/static/img/archetype/project_structure.png
index 87c155c..c3e66c9 100644
Binary files a/documentation/website/static/img/archetype/project_structure.png and b/documentation/website/static/img/archetype/project_structure.png differ