Posted to commits@linkis.apache.org by le...@apache.org on 2022/09/25 14:30:04 UTC

[incubator-linkis-website] branch dev updated: [Doc] add Build Linkis Docker Image

This is an automated email from the ASF dual-hosted git repository.

legendtkl pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new cc43543d2a [Doc] add Build Linkis Docker Image
     new 9bc4e5a8a7 Merge pull request #522 from AaronLinOops/dev
cc43543d2a is described below

commit cc43543d2a39ac038cd84200b8a988092e7e6edf
Author: aaron <aa...@catdb.io>
AuthorDate: Sat Sep 24 00:19:21 2022 +0800

    [Doc] add Build Linkis Docker Image
---
 docs/development/linkis_docker_build_instrument.md | 136 +++++++++++++++++++++
 .../development/linkis_docker_build_instrument.md  | 136 +++++++++++++++++++++
 2 files changed, 272 insertions(+)

diff --git a/docs/development/linkis_docker_build_instrument.md b/docs/development/linkis_docker_build_instrument.md
new file mode 100644
index 0000000000..f8faca2417
--- /dev/null
+++ b/docs/development/linkis_docker_build_instrument.md
@@ -0,0 +1,136 @@
+---
+title: Build Linkis Docker Image
+sidebar_position: 5
+---
+
+## Linkis Image Components
+
+Starting from version 1.3.0, Linkis introduces some Docker images, and the Dockerfile files for all the images are in the `linkis-dist/docker` directory.
+
+The images currently included are as follows:
+
+### linkis-base
+  
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments, which can be overridden with the `--build-arg` option of the `docker build` command (see the example below): 
+      * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+      * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+  - __Description__: Base image for the Linkis services, mainly used for pre-installing external libraries and initializing the system environment and directories. This image does not need to be updated frequently; by leveraging Docker's image caching mechanism, it can speed up the creation of Linkis images.
+
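+The arguments above do not need to be set for a default build. Below is a minimal sketch of overriding them with `docker build`, assuming the `linkis-base` stage is exposed as a build target in `linkis.Dockerfile` and reusing the build context from the `linkis-ldh` example later in this document:
+
+``` shell
+# Build only the base image with explicit (here: the documented default) JDK values
+$> docker build -t linkis-base:dev --target linkis-base \
+     --build-arg JDK_VERSION=1.8.0-openjdk \
+     --build-arg JDK_BUILD_REVISION=1.8.0.332.b09-1.el7_9 \
+     -f linkis-dist/docker/linkis.Dockerfile linkis-dist/target
+```
+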
+### linkis
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments:
+      * LINKIS_VERSION: Linkis Version, default is 0.0.0
+      * LINKIS_SYSTEM_USER: System user, default is hadoop 
+      * LINKIS_SYSTEM_UID: System user UID, default is 9001
+      * LINKIS_HOME: Linkis home directory, default is /opt/linkis; the binary packages and various scripts will be deployed here
+      * LINKIS_CONF_DIR: Linkis configuration directory, default is /etc/linkis-conf
+      * LINKIS_LOG_DIR: Linkis log directory, default is /var/logs/linkis
+  - __Description__: Linkis service image; it contains the binary packages of all Apache Linkis components and various scripts. A quick way to check the resulting layout is shown below.
+
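+A minimal sketch of checking the layout of a built image, assuming it is tagged `linkis:dev` (the default tag referenced later for `LINKIS_IMAGE`) and uses the default `LINKIS_HOME`:
+
+``` shell
+# Override the entry point to list the Linkis home directory baked into the image
+$> docker run --rm --entrypoint /bin/ls linkis:dev /opt/linkis
+```
+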
+### linkis-web
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments:
+      * LINKIS_VERSION: Linkis Version, default is 0.0.0
+      * LINKIS_HOME: Linkis home directory, default is /opt/linkis; web-related packages will be placed under ${LINKIS_HOME}-web 
+  - __Description__: Linkis web console image; it contains the binary packages and various scripts for the Apache Linkis web console, which uses nginx as the web server. The port it declares can be inspected as shown below. 
+
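+To see which port the console's nginx server is declared on, one option is to inspect the image metadata; this assumes a `linkis-web:dev` tag and that the Dockerfile declares the port with `EXPOSE` (the output is empty otherwise):
+
+``` shell
+$> docker image inspect linkis-web:dev --format '{{ .Config.ExposedPorts }}'
+```
+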
+### linkis-ldh
+  - __Dockerfile__: 
+    - File Name: ldh.Dockerfile
+    - Arguments:
+      * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+      * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+      * LINKIS_VERSION: Linkis Version, default is 0.0.0
+      * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
+      * HADOOP_VERSION: Apache Hadoop version, default is 2.7.2
+      * HIVE_VERSION: Apache Hive version, default is 2.3.3
+      * SPARK_VERSION:  Apache Spark version, default is 2.4.3
+      * SPARK_HADOOP_VERSION: Hadoop version suffix of the pre-built Apache Spark distribution package, default is 2.7. This value cannot be set arbitrarily; it must be consistent with an official Apache Spark release, otherwise the relevant component cannot be downloaded automatically.  
+      * FLINK_VERSION:  Apache Flink version, default is 1.12.2
+      * ZOOKEEPER_VERSION:  Apache Zookeeper version, default is 3.5.9
+  - __Description__: LDH is a test-oriented image. It provides a complete, pseudo-distributed Apache Hadoop runtime environment, including HDFS, YARN, Hive, Spark, Flink and Zookeeper, so that you can easily bring up a full-featured Hadoop environment in development for testing Linkis functionality. The ENTRYPOINT of the LDH image is `linkis-dist/docker/scripts/entry-point-ldh.sh`; some initialization operations, such as formatting HDFS, are done in this script. A usage sketch follows. 
+
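+A minimal usage sketch (the container name and the HDFS check are illustrative assumptions; adjust them to your environment):
+
+``` shell
+# Start an LDH container in the background; the entry point script initializes and starts the services
+$> docker run -d --name ldh linkis-ldh:dev
+# After the services are up, verify that HDFS answers inside the container
+$> docker exec ldh hdfs dfs -ls /
+```
+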
+### Integrate with MySQL JDBC driver
+
+Due to MySQL licensing restrictions, the official Linkis image does not integrate the MySQL JDBC driver; as a result, users need to put the MySQL JDBC driver into the container themselves before using Linkis. To simplify this process, we provide a Dockerfile (see the sketch after the argument list):
+
+- File Name: linkis-with-mysql-jdbc.Dockerfile
+- Arguments:
+  * LINKIS_IMAGE: Linkis image name with tag; the custom image containing the MySQL JDBC driver is built on top of it, default is `linkis:dev`
+  * LINKIS_HOME: Linkis home directory, default is /opt/linkis
+  * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
+
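+A minimal sketch of using this Dockerfile directly (the Maven-driven equivalent, `-Dlinkis.build.with.jdbc=true`, is shown in the next section; the output tag and the build context directory used here are assumptions):
+
+``` shell
+$> docker build -t linkis:with-jdbc \
+     --build-arg LINKIS_IMAGE=linkis:dev \
+     --build-arg MYSQL_JDBC_VERSION=8.0.28 \
+     -f linkis-dist/docker/linkis-with-mysql-jdbc.Dockerfile linkis-dist/docker
+```
+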
+## Build Linkis Images
+
+> Because some Bash scripts are used in the image creation process, Linkis image packaging is currently only supported on Linux/MacOS. 
+
+### Building images with Maven
+
+The Linkis image build process is integrated into the project's Maven profiles, so Linkis images can be created with Maven commands. 
+
+1. Build image `linkis` 
+
+    ``` shell
+    # Building a Linkis image without MySQL JDBC
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
+    # Building a Linkis image that contains the MySQL JDBC driver
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.with.jdbc=true
+    ```
+    Note: 
+    * The `linkis-base` image will be built on the first build of the `linkis` image, and will not be rebuilt if the Dockerfile has not been modified;
+    * Due to the syntax of the Maven POM file, `linkis.build.with.jdbc` is a pseudo-boolean argument; in fact, `-Dlinkis.build.with.jdbc=false` has the same effect as `-Dlinkis.build.with.jdbc=true`. If you really mean `-Dlinkis.build.with.jdbc=false`, simply omit the argument. The other arguments behave similarly. 
+
+2. Build image `linkis-web` 
+
+    ``` shell
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true
+    ```
+
+3. Build image `linkis-ldh`
+
+    ``` shell
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.ldh=true
+    ```
+
+    Note: 
+    * In the process of creating this image, the pre-built binary distribution of each Hadoop component is downloaded from the official [Apache Archives](https://archive.apache.org/dist/) site. However, depending on the network environment (for example in China or other regions where access to the Apache site is slow), this can be very slow. If you have access to a faster mirror, you can manually download the corresponding packages and move them into the directory `${HOME}/.linkis-build-cache` to work around this problem, as shown in the sketch below. 
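+
+    A minimal sketch of pre-populating the cache (the mirror URL is a placeholder, and the assumption is that the cache expects the upstream archive file names):
+
+    ``` shell
+    # Create the local build cache directory if it does not exist yet
+    $> mkdir -p ${HOME}/.linkis-build-cache
+    # Download the Hadoop 2.7.2 binary distribution from a faster mirror into the cache
+    $> curl -L -o ${HOME}/.linkis-build-cache/hadoop-2.7.2.tar.gz \
+         https://<your-mirror>/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
+    ```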
+
+All of the above arguments can be used in combination, so if you want to build all the images at once, you can use the following command:
+
+``` shell
+$> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true -Dlinkis.build.ldh=true
+```
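+
+After the build finishes, you can list the resulting images to confirm they were created; the exact tags depend on the Linkis version being built:
+
+``` shell
+$> docker images | grep linkis
+```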
+
+### Building images with the `docker build` command
+
+It is convenient to build images with Maven, but the build process involves a lot of repeated compilation, which makes the whole process rather long. If you only adjust the internal structure of the image, such as the directory layout or initialization commands, you can use the `docker build` command to quickly rebuild the image for testing after building it with Maven once. 
+
+An example of building a `linkis-ldh` image using the `docker build` command is as follows:
+
+``` shell
+$> docker build -t linkis-ldh:dev --target linkis-ldh -f linkis-dist/docker/ldh.Dockerfile linkis-dist/target
+
+[+] Building 0.2s (19/19) FINISHED                                                                                                                                                                                      
+ => [internal] load build definition from ldh.Dockerfile               0.0s
+ => => transferring dockerfile: 41B                                    0.0s
+ => [internal] load .dockerignore                                      0.0s
+ => => transferring context: 2B                                        0.0s
+ => [internal] load metadata for docker.io/library/centos:7            0.0s
+ => [ 1/14] FROM docker.io/library/centos:7                            0.0s
+ => [internal] load build context                                      0.0s
+ => => transferring context: 1.93kB                                    0.0s
+ => CACHED [ 2/14] RUN useradd -r -s ...                               0.0s
+ => CACHED [ 3/14] RUN yum install -y       ...                        0.0s
+ ...
+ => CACHED [14/14] RUN chmod +x /usr/bin/start-all.sh                  0.0s
+ => exporting to image                                                 0.0s
+ => => exporting layers                                                0.0s
+ => => writing image sha256:aa3bde0a31bf704413fb75673fc2894b03a0840473d8fe15e2d7f7dd22f1f854     0.0s
+ => => naming to docker.io/library/linkis-ldh:dev 
+```
+
+For other images, please refer to the relevant profile in `linkis-dist/pom.xml`.
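+
+For example, a sketch of building the `linkis` image in the same way (this assumes `linkis` is also defined as a build target in `linkis.Dockerfile`, mirroring the `linkis-ldh` example above):
+
+``` shell
+$> docker build -t linkis:dev --target linkis -f linkis-dist/docker/linkis.Dockerfile linkis-dist/target
+```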
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/linkis_docker_build_instrument.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/linkis_docker_build_instrument.md
new file mode 100644
index 0000000000..7fb0d0631f
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/linkis_docker_build_instrument.md
@@ -0,0 +1,136 @@
+---
+title: Build Linkis Docker Image
+sidebar_position: 5
+---
+
+## Linkis Image Components
+
+Starting from version 1.3.0, Linkis introduces some Docker images. The Dockerfile files for all of the project's images are in the `linkis-dist/docker` directory.
+
+The images currently included are as follows:
+
+### linkis-base
+  
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments, which can be overridden with the `--build-arg` option of the `docker build` command:
+      * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+      * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+  - __Description__: Base image for the Linkis services, mainly used for pre-installing the external libraries required by the Linkis system and initializing the system environment and directories. This image does not need to be updated frequently; by leveraging Docker's image caching mechanism, it can speed up the creation of Linkis images.
+
+### linkis
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments:
+      * LINKIS_VERSION: Linkis version, default is 0.0.0
+      * LINKIS_SYSTEM_USER: System user, default is hadoop 
+      * LINKIS_SYSTEM_UID: System user UID, default is 9001
+      * LINKIS_HOME: Linkis home directory, default is /opt/linkis; the binary packages and various scripts of the system are deployed here
+      * LINKIS_CONF_DIR: Linkis configuration directory, default is /etc/linkis-conf
+      * LINKIS_LOG_DIR: Linkis log directory, default is /var/logs/linkis
+  - __Description__: Linkis service image; it contains the binary packages of all Apache Linkis components and various scripts.
+
+### linkis-web
+  - __Dockerfile__: 
+    - File Name: linkis.Dockerfile
+    - Arguments:
+      * LINKIS_VERSION: Linkis version, default is 0.0.0
+      * LINKIS_HOME: Linkis home directory, default is /opt/linkis; web-related packages will be placed under ${LINKIS_HOME}-web 
+  - __Description__: Linkis web console image; it contains the binary packages and various scripts for the Apache Linkis web console, which uses nginx as the web server.
+
+### linkis-ldh
+  - __Dockerfile__: 
+    - File Name: ldh.Dockerfile
+    - Arguments:
+      * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+      * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+      * LINKIS_VERSION: Linkis version, default is 0.0.0
+      * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
+      * HADOOP_VERSION: Apache Hadoop version, default is 2.7.2
+      * HIVE_VERSION: Apache Hive version, default is 2.3.3
+      * SPARK_VERSION: Apache Spark version, default is 2.4.3
+      * SPARK_HADOOP_VERSION: Hadoop version suffix of the pre-built Apache Spark distribution package, default is 2.7. This value cannot be set arbitrarily; it must be consistent with an official Apache Spark release, otherwise the relevant component cannot be downloaded automatically
+      * FLINK_VERSION: Apache Flink version, default is 1.12.2
+      * ZOOKEEPER_VERSION: Apache Zookeeper version, default is 3.5.9
+  - __Description__: LDH is a test-oriented image. It provides a complete, pseudo-distributed Apache Hadoop runtime environment, including HDFS, YARN, Hive, Spark, Flink and Zookeeper, so that you can easily bring up a full-featured Hadoop environment in development for testing Linkis functionality. The ENTRYPOINT of the LDH image is `linkis-dist/docker/scripts/entry-point-ldh.sh`; some initialization operations, such as formatting HDFS, are done in this script.
+
+### Integrate with MySQL JDBC driver
+
+Due to MySQL licensing restrictions, the officially released Linkis image does not integrate the MySQL JDBC driver; users need to put the MySQL JDBC driver into the container themselves before using the Linkis container. To simplify this process, we provide a Dockerfile:
+
+- File Name: linkis-with-mysql-jdbc.Dockerfile
+- Arguments:
+  * LINKIS_IMAGE: Linkis image name (with tag); the custom image containing the MySQL JDBC driver is built on top of it, default is `linkis:dev`
+  * LINKIS_HOME: Linkis home directory, default is /opt/linkis
+  * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
+
+## Build Linkis Images
+
+> Because some Bash scripts are used in the image creation process, Linkis image packaging is currently only supported on Linux/MacOS.
+
+### Building images with Maven
+
+The Linkis image build process is integrated into the project's Maven profiles, so Linkis images can be created with Maven commands.
+
+1. Build the `linkis` image
+
+    ``` shell
+    # Building a Linkis image without the MySQL JDBC driver
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
+    # Building a Linkis image that contains the MySQL JDBC driver
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.with.jdbc=true
+    ```
+    Note: 
+    * The `linkis-base` image will be built on the first build of the `linkis` image, and will not be rebuilt if the Dockerfile has not been modified;
+    * Due to the syntax of the Maven POM file, `linkis.build.with.jdbc` is a pseudo-boolean argument; in fact, `-Dlinkis.build.with.jdbc=false` has the same effect as `-Dlinkis.build.with.jdbc=true`. If you really mean `-Dlinkis.build.with.jdbc=false`, simply omit the argument. The other arguments behave similarly.
+
+2. Build the `linkis-web` image
+
+    ``` shell
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true
+    ```
+
+3. Build the `linkis-ldh` image
+
+    ``` shell
+    $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.ldh=true
+    ```
+
+    Note: 
+    * In the process of creating this image, the pre-built binary distribution of each Hadoop component is downloaded from the official [Apache Archives](https://archive.apache.org/dist/) site. However, restricted by the network environment in China, this approach can be very slow. If you have a faster mirror site, you can manually download the corresponding packages from it and move them into the directory `${HOME}/.linkis-build-cache` to work around this problem.
+
+All of the above arguments can be used in combination; if you want to build all the images at once, you can use the following command:
+
+``` shell
+$> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true -Dlinkis.build.ldh=true
+```
+
+### Building images with the `docker build` command
+
+Building images with Maven is convenient, but the build process involves a lot of repeated compilation, which makes the whole process rather long. If you only adjust the internal structure of the image, such as the directory layout or initialization commands, you can use the `docker build` command directly to quickly rebuild the image for testing after building it with Maven once.
+
+An example of building the `linkis-ldh` image with the `docker build` command is as follows:
+
+``` shell
+$> docker build -t linkis-ldh:dev --target linkis-ldh -f linkis-dist/docker/ldh.Dockerfile linkis-dist/target
+
+[+] Building 0.2s (19/19) FINISHED                                                                                                                                                                                      
+ => [internal] load build definition from ldh.Dockerfile               0.0s
+ => => transferring dockerfile: 41B                                    0.0s
+ => [internal] load .dockerignore                                      0.0s
+ => => transferring context: 2B                                        0.0s
+ => [internal] load metadata for docker.io/library/centos:7            0.0s
+ => [ 1/14] FROM docker.io/library/centos:7                            0.0s
+ => [internal] load build context                                      0.0s
+ => => transferring context: 1.93kB                                    0.0s
+ => CACHED [ 2/14] RUN useradd -r -s ...                               0.0s
+ => CACHED [ 3/14] RUN yum install -y       ...                        0.0s
+ ...
+ => CACHED [14/14] RUN chmod +x /usr/bin/start-all.sh                  0.0s
+ => exporting to image                                                 0.0s
+ => => exporting layers                                                0.0s
+ => => writing image sha256:aa3bde0a31bf704413fb75673fc2894b03a0840473d8fe15e2d7f7dd22f1f854     0.0s
+ => => naming to docker.io/library/linkis-ldh:dev 
+```
+
+For the build commands of other images, please refer to the relevant profiles in `linkis-dist/pom.xml`.


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org