Posted to commits@zeppelin.apache.org by zj...@apache.org on 2021/04/25 14:28:49 UTC
[zeppelin] branch master updated: [ZEPPELIN-5336] Inaccurate markdowns in documentations
This is an automated email from the ASF dual-hosted git repository.
zjffdu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git
The following commit(s) were added to refs/heads/master by this push:
new 3c97c7f [ZEPPELIN-5336] Inaccurate markdowns in documentations
3c97c7f is described below
commit 3c97c7f2386c4014c8659204f6442142ab54537a
Author: cuspymd <cu...@gmail.com>
AuthorDate: Fri Apr 23 11:32:49 2021 +0000
[ZEPPELIN-5336] Inaccurate markdowns in documentations
### What is this PR for?
Fix inaccurate markdown and typos in the documentation
Improve code highlighting
### What type of PR is it?
[Documentation]
### What is the Jira issue?
* https://issues.apache.org/jira/browse/ZEPPELIN-5336
### How should this be tested?
* Checked updated documents locally
### Questions:
* Do the license files need updating? No
* Are there breaking changes for older versions? No
* Does this need documentation? No
Author: cuspymd <cu...@gmail.com>
Closes #4100 from cuspymd/fix-documentation and squashes the following commits:
f7c860a28 [cuspymd] Fix missing highlight tag
d388138aa [cuspymd] Improve markdown and code highlight in documents
---
docs/interpreter/spark.md | 10 +++----
docs/quickstart/docker.md | 34 +++++++++++-----------
docs/quickstart/install.md | 6 ++--
docs/usage/interpreter/interpreter_binding_mode.md | 4 +--
docs/usage/other_features/zeppelin_context.md | 2 +-
5 files changed, 29 insertions(+), 27 deletions(-)
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 105cc74..0d3fb00 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -436,11 +436,11 @@ By default, each sql statement would run sequentially in `%spark.sql`. But you c
2. Configure pools by creating `fairscheduler.xml` under your `SPARK_CONF_DIR`, check the official spark doc [Configuring Pool Properties](http://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties)
3. Set pool property via setting paragraph property. e.g.
-```
-%spark(pool=pool1)
+ ```
+ %spark(pool=pool1)
-sql statement
-```
+ sql statement
+ ```
This pool feature is also available for all versions of scala Spark, PySpark. For SparkR, it is only available starting from 2.3.0.
@@ -478,7 +478,7 @@ you need to enable user impersonation for more security control. In order the en
**Step 1** Enable user impersonation setting hadoop's `core-site.xml`. E.g. if you are using user `zeppelin` to launch Zeppelin, then add the following to `core-site.xml`, then restart both hdfs and yarn.
-```
+```xml
<property>
<name>hadoop.proxyuser.zeppelin.groups</name>
<value>*</value>
diff --git a/docs/quickstart/docker.md b/docs/quickstart/docker.md
index 12d1671..0c6a478 100644
--- a/docs/quickstart/docker.md
+++ b/docs/quickstart/docker.md
@@ -46,7 +46,7 @@ By default, docker provides an interface as a sock file, so you need to modify t
vi `/etc/docker/daemon.json`, Add `tcp://0.0.0.0:2375` to the `hosts` configuration item.
-```
+```json
{
...
"hosts": ["tcp://0.0.0.0:2375","unix:///var/run/docker.sock"]
@@ -60,27 +60,27 @@ vi `/etc/docker/daemon.json`, Add `tcp://0.0.0.0:2375` to the `hosts` configurat
1. Modify these 2 configuration items in `zeppelin-site.xml`.
-```
-<property>
- <name>zeppelin.run.mode</name>
- <value>docker</value>
- <description>'auto|local|k8s|docker'</description>
-</property>
-
-<property>
- <name>zeppelin.docker.container.image</name>
- <value>apache/zeppelin</value>
- <description>Docker image for interpreters</description>
-</property>
+ ```xml
+ <property>
+ <name>zeppelin.run.mode</name>
+ <value>docker</value>
+ <description>'auto|local|k8s|docker'</description>
+ </property>
+
+ <property>
+ <name>zeppelin.docker.container.image</name>
+ <value>apache/zeppelin</value>
+ <description>Docker image for interpreters</description>
+ </property>
```
2. set timezone in zeppelin-env.sh
-Set to the same time zone as the zeppelin server, keeping the time zone in the interpreter docker container the same as the server. E.g, `"America/New_York"` or `"Asia/Shanghai"`
+ Set to the same time zone as the zeppelin server, keeping the time zone in the interpreter docker container the same as the server. E.g, `"America/New_York"` or `"Asia/Shanghai"`
-```
-export DOCKER_TIME_ZONE="America/New_York"
-```
+ ```bash
+ export DOCKER_TIME_ZONE="America/New_York"
+ ```
## Build Zeppelin image manually
diff --git a/docs/quickstart/install.md b/docs/quickstart/install.md
index ec0ccc8..aa14d9f 100644
--- a/docs/quickstart/install.md
+++ b/docs/quickstart/install.md
@@ -90,7 +90,9 @@ docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.9.0
To persist `logs` and `notebook` directories, use the [volume](https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v-read-only) option for docker container.
```bash
-docker run -p 8080:8080 --rm -v $PWD/logs:/logs -v $PWD/notebook:/notebook -e ZEPPELIN_LOG_DIR='/logs' -e ZEPPELIN_NOTEBOOK_DIR='/notebook' --name zeppelin apache/zeppelin:0.9.0
+docker run -p 8080:8080 --rm -v $PWD/logs:/logs -v $PWD/notebook:/notebook \
+ -e ZEPPELIN_LOG_DIR='/logs' -e ZEPPELIN_NOTEBOOK_DIR='/notebook' \
+ --name zeppelin apache/zeppelin:0.9.0
```
If you have trouble accessing `localhost:8080` in the browser, Please clear browser cache.
@@ -119,7 +121,7 @@ bin/zeppelin-daemon.sh upstart
**zeppelin.conf**
-```
+```aconf
description "zeppelin"
start on (local-filesystems and net-device-up IFACE!=lo)
diff --git a/docs/usage/interpreter/interpreter_binding_mode.md b/docs/usage/interpreter/interpreter_binding_mode.md
index 55c819c..a56759a 100644
--- a/docs/usage/interpreter/interpreter_binding_mode.md
+++ b/docs/usage/interpreter/interpreter_binding_mode.md
@@ -76,8 +76,8 @@ So, each note has an absolutely isolated session. (But it is still possible to s
Mode | Each notebook... | Benefits | Disadvantages | Sharing objects
--- | --- | --- | --- | ---
**shared** | Shares a single session in a single interpreter process (JVM) | Low resource utilization and it's easy to share data between notebooks | All notebooks are affected if the interpreter process dies | Can share directly
-**scoped** | Has its own session in the same interpreter process (JVM) | Less resource utilization than isolated mode | All notebooks are affected if the interpreter process dies | Can't share directly, but it's possible to share objects via [ResourcePool](../../interpreter/spark.html#object-exchange))
-**isolated** | Has its own Interpreter Process | One notebook is not affected directly by other notebooks (**per note**) | Can't share data between notebooks easily (**per note**) | Can't share directly, but it's possible to share objects via [ResourcePool](../../interpreter/spark.html#object-exchange))
+**scoped** | Has its own session in the same interpreter process (JVM) | Less resource utilization than isolated mode | All notebooks are affected if the interpreter process dies | Can't share directly, but it's possible to share objects via [ResourcePool](../../interpreter/spark.html#object-exchange)
+**isolated** | Has its own Interpreter Process | One notebook is not affected directly by other notebooks (**per note**) | Can't share data between notebooks easily (**per note**) | Can't share directly, but it's possible to share objects via [ResourcePool](../../interpreter/spark.html#object-exchange)
In the case of the **per user** scope (available in a multi-user environment), Zeppelin manages interpreter sessions on a per user basis rather than a per note basis. For example:
diff --git a/docs/usage/other_features/zeppelin_context.md b/docs/usage/other_features/zeppelin_context.md
index e5db9a2..7b25171 100644
--- a/docs/usage/other_features/zeppelin_context.md
+++ b/docs/usage/other_features/zeppelin_context.md
@@ -64,7 +64,7 @@ So you can put some objects using Scala (in an Apache Spark cell) and read it fr
// Put/Get object from scala
%spark
-val myObject = "hello'
+val myObject = "hello"
z.put("objName", myObject)
z.get("objName")