Posted to notifications@shardingsphere.apache.org by zh...@apache.org on 2020/07/20 06:54:35 UTC
[shardingsphere-elasticjob] branch master updated: fixes some
document errors. (#1187)
This is an automated email from the ASF dual-hosted git repository.
zhangliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere-elasticjob.git
The following commit(s) were added to refs/heads/master by this push:
new 0954dce fixes some document errors. (#1187)
0954dce is described below
commit 0954dceb02892a9ed5c73e9e15b0200ddaf2b08a
Author: Zonglei Dong <do...@apache.org>
AuthorDate: Mon Jul 20 14:54:26 2020 +0800
fixes some document errors. (#1187)
* fixes document title, to uppercase.
* fixes document, format table style.
---
.../usage/event-trace/table-structure.en.md | 56 +++++++++++-----------
.../usage/job-api/job-interface.en.md | 2 +-
2 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/docs/content/user-manual/elasticjob-lite/usage/event-trace/table-structure.en.md b/docs/content/user-manual/elasticjob-lite/usage/event-trace/table-structure.en.md
index 2cb150e..9ebd3d2 100644
--- a/docs/content/user-manual/elasticjob-lite/usage/event-trace/table-structure.en.md
+++ b/docs/content/user-manual/elasticjob-lite/usage/event-trace/table-structure.en.md
@@ -6,21 +6,21 @@ chapter = true
In the database configured by the event tracing property `event_trace_rdb_url`, two tables, `JOB_EXECUTION_LOG` and `JOB_STATUS_TRACE_LOG`, and several indexes are created automatically.
-## JOB_EXECUTION_LOG columns
-
-| Column name | Column type | Required | Describe |
-| ---------------- |:------------- |:--------- |:----------------------------------------------------- |
-| id | VARCHAR(40) | Yes | Primary key |
-| job_name | VARCHAR(100) | Yes | Job name |
-| task_id | VARCHAR(1000) | Yes | Task name, create new tasks every time the job runs. |
-| hostname | VARCHAR(255) | Yes | Hostname |
-| ip | VARCHAR(50) | Yes | IP |
-| sharding_item | INT | Yes | Sharding item |
+## JOB_EXECUTION_LOG Columns
+
+| Column name | Column type | Required | Describe |
+| ---------------- |:------------- |:--------- |:--------------------------------------------------------------------------------------- |
+| id | VARCHAR(40) | Yes | Primary key |
+| job_name | VARCHAR(100) | Yes | Job name |
+| task_id          | VARCHAR(1000) | Yes       | Task name; a new task is created every time the job runs.                                |
+| hostname | VARCHAR(255) | Yes | Hostname |
+| ip | VARCHAR(50) | Yes | IP |
+| sharding_item | INT | Yes | Sharding item |
| execution_source | VARCHAR(20) | Yes | Source of job execution. The value options are `NORMAL_TRIGGER`, `MISFIRE`, `FAILOVER`. |
-| failure_cause | VARCHAR(2000) | No | The reason for execution failure |
-| is_success | BIT | Yes | Execute successfully or not |
-| start_time | TIMESTAMP | Yes | Job start time |
-| complete_time | TIMESTAMP | No | Job end time |
+| failure_cause | VARCHAR(2000) | No | The reason for execution failure |
+| is_success | BIT | Yes | Execute successfully or not |
+| start_time | TIMESTAMP | Yes | Job start time |
+| complete_time | TIMESTAMP | No | Job end time |
`JOB_EXECUTION_LOG` records the execution history of each job.
There are two steps:
@@ -28,21 +28,21 @@ There are two steps:
1. When the job starts executing, the program creates a record in `JOB_EXECUTION_LOG`, and all fields except `failure_cause` and `complete_time` are non-empty.
1. When the job completes, the program updates that record, setting the columns `is_success`, `complete_time`, and `failure_cause` (if the job execution failed).
-## JOB_STATUS_TRACE_LOG columns
-
-| Column name | Column type | Required | Describe |
-| ---------------- |:--------------|:----------|:------------------------------------------------------------------------------------------------------------- |
-| id | VARCHAR(40) | Yes | Primary key |
-| job_name | VARCHAR(100) | Yes | Job name |
-| original_task_id | VARCHAR(1000) | Yes | Original task name |
-| task_id | VARCHAR(1000) | Yes | Task name |
-| slave_id | VARCHAR(1000) | Yes | Server's name of executing the job. The valve is server's IP for `ElasticJob-Lite`, is `Mesos`'s primary key for `ElasticJob-Cloud`.|
-| source | VARCHAR(50) | Yes | Source of job execution, the value options are `CLOUD_SCHEDULER`, `CLOUD_EXECUTOR`, `LITE_EXECUTOR`. |
-| execution_type | VARCHAR(20) | Yes | Type of job execution, the value options are `NORMAL_TRIGGER`, `MISFIRE`, `FAILOVER`. |
-| sharding_item | VARCHAR(255) | Yes | Collection of sharding item, multiple sharding items are separated by commas. |
+## JOB_STATUS_TRACE_LOG Columns
+
+| Column name | Column type | Required | Describe |
+| ---------------- |:--------------|:----------|:------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| id | VARCHAR(40) | Yes | Primary key |
+| job_name | VARCHAR(100) | Yes | Job name |
+| original_task_id | VARCHAR(1000) | Yes | Original task name |
+| task_id | VARCHAR(1000) | Yes | Task name |
+| slave_id         | VARCHAR(1000) | Yes       | Server's name of executing the job. The value is server's IP for `ElasticJob-Lite`, is `Mesos`'s primary key for `ElasticJob-Cloud`.                     |
+| source | VARCHAR(50) | Yes | Source of job execution, the value options are `CLOUD_SCHEDULER`, `CLOUD_EXECUTOR`, `LITE_EXECUTOR`. |
+| execution_type | VARCHAR(20) | Yes | Type of job execution, the value options are `NORMAL_TRIGGER`, `MISFIRE`, `FAILOVER`. |
+| sharding_item | VARCHAR(255) | Yes | Collection of sharding item, multiple sharding items are separated by commas. |
| state | VARCHAR(20) | Yes | State of job execution, the value options are `TASK_STAGING`, `TASK_RUNNING`, `TASK_FINISHED`, `TASK_KILLED`, `TASK_LOST`, `TASK_FAILED`, `TASK_ERROR`. |
-| message | VARCHAR(2000) | Yes | Message |
-| creation_time | TIMESTAMP | Yes | Create time |
+| message | VARCHAR(2000) | Yes | Message |
+| creation_time | TIMESTAMP | Yes | Create time |
`JOB_STATUS_TRACE_LOG` records job status changes.
Through each job's `task_id`, users can query the life cycle and running track of the job's status changes.
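For reference, the column listing above can be restated as DDL. This is a hand-written sketch assuming MySQL syntax; ElasticJob creates the real tables and indexes automatically, so the generated DDL may differ in details such as defaults and index names.

```sql
-- Sketch of JOB_EXECUTION_LOG, derived from the documented columns above.
-- Types and nullability follow the "Required" column; MySQL syntax assumed.
CREATE TABLE JOB_EXECUTION_LOG (
    id               VARCHAR(40)   NOT NULL,
    job_name         VARCHAR(100)  NOT NULL,
    task_id          VARCHAR(1000) NOT NULL,
    hostname         VARCHAR(255)  NOT NULL,
    ip               VARCHAR(50)   NOT NULL,
    sharding_item    INT           NOT NULL,
    execution_source VARCHAR(20)   NOT NULL,
    failure_cause    VARCHAR(2000) NULL,
    is_success       BIT(1)        NOT NULL,
    start_time       TIMESTAMP     NOT NULL,
    complete_time    TIMESTAMP     NULL,
    PRIMARY KEY (id)
);

-- Example: trace one run's life cycle via its task_id
-- (the task_id value here is a placeholder, not a real identifier).
SELECT state, creation_time, message
FROM JOB_STATUS_TRACE_LOG
WHERE task_id = '<task id of the run to inspect>'
ORDER BY creation_time;
```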
diff --git a/docs/content/user-manual/elasticjob-lite/usage/job-api/job-interface.en.md b/docs/content/user-manual/elasticjob-lite/usage/job-api/job-interface.en.md
index 23be4b9..7c93c2f 100644
--- a/docs/content/user-manual/elasticjob-lite/usage/job-api/job-interface.en.md
+++ b/docs/content/user-manual/elasticjob-lite/usage/job-api/job-interface.en.md
@@ -15,7 +15,7 @@ Through methods such as `getShardingTotalCount()`, `getShardingItem()`, user can
ElasticJob provides two class-based job types, `Simple` and `Dataflow`, and a type-based job, `Script`. Users can extend job types by implementing the SPI interface.
-## Simple job
+## Simple Job
It means a simple implementation without any encapsulation. Users need to implement the `SimpleJob` interface.
This interface provides only a single method to override, and this method will be executed periodically.
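A minimal implementation might look like the following. This is a sketch assuming the ElasticJob 3.x API, where `SimpleJob` and `ShardingContext` live under the `org.apache.shardingsphere.elasticjob` packages; adjust the imports and class name (`MyDemoJob` is hypothetical) to your version and project.

```java
import org.apache.shardingsphere.elasticjob.api.ShardingContext;
import org.apache.shardingsphere.elasticjob.simple.job.SimpleJob;

public class MyDemoJob implements SimpleJob {

    @Override
    public void execute(ShardingContext shardingContext) {
        // Called periodically by the scheduler, once per sharding item
        // assigned to this instance.
        switch (shardingContext.getShardingItem()) {
            case 0:
                // process data belonging to sharding item 0
                break;
            case 1:
                // process data belonging to sharding item 1
                break;
            default:
                break;
        }
    }
}
```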