Posted to commits@pulsar.apache.org by si...@apache.org on 2020/12/09 17:11:46 UTC

[pulsar] branch master updated: [docs]Support generate full docs 2.7.0 (#8859)

This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new ac845aa  [docs]Support generate full docs 2.7.0 (#8859)
ac845aa is described below

commit ac845aa56dbcf42bf4057b78f33dbd5e80c9e494
Author: Guangning <gu...@apache.org>
AuthorDate: Thu Dec 10 01:11:16 2020 +0800

    [docs]Support generate full docs 2.7.0 (#8859)
    
    ### Motivation
    Generate the full set of documentation for version 2.7.0. The release manager runs the `yarn run version <release-version>` command to generate it (see the example below).
    
    
    ### Modifications
    
    * Add a script `docusaurus-version.js`
    * Update command `yarn run version`
    * Generate full docs for pulsar 2.7.0
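    
    For example (a hypothetical invocation, run from the `site2/website` directory):
    
    ```shell
    $ yarn run version 2.7.0
    ```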
---
 site2/website/docusaurus-version.js                | 195 +++++
 site2/website/package.json                         |   2 +-
 .../versioned_docs/version-2.7.0/adaptors-kafka.md | 265 ++++++
 .../versioned_docs/version-2.7.0/adaptors-spark.md |  77 ++
 .../versioned_docs/version-2.7.0/adaptors-storm.md |  91 ++
 .../version-2.7.0/admin-api-namespaces.md          |  28 +
 .../version-2.7.0/admin-api-schemas.md             |   7 +
 .../version-2.7.0/administration-dashboard.md      |  63 ++
 .../version-2.7.0/administration-geo.md            | 157 ++++
 .../version-2.7.0/administration-load-balance.md   | 182 ++++
 .../version-2.7.0/administration-stats.md          |  64 ++
 .../version-2.7.0/administration-upgrade.md        | 151 ++++
 .../version-2.7.0/client-libraries-cgo.md          | 545 ++++++++++++
 .../version-2.7.0/client-libraries-cpp.md          | 253 ++++++
 .../version-2.7.0/client-libraries-dotnet.md       | 430 ++++++++++
 .../version-2.7.0/client-libraries-go.md           | 680 +++++++++++++++
 .../version-2.7.0/client-libraries-websocket.md    |   1 +
 .../version-2.7.0/concepts-clients.md              |  88 ++
 .../version-2.7.0/concepts-overview.md             |  31 +
 .../version-2.7.0/concepts-proxy-sni-routing.md    | 121 +++
 .../version-2.7.0/concepts-replication.md          |   9 +
 .../version-2.7.0/concepts-tiered-storage.md       |  18 +
 .../version-2.7.0/concepts-topic-compaction.md     |  37 +
 .../version-2.7.0/cookbooks-bookkeepermetadata.md  |  21 +
 .../version-2.7.0/cookbooks-encryption.md          | 170 ++++
 .../version-2.7.0/cookbooks-message-queue.md       |  95 ++
 .../version-2.7.0/cookbooks-retention-expiry.md    |  21 +
 .../deploy-bare-metal-multi-cluster.md             | 426 +++++++++
 .../versioned_docs/version-2.7.0/deploy-dcos.md    | 183 ++++
 .../version-2.7.0/deploy-kubernetes.md             |  11 +
 .../versioned_docs/version-2.7.0/developing-cpp.md | 101 +++
 .../version-2.7.0/developing-load-manager.md       | 215 +++++
 .../version-2.7.0/developing-tools.md              | 106 +++
 .../versioned_docs/version-2.7.0/functions-cli.md  | 198 +++++
 .../version-2.7.0/functions-debug.md               | 461 ++++++++++
 .../version-2.7.0/functions-deploy.md              | 211 +++++
 .../version-2.7.0/functions-metrics.md             |   7 +
 .../version-2.7.0/functions-overview.md            | 192 +++++
 .../getting-started-concepts-and-architecture.md   |  16 +
 .../version-2.7.0/getting-started-docker.md        | 161 ++++
 .../version-2.7.0/getting-started-pulsar.md        |  67 ++
 .../version-2.7.0/getting-started-standalone.md    | 226 +++++
 .../versioned_docs/version-2.7.0/helm-install.md   |  41 +
 .../versioned_docs/version-2.7.0/helm-prepare.md   |  85 ++
 .../versioned_docs/version-2.7.0/helm-tools.md     |  43 +
 .../version-2.7.0/io-aerospike-sink.md             |  26 +
 .../version-2.7.0/io-canal-source.md               | 203 +++++
 .../version-2.7.0/io-cassandra-sink.md             |  54 ++
 .../version-2.7.0/io-cdc-debezium.md               | 475 ++++++++++
 .../website/versioned_docs/version-2.7.0/io-cdc.md |  26 +
 .../version-2.7.0/io-debezium-source.md            | 496 +++++++++++
 .../versioned_docs/version-2.7.0/io-debug.md       | 329 +++++++
 .../versioned_docs/version-2.7.0/io-develop.md     | 240 ++++++
 .../version-2.7.0/io-dynamodb-source.md            |  76 ++
 .../version-2.7.0/io-elasticsearch-sink.md         | 140 +++
 .../versioned_docs/version-2.7.0/io-file-source.md | 138 +++
 .../versioned_docs/version-2.7.0/io-flume-sink.md  |  52 ++
 .../version-2.7.0/io-flume-source.md               |  52 ++
 .../versioned_docs/version-2.7.0/io-hbase-sink.md  |  64 ++
 .../versioned_docs/version-2.7.0/io-hdfs3-sink.md  |  54 ++
 .../version-2.7.0/io-influxdb-sink.md              | 108 +++
 .../versioned_docs/version-2.7.0/io-jdbc-sink.md   | 140 +++
 .../versioned_docs/version-2.7.0/io-kafka-sink.md  |  69 ++
 .../version-2.7.0/io-kafka-source.md               | 171 ++++
 .../version-2.7.0/io-kinesis-sink.md               |  73 ++
 .../version-2.7.0/io-kinesis-source.md             |  77 ++
 .../versioned_docs/version-2.7.0/io-mongo-sink.md  |  52 ++
 .../version-2.7.0/io-netty-source.md               | 205 +++++
 .../versioned_docs/version-2.7.0/io-overview.md    | 136 +++
 .../version-2.7.0/io-rabbitmq-sink.md              |  81 ++
 .../versioned_docs/version-2.7.0/io-redis-sink.md  |  70 ++
 .../versioned_docs/version-2.7.0/io-solr-sink.md   |  61 ++
 .../version-2.7.0/io-twitter-source.md             |  28 +
 .../versioned_docs/version-2.7.0/io-twitter.md     |   7 +
 .../version-2.7.0/performance-pulsar-perf.md       | 182 ++++
 .../version-2.7.0/reference-configuration.md       |   2 +-
 .../version-2.7.0/reference-connector-admin.md     |   7 +
 .../version-2.7.0/reference-pulsar-admin.md        |  12 +-
 .../schema-evolution-compatibility.md              | 953 +++++++++++++++++++++
 .../versioned_docs/version-2.7.0/schema-manage.md  | 809 +++++++++++++++++
 .../version-2.7.0/schema-understand.md             | 591 +++++++++++++
 .../version-2.7.0/security-athenz.md               |  93 ++
 .../version-2.7.0/security-encryption.md           | 180 ++++
 .../versioned_docs/version-2.7.0/security-jwt.md   | 264 ++++++
 .../version-2.7.0/security-kerberos.md             | 391 +++++++++
 .../version-2.7.0/security-overview.md             |  35 +
 .../version-2.7.0/security-tls-authentication.md   | 186 ++++
 .../version-2.7.0/security-tls-keystore.md         | 287 +++++++
 .../version-2.7.0/security-tls-transport.md        | 257 ++++++
 .../version-2.7.0/security-token-admin.md          | 159 ++++
 .../version-2.7.0/sql-getting-started.md           | 144 ++++
 .../versioned_docs/version-2.7.0/sql-overview.md   |  18 +
 .../versioned_docs/version-2.7.0/sql-rest-api.md   | 186 ++++
 .../version-2.7.0/window-functions-context.md      | 529 ++++++++++++
 94 files changed, 15537 insertions(+), 3 deletions(-)

diff --git a/site2/website/docusaurus-version.js b/site2/website/docusaurus-version.js
new file mode 100644
index 0000000..7ba6a03
--- /dev/null
+++ b/site2/website/docusaurus-version.js
@@ -0,0 +1,195 @@
+#!/usr/bin/env node
+
+/**
+ * Copyright (c) 2017-present, Facebook, Inc.
+ *
+ * This source code is licensed under the MIT license found in the
+ * LICENSE file in the root directory of this source tree.
+ */
+
+require('@babel/register')({
+    babelrc: false,
+    only: [__dirname, `${process.cwd()}/core`],
+    plugins: [
+      require('./server/translate-plugin.js'),
+      require('@babel/plugin-proposal-class-properties').default,
+      require('@babel/plugin-proposal-object-rest-spread').default,
+    ],
+    presets: [
+      require('@babel/preset-react').default,
+      require('@babel/preset-env').default,
+    ],
+  });
+  
+  const program = require('commander');
+  const chalk = require('chalk');
+  const glob = require('glob');
+  const fs = require('fs-extra');
+  const mkdirp = require('mkdirp');
+  const path = require('path');
+  
+  const readMetadata = require('./server/readMetadata.js');
+  const utils = require('./server/utils.js');
+  const versionFallback = require('./server/versionFallback.js');
+  const metadataUtils = require('./server/metadataUtils.js');
+  const env = require('./server/env.js');
+  
+  const CWD = process.cwd();
+  let versions;
+  if (fs.existsSync(`${CWD}/versions.json`)) {
+    versions = require(`${CWD}/versions.json`);
+  } else {
+    versions = [];
+  }
+  
+  let version;
+  
+  program
+    .arguments('<version>')
+    .action(ver => {
+      version = ver;
+    })
+    .parse(process.argv);
+  
+  if (env.versioning.missingVersionsPage) {
+    env.versioning.printMissingVersionsPageError();
+    process.exit(1);
+  }
+  
+  // Check that a version was given before validating it; calling `includes`
+  // on an undefined value would throw.
+  if (typeof version === 'undefined') {
+    console.error(
+      `${chalk.yellow(
+        'No version number specified!',
+      )}\nPass the version you wish to create as an argument.\nEx: 1.0.0`,
+    );
+    process.exit(1);
+  }
+  
+  if (version.includes('/')) {
+    console.error(
+      `${chalk.red(
+        'Invalid version number specified! Do not include slash (/). Try something like: 1.0.0',
+      )}`,
+    );
+    process.exit(1);
+  }
+  
+  if (versions.includes(version)) {
+    console.error(
+      `${chalk.yellow(
+        'This version already exists!',
+      )}\nSpecify a new version to create that does not already exist.`,
+    );
+    process.exit(1);
+  }
+  
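+  // Serialize the metadata object into a YAML-style front matter header.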
+  function makeHeader(metadata) {
+    let header = '---\n';
+    Object.keys(metadata).forEach(key => {
+      header += `${key}: ${metadata[key]}\n`;
+    });
+    header += '---\n';
+    return header;
+  }
+  
+  function writeFileAndCreateFolder(file, content, encoding) {
+    mkdirp.sync(path.dirname(file));
+  
+    fs.writeFileSync(file, content, encoding);
+  }
+  
+  const versionFolder = `${CWD}/versioned_docs/version-${version}`;
+  
+  mkdirp.sync(versionFolder);
+  
+  // copy necessary files to new version, changing some of its metadata to reflect the versioning
+  const files = glob.sync(`${CWD}/../${readMetadata.getDocsPath()}/**`);
+  files.forEach(file => {
+    const ext = path.extname(file);
+    if (ext !== '.md' && ext !== '.markdown') {
+      return;
+    }
+  
+    const res = metadataUtils.extractMetadata(fs.readFileSync(file, 'utf8'));
+    const metadata = res.metadata;
+    // Don't version any docs without any metadata whatsoever.
+    if (Object.keys(metadata).length === 0) {
+      return;
+    }
+    const rawContent = res.rawContent;
+    if (!metadata.id) {
+      metadata.id = path.basename(file, path.extname(file));
+    }
+    if (metadata.id.includes('/')) {
+      throw new Error('Document id cannot include "/".');
+    }
+    if (!metadata.title) {
+      metadata.title = metadata.id;
+    }
+  
+    const docsDir = path.join(CWD, '../', readMetadata.getDocsPath());
+    const subDir = utils.getSubDir(file, docsDir);
+    const docId = subDir ? `${subDir}/${metadata.id}` : metadata.id;
+    // if (!versionFallback.diffLatestDoc(file, docId)) {
+    //   return;
+    // }
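+    // Prefix the doc id with the version so versioned docs do not collide with the latest docs.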
+    metadata.original_id = metadata.id;
+    metadata.id = `version-${version}-${metadata.id}`;
+    const targetFile = subDir
+      ? `${versionFolder}/${subDir}/${path.basename(file)}`
+      : `${versionFolder}/${path.basename(file)}`;
+  
+    writeFileAndCreateFolder(
+      targetFile,
+      makeHeader(metadata) + rawContent,
+      'utf8',
+    );
+  });
+  
+  // copy sidebar if necessary
+  if (versionFallback.diffLatestSidebar()) {
+    mkdirp(`${CWD}/versioned_sidebars`);
+    const sidebar = JSON.parse(fs.readFileSync(`${CWD}/sidebars.json`, 'utf8'));
+    const versioned = {};
+  
+    Object.keys(sidebar).forEach(sb => {
+      const versionSidebar = `version-${version}-${sb}`;
+      versioned[versionSidebar] = {};
+  
+      const categories = sidebar[sb];
+      Object.keys(categories).forEach(category => {
+        versioned[versionSidebar][category] = [];
+  
+        const categoryItems = categories[category];
+        categoryItems.forEach(categoryItem => {
+          let versionedCategoryItem = categoryItem;
+          if (typeof categoryItem === 'object') {
+            if (categoryItem.ids && categoryItem.ids.length > 0) {
+              versionedCategoryItem.ids = categoryItem.ids.map(
+                id => `version-${version}-${id}`,
+              );
+            }
+          } else if (typeof categoryItem === 'string') {
+            versionedCategoryItem = `version-${version}-${categoryItem}`;
+          }
+          versioned[versionSidebar][category].push(versionedCategoryItem);
+        });
+      });
+    });
+  
+    fs.writeFileSync(
+      `${CWD}/versioned_sidebars/version-${version}-sidebars.json`,
+      `${JSON.stringify(versioned, null, 2)}\n`,
+      'utf8',
+    );
+  }
+  
+  // update versions.json file
+  versions.unshift(version);
+  fs.writeFileSync(
+    `${CWD}/versions.json`,
+    `${JSON.stringify(versions, null, 2)}\n`,
+  );
+  
+  console.log(`${chalk.green(`Version ${version} created!\n`)}`);
+  
\ No newline at end of file
diff --git a/site2/website/package.json b/site2/website/package.json
index fbbd5f6..a8d08c3 100644
--- a/site2/website/package.json
+++ b/site2/website/package.json
@@ -5,7 +5,7 @@
     "build": "docusaurus-build",
     "publish-gh-pages": "docusaurus-publish",
     "write-translations": "docusaurus-write-translations",
-    "version": "docusaurus-version",
+    "version": "cp docusaurus-version.js ./node_modules/.bin/ && docusaurus-version",
     "rename-version": "docusaurus-rename-version",
     "test": "jest --detectOpenHandles",
     "crowdin-upload": "crowdin --config ../crowdin.yaml upload sources --auto-update -b master",
diff --git a/site2/website/versioned_docs/version-2.7.0/adaptors-kafka.md b/site2/website/versioned_docs/version-2.7.0/adaptors-kafka.md
new file mode 100644
index 0000000..b28a0f4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/adaptors-kafka.md
@@ -0,0 +1,265 @@
+---
+id: version-2.7.0-adaptors-kafka
+title: Pulsar adaptor for Apache Kafka
+sidebar_label: Kafka client wrapper
+original_id: adaptors-kafka
+---
+
+
+Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
+
+## Using the Pulsar Kafka compatibility wrapper
+
+In an existing application, change the regular Kafka client dependency and replace it with the Pulsar Kafka wrapper. Remove the following dependency in `pom.xml`:
+
+```xml
+<dependency>
+  <groupId>org.apache.kafka</groupId>
+  <artifactId>kafka-clients</artifactId>
+  <version>0.10.2.1</version>
+</dependency>
+```
+
+Then include this dependency for the Pulsar Kafka wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+With the new dependency, the existing code works without any changes. You only need to adjust the
+configuration to point the producers and consumers to a Pulsar service rather than to Kafka, and to
+use a particular Pulsar topic.
+
+## Using the Pulsar Kafka compatibility wrapper together with existing Kafka client
+
+When migrating from Kafka to Pulsar, the application might use the original Kafka client
+and the Pulsar Kafka wrapper together during the migration. In that case, consider using the
+unshaded Pulsar Kafka client wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka-original</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+When using this dependency, construct producers with `org.apache.kafka.clients.producer.PulsarKafkaProducer`
+instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers with `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
+
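+For example, with the unshaded dependency a producer might be built as follows. This is a minimal sketch based on the class names above, assuming the wrapper mirrors the Kafka `Properties`-based constructor:
+
+```java
+Properties props = new Properties();
+// Point to a Pulsar service instead of a Kafka broker
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+// The unshaded wrapper class, in place of KafkaProducer
+Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
+```
+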
+## Producer example
+
+```java
+// Topic needs to be a regular Pulsar topic
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+
+for (int i = 0; i < 10; i++) {
+    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
+    log.info("Message {} sent successfully", i);
+}
+
+producer.close();
+```
+
+## Consumer example
+
+```java
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("enable.auto.commit", "false");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+consumer.subscribe(Arrays.asList(topic));
+
+while (true) {
+    ConsumerRecords<Integer, String> records = consumer.poll(100);
+    records.forEach(record -> {
+        log.info("Received record: {}", record);
+    });
+
+    // Commit last offset
+    consumer.commitSync();
+}
+```
+
+## Complete Examples
+
+You can find the complete producer and consumer examples
+[here](https://github.com/apache/pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
+
+## Compatibility matrix
+
+Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
+
+#### Producer
+
+APIs:
+
+| Producer Method                                                               | Supported | Notes                                                                    |
+|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record)`                    | Yes       |                                                                          |
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes       |                                                                          |
+| `void flush()`                                                                | Yes       |                                                                          |
+| `List<PartitionInfo> partitionsFor(String topic)`                             | No        |                                                                          |
+| `Map<MetricName, ? extends Metric> metrics()`                                 | No        |                                                                          |
+| `void close()`                                                                | Yes       |                                                                          |
+| `void close(long timeout, TimeUnit unit)`                                     | Yes       |                                                                          |
+
+Properties:
+
+| Config property                         | Supported | Notes                                                                         |
+|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
+| `acks`                                  | Ignored   | Durability and quorum writes are configured at the namespace level            |
+| `auto.offset.reset`                     | Yes       | Defaults to `latest` if the user does not give a specific setting.            |
+| `batch.size`                            | Ignored   |                                                                               |
+| `bootstrap.servers`                     | Yes       |                                 |
+| `buffer.memory`                         | Ignored   |                                                                               |
+| `client.id`                             | Ignored   |                                                                               |
+| `compression.type`                      | Yes       | Allows `gzip` and `lz4`. No `snappy`.                                         |
+| `connections.max.idle.ms`               | Yes       | Supports up to 2,147,483,647,000 (Integer.MAX_VALUE * 1000) ms of idle time   |
+| `interceptor.classes`                   | Yes       |                                                                               |
+| `key.serializer`                        | Yes       |                                                                               |
+| `linger.ms`                             | Yes       | Controls the group commit time when batching messages                         |
+| `max.block.ms`                          | Ignored   |                                                                               |
+| `max.in.flight.requests.per.connection` | Ignored   | In Pulsar ordering is maintained even with multiple requests in flight        |
+| `max.request.size`                      | Ignored   |                                                                               |
+| `metric.reporters`                      | Ignored   |                                                                               |
+| `metrics.num.samples`                   | Ignored   |                                                                               |
+| `metrics.sample.window.ms`              | Ignored   |                                                                               |
+| `partitioner.class`                     | Yes       |                                                                               |
+| `receive.buffer.bytes`                  | Ignored   |                                                                               |
+| `reconnect.backoff.ms`                  | Ignored   |                                                                               |
+| `request.timeout.ms`                    | Ignored   |                                                                               |
+| `retries`                               | Ignored   | Pulsar client retries with exponential backoff until the send timeout expires. |
+| `send.buffer.bytes`                     | Ignored   |                                                                               |
+| `timeout.ms`                            | Yes       |                                                                               |
+| `value.serializer`                      | Yes       |                                                                               |
+
+
+#### Consumer
+
+The following table lists consumer APIs.
+
+| Consumer Method                                                                                         | Supported | Notes |
+|:--------------------------------------------------------------------------------------------------------|:----------|:------|
+| `Set<TopicPartition> assignment()`                                                                      | No        |       |
+| `Set<String> subscription()`                                                                            | Yes       |       |
+| `void subscribe(Collection<String> topics)`                                                             | Yes       |       |
+| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)`                         | No        |       |
+| `void assign(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)`                                   | No        |       |
+| `void unsubscribe()`                                                                                    | Yes       |       |
+| `ConsumerRecords<K, V> poll(long timeoutMillis)`                                                        | Yes       |       |
+| `void commitSync()`                                                                                     | Yes       |       |
+| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)`                                       | Yes       |       |
+| `void commitAsync()`                                                                                    | Yes       |       |
+| `void commitAsync(OffsetCommitCallback callback)`                                                       | Yes       |       |
+| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)`       | Yes       |       |
+| `void seek(TopicPartition partition, long offset)`                                                      | Yes       |       |
+| `void seekToBeginning(Collection<TopicPartition> partitions)`                                           | Yes       |       |
+| `void seekToEnd(Collection<TopicPartition> partitions)`                                                 | Yes       |       |
+| `long position(TopicPartition partition)`                                                               | Yes       |       |
+| `OffsetAndMetadata committed(TopicPartition partition)`                                                 | Yes       |       |
+| `Map<MetricName, ? extends Metric> metrics()`                                                           | No        |       |
+| `List<PartitionInfo> partitionsFor(String topic)`                                                       | No        |       |
+| `Map<String, List<PartitionInfo>> listTopics()`                                                         | No        |       |
+| `Set<TopicPartition> paused()`                                                                          | No        |       |
+| `void pause(Collection<TopicPartition> partitions)`                                                     | No        |       |
+| `void resume(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No        |       |
+| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)`                     | No        |       |
+| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)`                           | No        |       |
+| `void close()`                                                                                          | Yes       |       |
+| `void close(long timeout, TimeUnit unit)`                                                               | Yes       |       |
+| `void wakeup()`                                                                                         | No        |       |
+
+Properties:
+
+| Config property                 | Supported | Notes                                                 |
+|:--------------------------------|:----------|:------------------------------------------------------|
+| `group.id`                      | Yes       | Maps to a Pulsar subscription name                    |
+| `max.poll.records`              | Yes       |                                                       |
+| `max.poll.interval.ms`          | Ignored   | Messages are "pushed" from the broker                 |
+| `session.timeout.ms`            | Ignored   |                                                       |
+| `heartbeat.interval.ms`         | Ignored   |                                                       |
+| `bootstrap.servers`             | Yes       | Needs to point to a single Pulsar service URL         |
+| `enable.auto.commit`            | Yes       |                                                       |
+| `auto.commit.interval.ms`       | Ignored   | With auto-commit, acks are sent immediately to the broker |
+| `partition.assignment.strategy` | Ignored   |                                                       |
+| `auto.offset.reset`             | Yes       | Only `earliest` and `latest` are supported.           |
+| `fetch.min.bytes`               | Ignored   |                                                       |
+| `fetch.max.bytes`               | Ignored   |                                                       |
+| `fetch.max.wait.ms`             | Ignored   |                                                       |
+| `interceptor.classes`           | Yes       |                                                       |
+| `metadata.max.age.ms`           | Ignored   |                                                       |
+| `max.partition.fetch.bytes`     | Ignored   |                                                       |
+| `send.buffer.bytes`             | Ignored   |                                                       |
+| `receive.buffer.bytes`          | Ignored   |                                                       |
+| `client.id`                     | Ignored   |                                                       |
+
+
+## Customize Pulsar configurations
+
+You can configure the Pulsar authentication provider directly from the Kafka properties.
+
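+For example, you can set these keys on the same `Properties` object that carries the Kafka configuration. This is a sketch assuming TLS authentication; the file paths are placeholders, and the property names are listed in the tables below:
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar+ssl://localhost:6651");
+props.put("pulsar.use.tls", "true");
+props.put("pulsar.tls.trust.certs.file.path", "/path/to/ca.cert.pem");
+props.put("pulsar.authentication.class",
+    "org.apache.pulsar.client.impl.auth.AuthenticationTls");
+props.put("pulsar.authentication.params.string",
+    "tlsCertFile:/path/to/client.cert.pem,tlsKeyFile:/path/to/client.key-pk8.pem");
+```
+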
+### Pulsar client properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-)          |         | The fully qualified class name of the authentication provider, for example `org.apache.pulsar.client.impl.auth.AuthenticationTls`.|
+| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-)          |         | A map of parameters for the authentication plugin. |
+| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-)          |         | A string of parameters for the authentication plugin, for example `key1:val1,key2:val2`. |
+| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-)                       | `false` | Enable TLS transport encryption.                                                        |
+| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-)   |         | Path for the TLS trust certificate store.                                               |
+| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers.                                           |
+| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout. |
+| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval. |
+| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | The number of Netty IO threads to use. |
+| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | The maximum number of connections to each broker. |
+| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay. |
+| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | The maximum number of concurrent topic lookups. |
+| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | The threshold of errors to forcefully close a connection. |
+| [`pulsar.keepalive.interval.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientBuilder.html#keepAliveInterval-int-java.util.concurrent.TimeUnit-)| `30000` | Keep-alive interval for each client-broker connection.  |
+
+
+### Pulsar producer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify the producer name. |
+| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) |  | Specify baseline for sequence ID of this producer. |
+| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the maximum size of the queue of messages pending acknowledgment from the broker.  |
+| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the maximum number of pending messages across all the partitions.  |
+| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer. |
+| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages in a batch. |
+| [`pulsar.block.if.producer.queue.full`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBlockIfQueueFull-boolean-) | | Whether the producer blocks when its message queue is full. |
+
+
+### Pulsar consumer Properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Specify the consumer name. |
+| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | 1000 | Set the size of the consumer receiver queue. |
+| [`pulsar.consumer.acknowledgments.group.time.millis`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#acknowledgmentGroupTime-long-java.util.concurrent.TimeUnit-) | 100 | Set the maximum time that the consumer groups acknowledgments before sending them to the broker. |
+| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | 50000 | Set the maximum size of the total receiver queue across partitions. |
+| [`pulsar.consumer.subscription.topics.mode`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionTopicsMode-Mode-) | PersistentOnly | Set the subscription topic mode for consumers. |
diff --git a/site2/website/versioned_docs/version-2.7.0/adaptors-spark.md b/site2/website/versioned_docs/version-2.7.0/adaptors-spark.md
new file mode 100644
index 0000000..ade3ab4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/adaptors-spark.md
@@ -0,0 +1,77 @@
+---
+id: version-2.7.0-adaptors-spark
+title: Pulsar adaptor for Apache Spark
+sidebar_label: Apache Spark
+original_id: adaptors-spark
+---
+
+The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive data from Pulsar.
+
+An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming Pulsar receiver and can process it in a variety of ways.
+
+## Prerequisites
+
+To use the receiver, include a dependency for the `pulsar-spark` library in your build configuration.
+
+### Maven
+
+If you're using Maven, add this to your `pom.xml`:
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-spark</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using Gradle, add this to your `build.gradle` file:
+
+```groovy
+def pulsarVersion = "{{pulsar:version}}"
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
+}
+```
+
+## Usage
+
+Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
+
+```java
+String serviceUrl = "pulsar://localhost:6650/";
+String topic = "persistent://public/default/test_src";
+String subs = "test_sub";
+
+SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("Pulsar Spark Example");
+
+JavaStreamingContext jsc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
+
+ConsumerConfigurationData<byte[]> pulsarConf = new ConsumerConfigurationData<>();
+
+Set<String> set = new HashSet<>();
+set.add(topic);
+pulsarConf.setTopicNames(set);
+pulsarConf.setSubscriptionName(subs);
+
+SparkStreamingPulsarReceiver pulsarReceiver = new SparkStreamingPulsarReceiver(
+    serviceUrl,
+    pulsarConf,
+    new AuthenticationDisabled());
+
+JavaReceiverInputDStream<byte[]> lineDStream = jsc.receiverStream(pulsarReceiver);
+```
+
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/examples/spark/src/main/java/org/apache/spark/streaming/receiver/example/SparkStreamingPulsarReceiverExample.java).
+This example counts the number of received messages that contain the string "Pulsar".
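+
+A sketch of that counting step, assuming the `lineDStream` created in the usage section above:
+
+```java
+// Decode each payload and count, per batch, the messages containing "Pulsar"
+JavaDStream<String> lines = lineDStream.map(bytes -> new String(bytes));
+JavaDStream<Long> pulsarCount = lines.filter(line -> line.contains("Pulsar")).count();
+pulsarCount.print();
+```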
+
diff --git a/site2/website/versioned_docs/version-2.7.0/adaptors-storm.md b/site2/website/versioned_docs/version-2.7.0/adaptors-storm.md
new file mode 100644
index 0000000..3aecc57
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/adaptors-storm.md
@@ -0,0 +1,91 @@
+---
+id: version-2.7.0-adaptors-storm
+title: Pulsar adaptor for Apache Storm
+sidebar_label: Apache Storm
+original_id: adaptors-storm
+---
+
+Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
+
+An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
+
+## Using the Pulsar Storm Adaptor
+
+Include dependency for Pulsar Storm Adaptor:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-storm</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+## Pulsar Spout
+
+The Pulsar Spout allows data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the received message and the `MessageToValuesMapper` provided by the client.
+
+The tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or up to a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
+
+```java
+MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
+
+    @Override
+    public Values toValues(Message msg) {
+        return new Values(new String(msg.getData()));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+        declarer.declare(new Fields("string"));
+    }
+};
+
+// Configure a Pulsar Spout
+PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
+spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
+spoutConf.setSubscriptionName("my-subscriber-name1");
+spoutConf.setMessageToValuesMapper(messageToValuesMapper);
+
+// Create a Pulsar Spout
+PulsarSpout spout = new PulsarSpout(spoutConf);
+```
+
+## Pulsar Bolt
+
+The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
+
+A partitioned topic can also be used to publish messages on different partitions. In the implementation of the `TupleToMessageMapper`, you need to provide a "key" in the message; messages with the same key are routed to the same partition. Here's an example bolt:
+
+```java
+TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
+
+    @Override
+    public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
+        String receivedMessage = tuple.getString(0);
+        // message processing
+        String processedMsg = receivedMessage + "-processed";
+        return msgBuilder.value(processedMsg.getBytes());
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+    }
+};
+
+// Configure a Pulsar Bolt
+PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
+boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
+boltConf.setTupleToMessageMapper(tupleToMessageMapper);
+
+// Create a Pulsar Bolt
+PulsarBolt bolt = new PulsarBolt(boltConf);
+```
+
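+To route messages on a partitioned topic, the mapper can set that key on the message builder. A sketch, where the tuple field name "key" is hypothetical:
+
+```java
+@Override
+public TypedMessageBuilder<byte[]> toMessage(TypedMessageBuilder<byte[]> msgBuilder, Tuple tuple) {
+    return msgBuilder
+        .key(tuple.getStringByField("key"))   // same key -> same partition
+        .value(tuple.getString(0).getBytes());
+}
+```
+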
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/example/StormExample.java).
diff --git a/site2/website/versioned_docs/version-2.7.0/admin-api-namespaces.md b/site2/website/versioned_docs/version-2.7.0/admin-api-namespaces.md
index b442adf..c532e6b 100644
--- a/site2/website/versioned_docs/version-2.7.0/admin-api-namespaces.md
+++ b/site2/website/versioned_docs/version-2.7.0/admin-api-namespaces.md
@@ -437,6 +437,34 @@ admin.namespaces().getNamespaceMessageTTL(namespace)
 ```
 <!--END_DOCUSAURUS_CODE_TABS-->
 
+#### Remove message-ttl
+
+Remove a message TTL of the configured namespace.
+
+<!--DOCUSAURUS_CODE_TABS-->
+<!--pulsar-admin-->
+
+```
+$ pulsar-admin namespaces remove-message-ttl test-tenant/ns1
+```
+
+
+<!--REST API-->
+
+```
+{@inject: endpoint|DELETE|/admin/v2/namespaces/{tenant}/{namespace}/messageTTL|operation/removeNamespaceMessageTTL}
+```
+
+<!--Java-->
+
+```java
+admin.namespaces().removeNamespaceMessageTTL(namespace)
+```
+<!--END_DOCUSAURUS_CODE_TABS-->
+
 #### Split bundle
 
 Each namespace bundle can contain multiple topics and each bundle can be served by only one broker. 
diff --git a/site2/website/versioned_docs/version-2.7.0/admin-api-schemas.md b/site2/website/versioned_docs/version-2.7.0/admin-api-schemas.md
new file mode 100644
index 0000000..1eb150d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/admin-api-schemas.md
@@ -0,0 +1,7 @@
+---
+id: version-2.7.0-admin-api-schemas
+title: Managing Schemas
+sidebar_label: Schemas
+original_id: admin-api-schemas
+---
+
diff --git a/site2/website/versioned_docs/version-2.7.0/administration-dashboard.md b/site2/website/versioned_docs/version-2.7.0/administration-dashboard.md
new file mode 100644
index 0000000..30ff5aa
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/administration-dashboard.md
@@ -0,0 +1,63 @@
+---
+id: version-2.7.0-administration-dashboard
+title: Pulsar dashboard
+sidebar_label: Dashboard
+original_id: administration-dashboard
+---
+
+> Note   
+> Pulsar dashboard is deprecated. If you want to manage and monitor the stats of your topics, use [Pulsar Manager](administration-pulsar-manager.md). 
+
+Pulsar dashboard is a web application that enables users to monitor current stats for all [topics](reference-terminology.md#topic) in tabular form.
+
+The dashboard is a data collector that polls stats from all the brokers in a Pulsar instance (across multiple clusters) and stores all the information in a [PostgreSQL](https://www.postgresql.org/) database.
+
+You can use the [Django](https://www.djangoproject.com) web app to render the collected data.
+
+## Install
+
+The easiest way to use the dashboard is to run it inside a [Docker](https://www.docker.com/products/docker) container.
+
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  apachepulsar/pulsar-dashboard:{{pulsar:version}}
+```
+
+You can find the {@inject: github:`Dockerfile`:/dashboard/Dockerfile} in the `dashboard` directory and build an image from scratch as well:
+
+```shell
+$ docker build -t apachepulsar/pulsar-dashboard dashboard
+```
+
+If token authentication is enabled:
+> The provided token should have super-user access. 
+```shell
+$ SERVICE_URL=http://broker.example.com:8080/
+$ JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+$ docker run -p 80:80 \
+  -e SERVICE_URL=$SERVICE_URL \
+  -e JWT_TOKEN=$JWT_TOKEN \
+  apachepulsar/pulsar-dashboard
+```
+ 
+You need to specify only one service URL for a Pulsar cluster. Internally, the collector figures out all the existing clusters and the brokers from which it needs to pull the metrics. If you connect the dashboard to a Pulsar instance running in standalone mode, the URL is `http://<broker-ip>:8080` by default, where `<broker-ip>` is the IP address or hostname of the machine running Pulsar standalone. The IP address or hostname should be accessible from the Docker container running the dashboard.
+
+Once the Docker container runs, the web dashboard is accessible via `localhost` or whichever host Docker uses.
+
+> The `SERVICE_URL` that the dashboard uses needs to be reachable from inside the Docker container.
+
+If the Pulsar service runs in standalone mode on `localhost`, the `SERVICE_URL` has to
+be the IP address of the machine.
+
+Similarly, since Pulsar standalone advertises itself as `localhost` by default, you need to
+explicitly set the advertised address to the host IP. For example:
+
+```shell
+$ bin/pulsar standalone --advertised-address 1.2.3.4
+```
+
+### Known issues
+
+Currently, only Pulsar Token [authentication](security-overview.md#authentication-providers) is supported.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/administration-geo.md b/site2/website/versioned_docs/version-2.7.0/administration-geo.md
new file mode 100644
index 0000000..f4ec1d4
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/administration-geo.md
@@ -0,0 +1,157 @@
+---
+id: version-2.7.0-administration-geo
+title: Pulsar geo-replication
+sidebar_label: Geo-replication
+original_id: administration-geo
+---
+
+*Geo-replication* is the replication of persistently stored message data across multiple clusters of a Pulsar instance.
+
+## How geo-replication works
+
+The diagram below illustrates the process of geo-replication across Pulsar clusters:
+
+![Replication Diagram](assets/geo-replication.png)
+
+In this diagram, whenever **P1**, **P2**, and **P3** producers publish messages to the **T1** topic on **Cluster-A**, **Cluster-B**, and **Cluster-C** clusters respectively, those messages are instantly replicated across clusters. Once the messages are replicated, **C1** and **C2** consumers can consume those messages from their respective clusters.
+
+Without geo-replication, **C1** and **C2** consumers are not able to consume messages that **P3** producer publishes.
+
+## Geo-replication and Pulsar properties
+
+You must enable geo-replication on a per-tenant basis in Pulsar. You can enable geo-replication between clusters only when a tenant is created that allows access to both clusters.
+
+Although geo-replication must be enabled between two clusters, it is actually managed at the namespace level. You must complete the following tasks to enable geo-replication for a namespace:
+
+* [Enable geo-replication namespaces](#enable-geo-replication-namespaces)
+* Configure that namespace to replicate across two or more provisioned clusters
+
+Any message published on *any* topic in that namespace is replicated to all clusters in the specified set.
+
+## Local persistence and forwarding
+
+When messages are produced on a Pulsar topic, they are first persisted in the local cluster and then forwarded asynchronously to the remote clusters.
+
+In normal cases, when there are no connectivity issues, messages are replicated immediately, at the same time as they are dispatched to local consumers. Typically, the network [round-trip time](https://en.wikipedia.org/wiki/Round-trip_delay_time) (RTT) between the remote regions defines the end-to-end delivery latency.
+
+Applications can create producers and consumers in any of the clusters, even when the remote clusters are not reachable (like during a network partition).
+
+Producers and consumers can publish messages to and consume messages from any cluster in a Pulsar instance. However, subscriptions are not only local to the cluster where they are created; they can also be transferred between clusters once replicated subscriptions are enabled. With replicated subscriptions, you can keep subscription state in synchronization. Therefore, a topic can be asynchronously replicated across multiple geographical regions. In case of failover [...]
+
+In the aforementioned example, the **T1** topic is replicated among three clusters, **Cluster-A**, **Cluster-B**, and **Cluster-C**.
+
+All messages produced in any of the three clusters are delivered to all subscriptions in other clusters. In this case, **C1** and **C2** consumers receive all messages that **P1**, **P2**, and **P3** producers publish. Ordering is still guaranteed on a per-producer basis.
+
+## Configure replication
+
+As stated in [Geo-replication and Pulsar properties](#geo-replication-and-pulsar-properties) section, geo-replication in Pulsar is managed at the [tenant](reference-terminology.md#tenant) level.
+
+### Grant permissions to properties
+
+To replicate to a cluster, the tenant needs permission to use that cluster. You can grant this permission when you create the tenant or grant it later.
+
+Specify all the intended clusters when you create a tenant:
+
+```shell
+$ bin/pulsar-admin tenants create my-tenant \
+  --admin-roles my-admin-role \
+  --allowed-clusters us-west,us-east,us-cent
+```
+
+To update permissions of an existing tenant, use `update` instead of `create`.
+
+### Enable geo-replication namespaces
+
+You can create a namespace with the following command:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace
+```
+
+Initially, the namespace is not assigned to any cluster. You can assign the namespace to clusters using the `set-clusters` subcommand:
+
+```shell
+$ bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace \
+  --clusters us-west,us-east,us-cent
+```
+
+You can change the replication clusters for a namespace at any time, without disruption to ongoing traffic. Replication channels are immediately set up or stopped in all clusters as soon as the configuration changes.
+
+### Use topics with geo-replication
+
+Once you create a geo-replication namespace, any topics that producers or consumers create within that namespace are replicated across clusters. Typically, each application uses the `serviceUrl` of the local cluster.
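+
+For example, an application running in `us-west` would typically connect through its local broker URL (a sketch; the URL is a placeholder):
+
+```java
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://pulsar.us-west.example.com:6650")
+        .build();
+```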
+
+#### Selective replication
+
+By default, messages are replicated to all clusters configured for the namespace. You can restrict replication selectively by specifying a replication list for a message, and then that message is replicated only to the subset in the replication list.
+
+The following is an example for the [Java API](client-libraries-java.md). Note the use of the `setReplicationClusters` method when you construct the {@inject: javadoc:Message:/client/org/apache/pulsar/client/api/Message} object:
+
+```java
+List<String> restrictReplicationTo = Arrays.asList(
+        "us-west",
+        "us-east"
+);
+
+Producer producer = client.newProducer()
+        .topic("some-topic")
+        .create();
+
+producer.newMessage()
+        .value("my-payload".getBytes())
+        .setReplicationClusters(restrictReplicationTo)
+        .send();
+```
+
+#### Topic stats
+
+Topic-specific statistics for geo-replication topics are available via the [`pulsar-admin`](reference-pulsar-admin.md) tool and {@inject: rest:REST:/} API:
+
+```shell
+$ bin/pulsar-admin persistent stats persistent://my-tenant/my-namespace/my-topic
+```
+
+Each cluster reports its own local stats, including the incoming and outgoing replication rates and backlogs.
+
+#### Delete a geo-replication topic
+
+Given that geo-replication topics exist in multiple regions, directly deleting a geo-replication topic is not possible. Instead, you should rely on automatic topic garbage collection.
+
+In Pulsar, a topic is automatically deleted when the topic meets the following three conditions:
+- no producers or consumers are connected to it;
+- no subscriptions to it;
+- no more messages are kept for retention.
+
+For geo-replication topics, each region uses a fault-tolerant mechanism to decide when deleting the topic locally is safe.
+
+You can explicitly disable topic garbage collection by setting `brokerDeleteInactiveTopicsEnabled` to `false` in your [broker configuration](reference-configuration.md#broker).
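+
+In `broker.conf`, that looks like the following (a sketch of the single setting named above):
+
+```properties
+# Disable automatic deletion of inactive topics
+brokerDeleteInactiveTopicsEnabled=false
+```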
+
+To delete a geo-replication topic, close all producers and consumers on the topic, and delete all of its local subscriptions in every replication cluster. When Pulsar determines that no valid subscription for the topic remains across the system, it will garbage collect the topic.
+
+## Replicated subscriptions
+
+Pulsar supports replicated subscriptions, so you can keep subscription state in sync, within a sub-second timeframe, in the context of a topic that is being asynchronously replicated across multiple geographical regions.
+
+In case of failover, a consumer can restart consuming from the failure point in a different cluster. 
+
+### Enable replicated subscription
+
+Replicated subscription is disabled by default. You can enable replicated subscription when creating a consumer. 
+
+```java
+Consumer<String> consumer = client.newConsumer(Schema.STRING)
+            .topic("my-topic")
+            .subscriptionName("my-subscription")
+            .replicateSubscriptionState(true)
+            .subscribe();
+```
+
+### Advantages
+
+ * It is easy to implement the logic. 
+ * You can choose to enable or disable replicated subscription.
+ * When you enable it, the overhead is low, and it is easy to configure. 
+ * When you disable it, the overhead is zero.
+
+### Limitations
+
+When you enable replicated subscriptions, the brokers periodically take consistent distributed snapshots that establish an association between message IDs from different clusters. The default snapshot interval is 1 second, which means that a consumer failing over to a different cluster can potentially receive up to 1 second of duplicate messages. You can configure the snapshot frequency in the `broker.conf` file.
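+
+The following is a sketch of the relevant `broker.conf` entry, assuming the default name and value:
+
+```properties
+# Interval (in milliseconds) between snapshots for replicated subscriptions
+replicatedSubscriptionsSnapshotFrequencyMillis=1000
+```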
diff --git a/site2/website/versioned_docs/version-2.7.0/administration-load-balance.md b/site2/website/versioned_docs/version-2.7.0/administration-load-balance.md
new file mode 100644
index 0000000..c9b530b
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/administration-load-balance.md
@@ -0,0 +1,182 @@
+---
+id: version-2.7.0-administration-load-balance
+title: Pulsar load balance
+sidebar_label: Load balance
+original_id: administration-load-balance
+---
+
+## Load balance across Pulsar brokers
+
+Pulsar is a horizontally scalable messaging system, so a core requirement is that the traffic
+in a logical cluster is spread across all the available Pulsar brokers as evenly as possible.
+
+You can use multiple settings and tools to control the traffic distribution, which requires a bit of context to understand how traffic is managed in Pulsar. However, in most cases, the core requirement mentioned above is met out of the box and you do not need to worry about it.
+
+## Pulsar load manager architecture
+
+The following part introduces the basic architecture of the Pulsar load manager.
+
+### Assign topics to brokers dynamically
+
+Topics are dynamically assigned to brokers based on the load conditions of all brokers in the cluster.
+
+When a client starts using new topics that are not assigned to any broker, a process is triggered to choose the best-suited broker to acquire ownership of these topics according to the load conditions.
+
+In the case of partitioned topics, different partitions are assigned to different brokers. Here "topic" means either a non-partitioned topic or one partition of a topic.
+
+The assignment is "dynamic" because it can change quickly. For example, if the broker owning a topic crashes, the topic is immediately reassigned to another broker. Another scenario is that the broker owning a topic becomes overloaded. In this case, the topic is reassigned to a less-loaded broker.
+
+The stateless nature of brokers makes the dynamic assignment possible, so you can quickly expand or shrink the cluster based on usage.
+
+#### Assignment granularity
+
+The assignment of topics or partitions to brokers is not done at the topic or partition level, but at the *bundle* level (a higher level). The reason is to amortize the amount of information that needs to be tracked. Based on CPU, memory, traffic load, and other metrics, topics are dynamically assigned to a particular broker.
+
+Instead of individual topic or partition assignment, each broker takes ownership of a subset of the topics for a namespace. This subset is called a "*bundle*" and effectively acts as a sharding mechanism.
+
+The namespace is the "administrative" unit: many config knobs or operations are done at the namespace level.
+
+For assignment, a namespace is sharded into a list of "bundles", with each bundle comprising
+a portion of the overall hash range of the namespace.
+
+Topics are assigned to a particular bundle by taking the hash of the topic name and checking which
+bundle the hash falls into.
+
+Each bundle is independent of the others and thus is independently assigned to different brokers.
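+
+For example, you can list the bundle ranges of a namespace with `pulsar-admin`; the tenant and namespace names below are placeholders:
+
+```shell
+$ bin/pulsar-admin namespaces bundles my-tenant/my-namespace
+```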
+
+### Create namespaces and bundles
+
+When you create a new namespace, it is configured with the default number of bundles. You can set this default in `conf/broker.conf`:
+
+```properties
+# When a namespace is created without specifying the number of bundles, this
+# value will be used as the default
+defaultNumberOfNamespaceBundles=4
+```
+
+You can either change the system default, or override it when you create a new namespace:
+
+```shell
+$ bin/pulsar-admin namespaces create my-tenant/my-namespace --clusters us-west --bundles 16
+```
+
+With this command, you create a namespace with 16 initial bundles, so the topics for this namespace can immediately be spread across up to 16 brokers.
+
+In general, if you know the expected traffic and number of topics in advance, it is better to start with a reasonable number of bundles instead of waiting for the system to auto-correct the distribution.
+
+On the same note, it is beneficial to start with more bundles than the number of brokers, because of the hashing nature of the distribution of topics into bundles. For example, for a namespace with 1000 topics, using something like 64 bundles achieves a good distribution of traffic across 16 brokers.
+
+### Unload topics and bundles
+
+You can "unload" a topic in Pulsar with admin operation. Unloading means to close the topics,
+release ownership and reassign the topics to a new broker, based on current load.
+
+When unloading happens, the client experiences a small latency blip, typically in the order of tens of milliseconds, while the topic is reassigned.
+
+Unloading is the mechanism that the load manager uses to perform load shedding, but you can also trigger unloading manually, for example to correct the assignments and redistribute traffic even before any broker is overloaded.
+
+Unloading a single topic has no effect on the bundle assignment; it just closes and reopens that topic:
+
+```shell
+pulsar-admin topics unload persistent://tenant/namespace/topic
+```
+
+To unload all topics for a namespace and trigger reassignments:
+
+```shell
+pulsar-admin namespaces unload tenant/namespace
+```
+
+### Split namespace bundles 
+
+Since the load for the topics in a bundle might change over time, and predicting it upfront can be hard, brokers can split a bundle into two. The new, smaller bundles can then be reassigned to different brokers.
+
+The splitting happens based on tunable thresholds. Any existing bundle that exceeds any of these thresholds is a candidate to be split. By default, the newly split bundles are also immediately offloaded to other brokers, to facilitate traffic distribution.
+
+```properties
+# enable/disable namespace bundle auto split
+loadBalancerAutoBundleSplitEnabled=true
+
+# enable/disable automatic unloading of split bundles
+loadBalancerAutoUnloadSplitBundlesEnabled=true
+
+# maximum topics in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxTopics=1000
+
+# maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxSessions=1000
+
+# maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxMsgRate=30000
+
+# maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
+loadBalancerNamespaceBundleMaxBandwidthMbytes=100
+
+# maximum number of bundles in a namespace (for auto-split)
+loadBalancerNamespaceMaximumBundles=128
+```
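+
+You can also trigger a split manually with `pulsar-admin`; the bundle range, tenant, and namespace below are placeholders, and the `--unload` flag immediately offloads the new bundles:
+
+```shell
+$ bin/pulsar-admin namespaces split-bundle my-tenant/my-namespace --bundle 0x00000000_0xffffffff --unload
+```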
+
+### Shed load automatically
+
+The load manager of Pulsar supports automatic load shedding. This means that whenever the system recognizes that a particular broker is overloaded, it forces some traffic to be reassigned to less-loaded brokers.
+
+When a broker is identified as overloaded, it is forced to "unload" a subset of its bundles, the
+ones with higher traffic, that account for the overload percentage.
+
+For example, the default threshold is 85% and if a broker is over quota at 95% CPU usage, then the broker unloads the percent difference plus a 5% margin: `(95% - 85%) + 5% = 15%`.
+
+Given that the selection of bundles to offload is based on traffic (as a proxy measure for CPU, network,
+and memory), the broker unloads bundles accounting for at least 15% of the traffic.
+
+Automatic load shedding is enabled by default, and you can disable it with this setting:
+
+```properties
+# Enable/disable automatic bundle unloading for load-shedding
+loadBalancerSheddingEnabled=true
+```
+
+Additional settings that apply to shedding:
+
+```properties
+# Load shedding interval. The broker periodically checks whether some traffic should be offloaded
+# from over-loaded brokers to under-loaded brokers
+loadBalancerSheddingIntervalMinutes=1
+
+# Prevent the same topics from being shed and moved to other brokers more than once within this timeframe
+loadBalancerSheddingGracePeriodMinutes=30
+```
+
+#### Broker overload thresholds
+
+The determination of whether a broker is overloaded is based on thresholds of CPU, network, and memory usage. Whenever any of those metrics reaches the threshold, the system triggers shedding (if enabled).
+
+By default, overload threshold is set at 85%:
+
+```properties
+# Usage threshold to determine a broker as over-loaded
+loadBalancerBrokerOverloadedThresholdPercentage=85
+```
+
+Pulsar gathers the usage stats from the system metrics.
+
+In the case of network utilization, the network interface speed that Linux reports is sometimes
+not correct and needs to be manually overridden. This is the case on AWS EC2 instances with 1 Gbps
+NIC speed for which the OS reports 10 Gbps.
+
+Because of the incorrect max speed, the Pulsar load manager might think the broker has not reached the NIC capacity, while in fact the broker already uses all the bandwidth and the traffic is slowed down.
+
+You can use the following setting to correct the max NIC speed:
+
+```properties
+# Override the auto-detection of the network interfaces max speed.
+# This option is useful in some environments (eg: EC2 VMs) where the max speed
+# reported by Linux is not reflecting the real bandwidth available to the broker.
+# Since the network usage is employed by the load manager to decide when a broker
+# is overloaded, it is important to make sure the info is correct or override it
+# with the right value here. The configured value can be a double (eg: 0.8) and that
+# can be used to trigger load-shedding even before hitting on NIC limits.
+loadBalancerOverrideBrokerNicSpeedGbps=
+```
+
+When the value is empty, Pulsar uses the value that the OS reports.
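+
+For example, on an EC2 VM where the OS reports 10 Gbps but the real NIC speed is 1 Gbps, a sketch of the override is:
+
+```properties
+loadBalancerOverrideBrokerNicSpeedGbps=1.0
+```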
+
diff --git a/site2/website/versioned_docs/version-2.7.0/administration-stats.md b/site2/website/versioned_docs/version-2.7.0/administration-stats.md
new file mode 100644
index 0000000..927272f
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/administration-stats.md
@@ -0,0 +1,64 @@
+---
+id: version-2.7.0-administration-stats
+title: Pulsar stats
+sidebar_label: Pulsar statistics
+original_id: administration-stats
+---
+
+## Partitioned topics
+
+|Stat|Description|
+|---|---|
+|msgRateIn| The sum of publish rates of all local and replication publishers in messages per second.|
+|msgThroughputIn| Same as msgRateIn but in bytes per second instead of messages per second.|
+|msgRateOut| The sum of dispatch rates of all local and replication consumers in messages per second.|
+|msgThroughputOut| Same as msgRateOut but in bytes per second instead of messages per second.|
+|averageMsgSize| Average message size, in bytes, from this publisher within the last interval.|
+|storageSize| The sum of storage size of the ledgers for this topic.|
+|publishers| The list of all local publishers into the topic. Publishers can be anywhere from zero to thousands.|
+|producerId| Internal identifier for this producer on this topic.|
+|producerName|  Internal identifier for this producer, generated by the client library.|
+|address| IP address and source port for the connection of this producer.|
+|connectedSince| The timestamp when this producer was created or last reconnected.|
+|subscriptions| The list of all local subscriptions to the topic.|
+|my-subscription| The name of this subscription (client defined).|
+|msgBacklog| The count of messages in backlog for this subscription.|
+|type| The type of this subscription.|
+|msgRateExpired| The rate at which messages are discarded instead of dispatched from this subscription due to TTL.|
+|consumers| The list of connected consumers for this subscription.|
+|consumerName| Internal identifier for this consumer, generated by the client library.|
+|availablePermits| The number of messages this consumer has space for in the listen queue of client library. A value of 0 means the queue of client library is full and receive() is not being called. A nonzero value means this consumer is ready to be dispatched messages.|
+|replication| This section gives the stats for cross-colo replication of this topic.|
+|replicationBacklog| The outbound replication backlog in messages.|
+|connected| Whether the outbound replicator is connected.|
+|replicationDelayInSeconds| How long the oldest message has been waiting to be sent through the connection, if connected is true.|
+|inboundConnection| The IP and port of the remote cluster's publisher connection to this broker. |
+|inboundConnectedSince| The TCP connection being used to publish messages to the remote cluster. If no local publishers are connected, this connection is automatically closed after a minute.|
+
+
+## Topics
+
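+The stats in this table are part of the internal topic stats; for example, you can retrieve them with the following command (the topic name is a placeholder):
+
+```shell
+$ bin/pulsar-admin topics stats-internal persistent://my-tenant/my-namespace/my-topic
+```
+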
+|Stat|Description|
+|---|---|
+|entriesAddedCounter| Messages published since this broker loaded this topic.|
+|numberOfEntries| Total number of messages being tracked.|
+|totalSize| Total storage size in bytes of all messages.|
+|currentLedgerEntries| Count of messages written to the ledger currently open for writing.|
+|currentLedgerSize| Size in bytes of messages written to ledger currently open for writing.|
+|lastLedgerCreatedTimestamp| The time when the last ledger was created.|
+|lastLedgerCreationFailureTimestamp| The time when creation of the last ledger failed.|
+|waitingCursorsCount| How many cursors are caught up and waiting for a new message to be published.|
+|pendingAddEntriesCount| The number of messages with (asynchronous) write requests pending completion.|
+|lastConfirmedEntry| The ledgerid:entryid of the last message successfully written. If the entryid is -1, then the ledger has been opened or is currently being opened, but has no entries written yet.|
+|state| The state of the cursor ledger. Open means you have a cursor ledger for saving updates of the markDeletePosition.|
+|ledgers| The ordered list of all ledgers for this topic holding its messages.|
+|cursors| The list of all cursors on this topic. Every subscription you saw in the topic stats has one.|
+|markDeletePosition| The ack position: the last message the subscriber acknowledges receiving.|
+|readPosition| The latest position from which the subscriber reads messages.|
+|waitingReadOp| True when the subscription has read the latest message published to the topic and is waiting for new messages to be published.|
+|pendingReadOps| The counter of outstanding read requests to BookKeeper currently in progress.|
+|messagesConsumedCounter| The number of messages this cursor has acknowledged since this broker loaded this topic.|
+|cursorLedger| The ledger used to persistently store the current markDeletePosition.|
+|cursorLedgerLastEntry| The last entryid used to persistently store the current markDeletePosition.|
+|individuallyDeletedMessages| If acknowledgments are done out of order, this shows the ranges of messages acknowledged between the markDeletePosition and the readPosition.|
+|lastLedgerSwitchTimestamp| The last time the cursor ledger was rolled over.|
diff --git a/site2/website/versioned_docs/version-2.7.0/administration-upgrade.md b/site2/website/versioned_docs/version-2.7.0/administration-upgrade.md
new file mode 100644
index 0000000..4f2f21a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/administration-upgrade.md
@@ -0,0 +1,151 @@
+---
+id: version-2.7.0-administration-upgrade
+title: Upgrade Guide
+sidebar_label: Upgrade
+original_id: administration-upgrade
+---
+
+## Upgrade guidelines
+
+Apache Pulsar comprises multiple components: ZooKeeper, bookies, and brokers. These components are either stateful or stateless. You do not have to upgrade ZooKeeper nodes unless you have special requirements. While you upgrade, you need to pay attention to bookies (stateful) as well as brokers and proxies (stateless).
+
+The following are some guidelines on upgrading a Pulsar cluster. Read the guidelines before upgrading.
+
+- Back up all your configuration files before upgrading.
+- Read this guide entirely, make a plan, and then execute the plan. When you make your upgrade plan, take your specific requirements and environment into consideration.
+- Pay attention to the upgrade order of components. In general, you do not need to upgrade your ZooKeeper or configuration store cluster (the global ZooKeeper cluster). You need to upgrade bookies first, and then upgrade brokers, proxies, and your clients.
+- If `autorecovery` is enabled, disable `autorecovery` during the upgrade process, and re-enable it after completing the process.
+- Read the release notes carefully for each release. Release notes contain features and configuration changes that might impact your upgrade.
+- Upgrade a small subset of nodes of each type to canary test the new version before upgrading all nodes of that type in the cluster. After you upgrade the canary nodes, let them run for a while to ensure that they work correctly.
+- Upgrade one data center to verify the new version before upgrading all data centers if your cluster runs in multi-cluster replicated mode.
+
+> Note: Currently, Apache Pulsar is compatible between versions. 
+
+## Upgrade sequence
+
+To upgrade an Apache Pulsar cluster, follow the upgrade sequence.
+
+1. Upgrade ZooKeeper (optional)  
+- Canary test: test an upgraded version in one or a small set of ZooKeeper nodes.  
+- Rolling upgrade: roll out the upgraded version to all ZooKeeper servers incrementally, one at a time. Monitor your dashboard during the whole rolling upgrade process.
+2. Upgrade bookies  
+- Canary test: test an upgraded version in one or a small set of bookies.
+- Rolling upgrade:  
+    - a. Disable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -disable
+       ```  
+    - b. Roll out the upgraded version to all bookies in the cluster after you determine that a version is safe after canary.
+    - c. After you upgrade all bookies, re-enable `autorecovery` with the following command.
+       ```shell
+       bin/bookkeeper shell autorecovery -enable
+       ```
+3. Upgrade brokers
+- Canary test: test an upgraded version in one or a small set of brokers.
+- Rolling upgrade: roll out the upgraded version to all brokers in the cluster after you determine that a version is safe after canary.
+4. Upgrade proxies
+- Canary test: test an upgraded version in one or a small set of proxies.
+- Rolling upgrade: roll out the upgraded version to all proxies in the cluster after you determine that a version is safe after canary.
+
+## Upgrade ZooKeeper (optional)
+
+While you upgrade ZooKeeper servers, you can do a canary test first, and then upgrade all ZooKeeper servers in the cluster.
+
+### Canary test
+
+You can test an upgraded version in one of ZooKeeper servers before upgrading all ZooKeeper servers in your cluster.
+
+To upgrade a ZooKeeper server to a new version, complete the following steps:
+
+1. Stop a ZooKeeper server.
+2. Upgrade the binary and configuration files.
+3. Start the ZooKeeper server with the new binary files.
+4. Use `pulsar zookeeper-shell` to connect to the newly upgraded ZooKeeper server and run a few commands to verify if it works as expected.
+5. Run the ZooKeeper server for a few days and observe it to make sure the ZooKeeper cluster runs well.
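+
+For example, a quick, hedged verification with `pulsar zookeeper-shell` (the server address is a placeholder):
+
+```shell
+$ bin/pulsar zookeeper-shell -server localhost:2181
+ls /
+ls /admin/clusters
+```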
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic ZooKeeper node, revert the binary and configuration, and restart ZooKeeper with the reverted binary.
+
+### Upgrade all ZooKeeper servers
+
+After a canary test of one upgraded ZooKeeper server in your cluster, you can upgrade all ZooKeeper servers in your cluster.
+
+You can upgrade all ZooKeeper servers one by one by following the steps in the canary test.
+
+## Upgrade bookies
+
+While you upgrade bookies, you can do a canary test first, and then upgrade all bookies in the cluster.
+For more details, read the Apache BookKeeper [Upgrade guide](http://bookkeeper.apache.org/docs/latest/admin/upgrade).
+
+### Canary test
+
+You can test an upgraded version in one or a small set of bookies before upgrading all bookies in your cluster.
+
+To upgrade a bookie to a new version, complete the following steps:
+
+1. Stop a bookie.
+2. Upgrade the binary and configuration files.
+3. Start the bookie in `ReadOnly` mode to verify if the bookie of this new version runs well for read workload.
+   ```shell
+   bin/pulsar bookie --readOnly
+   ```
+4. When the bookie runs successfully in `ReadOnly` mode, stop the bookie and restart it in `Write/Read` mode.
+   ```shell
+   bin/pulsar bookie
+   ```
+5. Observe and make sure the cluster serves both write and read traffic.
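+
+For example, one way to verify a bookie is the BookKeeper sanity test, which writes and reads entries on a temporary test ledger:
+
+```shell
+bin/bookkeeper shell bookiesanity
+```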
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic bookie node. Other bookies in the cluster replace this problematic bookie node via autorecovery.
+
+### Upgrade all bookies
+
+After a canary test of some upgraded bookies in your cluster, you can upgrade all bookies in your cluster.
+
+Before upgrading, you have to decide between two scenarios: a downtime upgrade and a rolling upgrade.
+
+In a rolling upgrade scenario, upgrade one bookie at a time. In a downtime upgrade scenario, shut down the entire cluster, upgrade each bookie, and then start the cluster.
+
+In both scenarios, the procedure for each bookie is the same.
+
+1. Stop the bookie.
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the bookie.
+
+> **Advanced operations**   
+> When you upgrade a large BookKeeper cluster in a rolling upgrade scenario, upgrading one bookie at a time is slow. If you configure rack-aware or region-aware placement policy, you can upgrade bookies rack by rack or region by region, which speeds up the whole upgrade process.
+
+## Upgrade brokers and proxies
+
+The upgrade procedure for brokers and proxies is the same. Brokers and proxies are `stateless`, so upgrading the two services is easy.
+
+### Canary test
+
+You can test an upgraded version in one or a small set of nodes before upgrading all nodes in your cluster.
+
+To upgrade to a new version, complete the following steps:
+
+1. Stop a broker (or proxy).
+2. Upgrade the binary and configuration file.
+3. Start a broker (or proxy).
+
+#### Canary rollback
+
+If issues occur during the canary test, you can shut down the problematic broker (or proxy) node. Revert to the old version and restart the broker (or proxy).
+
+### Upgrade all brokers or proxies
+
+After a canary test of some upgraded brokers or proxies in your cluster, you can upgrade all brokers or proxies in your cluster.
+
+Before upgrading, you have to decide between two scenarios: a downtime upgrade and a rolling upgrade.
+
+In a rolling upgrade scenario, you can upgrade one broker or one proxy at a time if the size of the cluster is small. If your cluster is large, you can upgrade brokers or proxies in batches. When you upgrade a batch of brokers or proxies, make sure the remaining brokers and proxies in the cluster have enough capacity to handle the traffic during the upgrade.
+
+In a downtime upgrade scenario, shut down the entire cluster, upgrade each broker or proxy, and then start the cluster.
+
+In both scenarios, the procedure for each broker or proxy is the same.
+
+1. Stop the broker or proxy. 
+2. Upgrade the software (either new binary or new configuration files).
+3. Start the broker or proxy.
diff --git a/site2/website/versioned_docs/version-2.7.0/client-libraries-cgo.md b/site2/website/versioned_docs/version-2.7.0/client-libraries-cgo.md
new file mode 100644
index 0000000..da9e085
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/client-libraries-cgo.md
@@ -0,0 +1,545 @@
+---
+id: version-2.7.0-client-libraries-cgo
+title: Pulsar CGo client
+sidebar_label: CGo(deprecated)
+original_id: client-libraries-cgo
+---
+
+You can use the Pulsar Go client to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+All the methods in [producers](#producers), [consumers](#consumers), and [readers](#readers) of a Go client are thread-safe.
+
+Currently, the following Go clients are maintained in two repositories.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| CGo | [pulsar-client-go](https://github.com/apache/pulsar/tree/master/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | CGo client that depends on C++ client library |
+| Go | [pulsar-client-go](https://github.com/apache/pulsar-client-go) | [Apache Pulsar](https://github.com/apache/pulsar) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp.md) to install the binaries through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew packages](client-libraries-cpp.md#macos).
+
+### Install go package
+
+> #### Compatibility Warning
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
+
+```bash
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v{{pulsar:version}}
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAndGetMsgID(context.Context, ProducerMessage)`| Sends a message; this call blocks until the message is successfully acknowledged by the Pulsar broker. | `(MessageID, error)`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`SendAndGetMsgIDAsync(context.Context, ProducerMessage, func(MessageID, error))`| Send a message in asynchronous mode. The callback will report back the message being published and the eventual error in publishing | 
+`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned sequence id or the custom sequence id (set on the `ProducerMessage`) that was published and acknowledged by the broker. | `int64`
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+`Schema()` | | Schema
+
+Here's a more involved example usage of a producer:
+
+```go
+import (
+    "context"
+    "fmt"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("the %s successfully published", string(msg.Payload))
+        })
+    }
+}
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`Properties`| Attach a set of application-defined properties to the producer. These properties will be visible in the topic stats |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | Set the number of max pending messages across all the partitions. This setting will be used to lower the max pending messages for each partition `MaxPendingMessages(int)`, if the total exceeds the configured value.|
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `puls [...]
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4), [`ZLIB`](https://zlib.net/), [`ZSTD`](https://facebook.github.io/zstd/) and [`SNAPPY`](https://google.github.io/snappy/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns the partition index as an integer, i.e. a function signature of `func(Message, TopicMetadata) int`. |
+`Batching` | Control whether automatic batching of messages is enabled for the producer. | false
+`BatchingMaxPublishDelay` | Set the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses or the batch reaches `BatchingMaxMessages`. | 10ms
+`BatchingMaxMessages` | Set the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached or the batch interval has elapsed. | 1000
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`AckCumulativeID(MessageID)` |Ack the reception of all the messages in the stream up to (and including) the provided message. This method will block until the acknowledge has been sent to the broker. After that, the messages will not be re-delivered to this consumer. | error
+`Nack(Message)` | Acknowledge the failure to process a single message. | `error`
+`NackID(MessageID)` | Acknowledge the failure to process a single message. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | error
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        Type: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+        err = processMessage(msg)
+
+        if err == nil {
+            // Message processed successfully
+            consumer.Ack(msg)
+        } else {
+            // Failed to process messages
+            consumer.Nack(msg)
+        }
+    }
+}
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`Topics` | Specify a list of topics this consumer subscribes to. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`TopicsPattern` | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing |
+`SubscriptionName` | The subscription name for this consumer |
+`Properties` | Attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats|
+`Name` | The name of the consumer |
+`AckTimeout` | Set the timeout for unacked messages | 0
+`NackRedeliveryDelay` | The delay after which to redeliver the messages that failed to be processed. Default is 1min. (See `Consumer.Nack()`) | 1 minute
+`Type` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`SubscriptionInitPos` | The initial position at which the cursor is set when subscribing | Latest
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+`ReadCompacted` | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the consumer will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal. |
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageId: pulsar.LatestMessage,
+})
+```
+
+> #### Blocking operation
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+```
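+
+Based on the `HasNext()` method listed above, the following sketch reads only up to the current end of the topic; it reuses the `reader` and `ctx` from the previous example:
+
+```go
+// Read messages until the reader catches up with the last available message
+for {
+    hasNext, err := reader.HasNext()
+    if err != nil { log.Fatalf("Error checking for messages: %v", err) }
+
+    if !hasNext {
+        break // caught up with the current end of the topic
+    }
+
+    msg, err := reader.Next(ctx)
+    if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+    // Process the message payload
+    _ = msg.Payload()
+}
+```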
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+lastSavedId := loadLastMessageID() // hypothetical helper: read the last saved message ID from an external store as []byte
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages 
+`Name` | The name of the reader 
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+`ReadCompacted` | If enabled, the reader will read messages from the compacted topic rather than reading the full message backlog of the topic. This means that, if the topic has been compacted, the reader will only see the latest value for each key in the topic, up until the point in the topic message backlog that has been compacted. Beyond that point, the messages will be sent as normal.|
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` interface that you can use to construct the messages you produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Value` | Value and payload are mutually exclusive; use `Value interface{}` for schema-based messages.
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
+
+## Schema
+
+This example shows how to create a producer and consumer with schema.
+
+```go
+// The Go struct below is assumed to match the Avro schema definition; it is
+// not part of the client library.
+type testJson struct {
+	ID   int    `json:"ID"`
+	Name string `json:"Name"`
+}
+
+var exampleSchemaDef = "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\"," +
+	"\"fields\":[{\"name\":\"ID\",\"type\":\"int\"},{\"name\":\"Name\",\"type\":\"string\"}]}"
+jsonSchema := pulsar.NewJsonSchema(exampleSchemaDef, nil)
+// create producer
+producer, err := client.CreateProducerWithSchema(pulsar.ProducerOptions{
+	Topic: "jsonTopic",
+}, jsonSchema)
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+err = producer.Send(context.Background(), pulsar.ProducerMessage{
+	Value: &testJson{
+		ID:   100,
+		Name: "pulsar",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+// create consumer
+var s testJson
+consumerJS := pulsar.NewJsonSchema(exampleSchemaDef, nil)
+consumer, err := client.SubscribeWithSchema(pulsar.ConsumerOptions{
+	Topic:            "jsonTopic",
+	SubscriptionName: "sub-2",
+}, consumerJS)
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+err = msg.GetValue(&s)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(s.ID)   // output: 100
+fmt.Println(s.Name) // output: pulsar
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.7.0/client-libraries-cpp.md
new file mode 100644
index 0000000..8c31f80
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/client-libraries-cpp.md
@@ -0,0 +1,253 @@
+---
+id: version-2.7.0-client-libraries-cpp
+title: Pulsar C++ client
+sidebar_label: C++
+original_id: client-libraries-cpp
+---
+
+You can use the Pulsar C++ client to create Pulsar producers and consumers in C++.
+
+All the methods in producer, consumer, and reader of a C++ client are thread-safe.
+
+## Supported platforms
+
+Pulsar C++ client is supported on **Linux** and **MacOS** platforms.
+
+[Doxygen](http://www.doxygen.nl/)-generated API docs for the C++ client are available [here](/api/cpp).
+
+## Linux
+
+> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can download and install those packages directly.
+
+Four kinds of libraries, `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a`, are installed under `/usr/lib` after you download and install the RPM or DEB package.
+By default, they are built under the code path `${PULSAR_HOME}/pulsar-client-cpp`, using the command
+ `cmake . -DBUILD_TESTS=OFF -DLINK_STATIC=ON && make pulsarShared pulsarSharedNossl pulsarStatic pulsarStaticWithDeps -j 3`.
+These libraries rely on some other libraries. If you want detailed versions of the dependency libraries, see [these](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/rpm/Dockerfile) [files](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/pkg/deb/Dockerfile).
+
+1. `libpulsar.so` is the shared library. It contains statically linked `boost` and `openssl`, and dynamically links all other needed libraries.
+The command to compile against this Pulsar library is as follows:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.so -I/usr/local/ssl/include
+```
+
+2. `libpulsarnossl.so` is a shared library similar to `libpulsar.so`, except that the `openssl` and `crypto` libraries are dynamically linked.
+The command to compile against this Pulsar library is as follows:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarnossl.so -lssl -lcrypto -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+
+3. `libpulsar.a` is the static library. It requires its dependency libraries to be linked when you use it.
+The command to compile against this Pulsar library is as follows:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsar.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib -lboost_system -lboost_regex -lcurl -lprotobuf -lzstd -lz
+```
+
+4. `libpulsarwithdeps.a` is a static library based on `libpulsar.a`, with the dependency libraries `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz` archived inside it.
+The command to compile against this Pulsar library is as follows:
+```bash
+ g++ --std=c++11  PulsarTest.cpp -o test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread  -I/usr/local/ssl/include -L/usr/local/ssl/lib
+```
+`libpulsarwithdeps.a` does not include the OpenSSL-related libraries `libssl` and `libcrypto`. Because these two libraries are security-related, using the versions provided by the local system makes it easier to handle security issues and library upgrades.
+
+### Install RPM
+
+1. Download an RPM package from the links in the table.
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:dist_rpm:client}}) | [asc]({{pulsar:dist_rpm:client}}.asc), [sha512]({{pulsar:dist_rpm:client}}.sha512) |
+| [client-debuginfo]({{pulsar:dist_rpm:client-debuginfo}}) | [asc]({{pulsar:dist_rpm:client-debuginfo}}.asc),  [sha512]({{pulsar:dist_rpm:client-debuginfo}}.sha512) |
+| [client-devel]({{pulsar:dist_rpm:client-devel}}) | [asc]({{pulsar:dist_rpm:client-devel}}.asc),  [sha512]({{pulsar:dist_rpm:client-devel}}.sha512) |
+
+2. Install the package using the following command.
+
+```bash
+$ rpm -ivh apache-pulsar-client*.rpm
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Install Debian
+
+1. Download a Debian package from the links in the table. 
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:deb:client}}) | [asc]({{pulsar:dist_deb:client}}.asc), [sha512]({{pulsar:dist_deb:client}}.sha512) |
+| [client-devel]({{pulsar:deb:client-devel}}) | [asc]({{pulsar:dist_deb:client-devel}}.asc),  [sha512]({{pulsar:dist_deb:client-devel}}.sha512) |
+
+2. Install the package using the following command:
+
+```bash
+$ apt install ./apache-pulsar-client*.deb
+```
+
+After installation, the Pulsar libraries are placed under `/usr/lib`.
+
+### Build
+
+> If you want to build RPM and Debian packages from the latest master, follow the instructions below. All the instructions are run at the root directory of your cloned Pulsar repository.
+
+There are recipes that build RPM and Debian packages containing a
+statically linked `libpulsar.so` / `libpulsarnossl.so` / `libpulsar.a` / `libpulsarwithdeps.a` with all the required
+dependencies.
+
+To build the C++ library packages, build the Java packages first.
+
+```shell
+mvn install -DskipTests
+```
+
+#### RPM
+
+```shell
+pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
+```
+
+This command builds the RPMs inside a Docker container and leaves them in `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-devel | Static libraries `libpulsar.a` and `libpulsarwithdeps.a`, and C++ and C headers |
+| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
+
+#### Debian
+
+To build Debian packages, enter the following command.
+
+```shell
+pulsar-client-cpp/pkg/deb/docker-build-deb.sh
+```
+
+Debian packages are created at `pulsar-client-cpp/pkg/deb/BUILD/DEB/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` and `libpulsarnossl.so` |
+| pulsar-client-dev | Static libraries `libpulsar.a` and `libpulsarwithdeps.a`, and C++ and C headers |
+
+## MacOS
+
+Pulsar releases are available in the [Homebrew](https://brew.sh/) core repository. You can install the C++ client library with the following command. The package is installed with the library and headers.
+
+```shell
+brew install libpulsar
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+
+Pulsar protocol URLs are assigned to specific clusters and use the `pulsar` URI scheme. The default port is `6650`. The following is an example for localhost.
+
+```http
+pulsar://localhost:6650
+```
+
+In a Pulsar cluster in production, the URL looks as follows: 
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you use TLS authentication, the URL uses the `pulsar+ssl` scheme and the default port is `6651`. The following is an example.
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a consumer
+To connect to Pulsar as a consumer, you need to create a consumer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Consumer consumer;
+Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
+if (result != ResultOk) {
+    LOG_ERROR("Failed to subscribe: " << result);
+    return -1;
+}
+
+Message msg;
+
+while (true) {
+    consumer.receive(msg);
+    LOG_INFO("Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'");
+
+    consumer.acknowledge(msg);
+}
+
+client.close();
+```
+
+## Create a producer
+To connect to Pulsar as a producer, you need to create a producer on the C++ client. The following is an example. 
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Producer producer;
+Result result = client.createProducer("my-topic", producer);
+if (result != ResultOk) {
+    LOG_ERROR("Error creating producer: " << result);
+    return -1;
+}
+
+// Publish 10 messages to the topic
+for (int i = 0; i < 10; i++){
+    Message msg = MessageBuilder().setContent("my-message").build();
+    Result res = producer.send(msg);
+    LOG_INFO("Message sent: " << res);
+}
+client.close();
+```
+
+## Enable authentication in connection URLs
+If you use TLS authentication when connecting to Pulsar, you need to use the `pulsar+ssl` scheme in the connection URL, and the default port is `6651`. The following is an example.
+
+```cpp
+ClientConfiguration config = ClientConfiguration();
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
+config.setTlsAllowInsecureConnection(false);
+config.setAuth(pulsar::AuthTls::create(
+            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
+
+Client client("pulsar+ssl://my-broker.com:6651", config);
+```
+
+For complete examples, refer to [C++ client examples](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/examples).
+
+## Schema
+
+This section describes some examples about schema. For more information about schema, see [Pulsar schema](schema-get-started.md).
+
+### Create producer with Avro schema
+
+The following example shows how to create a producer with an Avro schema.
+
+```cpp
+static const std::string exampleSchema =
+    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+Producer producer;
+ProducerConfiguration producerConf;
+producerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+client.createProducer("topic-avro", producerConf, producer);
+```
+
+### Create consumer with Avro schema
+
+The following example shows how to create a consumer with an Avro schema.
+
+```cpp
+static const std::string exampleSchema =
+    "{\"type\":\"record\",\"name\":\"Example\",\"namespace\":\"test\","
+    "\"fields\":[{\"name\":\"a\",\"type\":\"int\"},{\"name\":\"b\",\"type\":\"int\"}]}";
+ConsumerConfiguration consumerConf;
+Consumer consumer;
+consumerConf.setSchema(SchemaInfo(AVRO, "Avro", exampleSchema));
+client.subscribe("topic-avro", "sub-2", consumerConf, consumer);
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/client-libraries-dotnet.md b/site2/website/versioned_docs/version-2.7.0/client-libraries-dotnet.md
new file mode 100644
index 0000000..1c6111c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/client-libraries-dotnet.md
@@ -0,0 +1,430 @@
+---
+id: version-2.7.0-client-libraries-dotnet
+title: Pulsar C# client
+sidebar_label: C#
+original_id: client-libraries-dotnet
+---
+
+You can use the Pulsar C# client to create Pulsar producers and consumers in C#. All the methods in the producer, consumer, and reader of a C# client are thread-safe.
+
+## Installation
+
+You can install the Pulsar C# client library either through the dotnet CLI or through Visual Studio. This section describes how to install the Pulsar C# client library through the dotnet CLI. For information about how to install the Pulsar C# client library through Visual Studio, see [here](https://docs.microsoft.com/en-us/visualstudio/mac/nuget-walkthrough?view=vsmac-2019).
+
+### Prerequisites
+
+Install the [.NET Core SDK](https://dotnet.microsoft.com/download/), which provides the dotnet command-line tool. Starting in Visual Studio 2017, the dotnet CLI is automatically installed with any .NET Core related workloads.
+
+### Procedures
+
+To install the Pulsar C# client library, follow these steps:
+
+1. Create a project.
+
+   1. Create a folder for the project.
+
+   2. Open a terminal window and switch to the new folder.
+
+   3. Create the project using the following command.
+
+        ```
+        dotnet new console
+        ```
+
+   4. Use `dotnet run` to test that the app has been created properly.
+
+2. Add the Newtonsoft.Json NuGet package.
+
+   1. Use the following command to install the `Newtonsoft.Json` package:
+
+        ```
+        dotnet add package Newtonsoft.Json
+        ```
+
+   2. After the command completes, open the `.csproj` file to see the added reference:
+
+        ```xml
+        <ItemGroup>
+          <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
+        </ItemGroup>
+        ```
+
+3. Use the Newtonsoft.Json API in the app.
+
+   1. Open the `Program.cs` file and add the following line at the top of the file:
+
+        ```c#
+        using Newtonsoft.Json;
+        ```
+
+   2. Add the following code before the `class Program` line:
+
+        ```c#
+        public class Account
+        {
+            public string Name { get; set; }
+            public string Email { get; set; }
+            public DateTime DOB { get; set; }
+        }
+        ```
+
+   3. Replace the `Main` function with the following:
+
+        ```c#
+        static void Main(string[] args)
+        {
+            Account account = new Account
+            {
+                Name = "John Doe",
+                Email = "john@nuget.org",
+                DOB = new DateTime(1980, 2, 20, 0, 0, 0, DateTimeKind.Utc),
+            };
+
+            string json = JsonConvert.SerializeObject(account, Formatting.Indented);
+            Console.WriteLine(json);
+        }
+        ```
+   4. Build and run the app by using the `dotnet run` command. The output should be the JSON representation of the `Account` object in the code:
+
+        ```output
+        {
+        "Name": "John Doe",
+        "Email": "john@nuget.org",
+        "DOB": "1980-02-20T00:00:00Z"
+        }
+        ```
+
+## Client
+
+This section describes some configuration examples for the Pulsar C# client.
+
+### Create client
+
+This example shows how to create a Pulsar C# client connected to localhost.
+
+```c#
+var client = PulsarClient.Builder().Build();
+```
+
+To create a Pulsar C# client by using the builder, you can specify the following options:
+
+| Option | Description | Default |
+| ---- | ---- | ---- |
+| ServiceUrl | Set the service URL for the Pulsar cluster. | pulsar://localhost:6650 |
+| RetryInterval | Set the time to wait before retrying an operation or a reconnection. | 3s |
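+
+The following is a minimal sketch of setting these options through the builder; the service URL and retry interval shown are example values.
+
+```c#
+var client = PulsarClient.Builder()
+                         .ServiceUrl(new Uri("pulsar://localhost:6650")) // Pulsar cluster to connect to
+                         .RetryInterval(TimeSpan.FromSeconds(3))         // wait 3 seconds between retries
+                         .Build();
+```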
+
+### Create producer
+
+This section describes how to create a producer.
+
+- Create a producer by using the builder.
+
+    ```c#
+    var producer = client.NewProducer()
+                        .Topic("persistent://public/default/mytopic")
+                        .Create();
+    ```
+
+- Create a producer without using the builder.
+
+    ```c#
+    var options = new ProducerOptions("persistent://public/default/mytopic");
+    var producer = client.CreateProducer(options);
+    ```
+
+### Create consumer
+
+This section describes how to create a consumer.
+
+- Create a consumer by using the builder.
+
+    ```c#
+    var consumer = client.NewConsumer()
+                        .SubscriptionName("MySubscription")
+                        .Topic("persistent://public/default/mytopic")
+                        .Create();
+    ```
+
+- Create a consumer without using the builder.
+
+    ```c#
+    var options = new ConsumerOptions("MySubscription", "persistent://public/default/mytopic");
+    var consumer = client.CreateConsumer(options);
+    ```
+
+### Create reader
+
+This section describes how to create a reader.
+
+- Create a reader by using the builder.
+
+    ```c#
+    var reader = client.NewReader()
+                    .StartMessageId(MessageId.Earliest)
+                    .Topic("persistent://public/default/mytopic")
+                    .Create();
+    ```
+
+- Create a reader without using the builder.
+
+    ```c#
+    var options = new ReaderOptions(MessageId.Earliest, "persistent://public/default/mytopic");
+    var reader = client.CreateReader(options);
+    ```
+
+### Configure encryption policies
+
+The Pulsar C# client supports four kinds of encryption policies:
+
+- `EnforceUnencrypted`: always use unencrypted connections.
+- `EnforceEncrypted`: always use encrypted connections.
+- `PreferUnencrypted`: use unencrypted connections, if possible.
+- `PreferEncrypted`: use encrypted connections, if possible.
+
+This example shows how to set the `EnforceEncrypted` encryption policy.
+
+```c#
+var client = PulsarClient.Builder()
+                         .ConnectionSecurity(EncryptionPolicy.EnforceEncrypted)
+                         .Build();
+```
+
+### Configure authentication
+
+Currently, the Pulsar C# client supports TLS (Transport Layer Security) and JWT (JSON Web Token) authentication.
+
+If you have followed [Authentication using TLS](security-tls-authentication.md), you get a certificate and a key. To use them from the Pulsar C# client, follow these steps:
+
+1. Create an unencrypted and password-less pfx file.
+
+    ```bash
+    openssl pkcs12 -export -keypbe NONE -certpbe NONE -out admin.pfx -inkey admin.key.pem -in admin.cert.pem -passout pass:
+    ```
+
+2. Use the `admin.pfx` file to create an `X509Certificate2` and pass it to the Pulsar C# client.
+
+    ```c#
+    var clientCertificate = new X509Certificate2("admin.pfx");
+    var client = PulsarClient.Builder()
+                            .AuthenticateUsingClientCertificate(clientCertificate)
+                            .Build();
+    ```
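+
+For JWT authentication, the following is a minimal sketch; it assumes your client version exposes the builder's `AuthenticateUsingToken` method, and the token string is a placeholder for a JWT issued by your Pulsar deployment.
+
+```c#
+// The token value below is a placeholder; supply a real JWT
+var client = PulsarClient.Builder()
+                         .AuthenticateUsingToken("eyJhbGciOiJIUzI1NiJ9...")
+                         .Build();
+```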
+
+## Producer
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar broker for processing. This section describes some configuration examples for the producer.
+
+### Send data
+
+This example shows how to send data.
+
+```c#
+var data = Encoding.UTF8.GetBytes("Hello World");
+await producer.Send(data);
+```
+
+### Send messages with customized metadata
+
+- Send messages with customized metadata by using the builder.
+
+    ```c#
+    var data = Encoding.UTF8.GetBytes("Hello World");
+    var messageId = await producer.NewMessage()
+                                .Property("SomeKey", "SomeValue")
+                                .Send(data);
+    ```
+
+- Send messages with customized metadata without using the builder.
+
+    ```c#
+    var data = Encoding.UTF8.GetBytes("Hello World");
+    var metadata = new MessageMetadata();
+    metadata["SomeKey"] = "SomeValue";
+    var messageId = await producer.Send(metadata, data);
+    ```
+
+## Consumer
+
+A consumer is a process that attaches to a topic through a subscription and then receives messages. This section describes some configuration examples for the consumer.
+
+### Receive messages
+
+This example shows how a consumer receives messages from a topic.
+
+```c#
+await foreach (var message in consumer.Messages())
+{
+    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+```
+
+### Acknowledge messages
+
+Messages can be acknowledged individually or cumulatively. For details about message acknowledgement, see [acknowledgement](concepts-messaging.md#acknowledgement).
+
+- Acknowledge messages individually.
+
+    ```c#
+    await foreach (var message in consumer.Messages())
+    {
+        Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+        await consumer.Acknowledge(message);
+    }
+    ```
+
+- Acknowledge messages cumulatively.
+
+    ```c#
+    await consumer.AcknowledgeCumulative(message);
+    ```
+
+### Unsubscribe from topics
+
+This example shows how a consumer unsubscribes from a topic.
+
+```c#
+await consumer.Unsubscribe();
+```
+
+#### Note
+
+> Once a consumer unsubscribes from a topic, it is disposed and can no longer be used.
+
+## Reader
+
+A reader is actually just a consumer without a cursor. This means that Pulsar does not keep track of your progress and there is no need to acknowledge messages.
+
+This example shows how a reader receives messages.
+
+```c#
+await foreach (var message in reader.Messages())
+{
+    Console.WriteLine("Received: " + Encoding.UTF8.GetString(message.Data.ToArray()));
+}
+```
+
+## Monitoring
+
+This section describes how to monitor the producer, consumer, and reader state.
+
+### Monitor producer
+
+The following table lists states available for the producer.
+
+| State | Description |
+| ---- | ----|
+| Closed | The producer or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+
+This example shows how to monitor the producer state.
+
+```c#
+private static async ValueTask Monitor(IProducer producer, CancellationToken cancellationToken)
+{
+    var state = ProducerState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await producer.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ProducerState.Connected => $"The producer is connected",
+            ProducerState.Disconnected => $"The producer is disconnected",
+            ProducerState.Closed => $"The producer has closed",
+            ProducerState.Faulted => $"The producer has faulted",
+            _ => $"The producer has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (producer.IsFinalState(state))
+            return;
+    }
+}
+```
+
+### Monitor consumer state
+
+The following table lists states available for the consumer.
+
+| State | Description |
+| ---- | ----|
+| Active | All is well. |
+| Inactive | All is well. The subscription type is `Failover` and you are not the active consumer. |
+| Closed | The consumer or the Pulsar client has been disposed. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+
+This example shows how to monitor the consumer state.
+
+```c#
+private static async ValueTask Monitor(IConsumer consumer, CancellationToken cancellationToken)
+{
+    var state = ConsumerState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await consumer.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ConsumerState.Active => "The consumer is active",
+            ConsumerState.Inactive => "The consumer is inactive",
+            ConsumerState.Disconnected => "The consumer is disconnected",
+            ConsumerState.Closed => "The consumer has closed",
+            ConsumerState.ReachedEndOfTopic => "The consumer has reached end of topic",
+            ConsumerState.Faulted => "The consumer has faulted",
+            _ => $"The consumer has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (consumer.IsFinalState(state))
+            return;
+    }
+}
+```
+
+### Monitor reader state
+
+The following table lists states available for the reader.
+
+| State | Description |
+| ---- | ----|
+| Closed | The reader or the Pulsar client has been disposed. |
+| Connected | All is well. |
+| Disconnected | The connection is lost and attempts are being made to reconnect. |
+| Faulted | An unrecoverable error has occurred. |
+| ReachedEndOfTopic | No more messages are delivered. |
+
+This example shows how to monitor the reader state.
+
+```c#
+private static async ValueTask Monitor(IReader reader, CancellationToken cancellationToken)
+{
+    var state = ReaderState.Disconnected;
+
+    while (!cancellationToken.IsCancellationRequested)
+    {
+        state = await reader.StateChangedFrom(state, cancellationToken);
+
+        var stateMessage = state switch
+        {
+            ReaderState.Connected => "The reader is connected",
+            ReaderState.Disconnected => "The reader is disconnected",
+            ReaderState.Closed => "The reader has closed",
+            ReaderState.ReachedEndOfTopic => "The reader has reached end of topic",
+            ReaderState.Faulted => "The reader has faulted",
+            _ => $"The reader has an unknown state '{state}'"
+        };
+
+        Console.WriteLine(stateMessage);
+
+        if (reader.IsFinalState(state))
+            return;
+    }
+}
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/client-libraries-go.md b/site2/website/versioned_docs/version-2.7.0/client-libraries-go.md
new file mode 100644
index 0000000..285dae0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/client-libraries-go.md
@@ -0,0 +1,680 @@
+---
+id: version-2.7.0-client-libraries-go
+title: Pulsar Go client
+sidebar_label: Go
+original_id: client-libraries-go
+---
+
+> Tip: The CGo client is being deprecated. For more information about the CGo client, refer to the [CGo client docs](client-libraries-cgo.md).
+
+You can use Pulsar [Go client](https://github.com/apache/pulsar-client-go) to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar-client-go/pulsar).
+
+
+## Installation
+
+### Install go package
+
+You can install the `pulsar` library locally using `go get`.  
+
+```bash
+$ go get -u "github.com/apache/pulsar-client-go/pulsar"
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Create a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+	"log"
+	"time"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{
+		URL:               "pulsar://localhost:6650",
+		OperationTimeout:  30 * time.Second,
+		ConnectionTimeout: 30 * time.Second,
+	})
+	if err != nil {
+		log.Fatalf("Could not instantiate Pulsar client: %v", err)
+	}
+
+	defer client.Close()
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| URL | Configures the service URL for the Pulsar service. This parameter is required. | |
+| ConnectionTimeout | Timeout for the establishment of a TCP connection. | 30s |
+| OperationTimeout| Sets the operation timeout. Producer-create, subscribe and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed. | 30s|
+| Authentication | Configures the authentication provider. Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | no authentication |
+| TLSTrustCertsFilePath | Sets the path to the trusted TLS certificate file. | |
+| TLSAllowInsecureConnection | Configures whether the Pulsar client accepts untrusted TLS certificates from the broker. | false |
+| TLSValidateHostname | Configures whether the Pulsar client verifies the validity of the host name from the broker. | false |
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic: "my-topic",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{
+	Payload: []byte("hello"),
+})
+if err != nil {
+	fmt.Println("Failed to publish message", err)
+}
+fmt.Println("Published message")
+```
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, *ProducerMessage)` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | (MessageID, error)
+`SendAsync(context.Context, *ProducerMessage, func(MessageID, *ProducerMessage, error))`| Sends a message asynchronously. This call does not block; the provided callback is invoked once the message is acknowledged by the Pulsar broker or an error occurs. | 
+`LastSequenceID()` | Get the last sequence id that was published by this producer. This represents either the automatically assigned or custom sequence id (set on the ProducerMessage) that was published and acknowledged by the broker. | int64
+`Flush()`| Flush all the messages buffered in the client and wait until all messages have been successfully persisted. | error
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | 
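+
+As a sketch of how `SendAsync()` and `Flush()` work together, the following publishes a few messages asynchronously and then flushes the client-side buffer. It assumes a `producer` created as in the example above.
+
+```go
+for i := 0; i < 10; i++ {
+	producer.SendAsync(context.Background(), &pulsar.ProducerMessage{
+		Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+	}, func(id pulsar.MessageID, msg *pulsar.ProducerMessage, err error) {
+		// The callback is invoked once the broker acknowledges the message or an error occurs
+		if err != nil {
+			log.Println("Failed to publish message:", err)
+			return
+		}
+		log.Println("Published message with ID:", id)
+	})
+}
+
+// Block until all buffered messages have been persisted
+if err := producer.Flush(); err != nil {
+	log.Fatal(err)
+}
+```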
+
+### Producer Example
+
+#### How to use message router in producer
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+// Only subscribe on the specific partition
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:            "my-partitioned-topic-partition-2",
+	SubscriptionName: "my-sub",
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic: "my-partitioned-topic",
+	MessageRouter: func(msg *pulsar.ProducerMessage, tm pulsar.TopicMetadata) int {
+		fmt.Println("Routing message ", msg, " -- Partitions: ", tm.NumPartitions())
+		return 2
+	},
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+```
+
+#### How to use delayed message delivery in producer
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topicName := "topic-delayed-delivery"
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic: topicName,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:            topicName,
+	SubscriptionName: "subName",
+	Type:             pulsar.Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+ID, err := producer.Send(context.Background(), &pulsar.ProducerMessage{
+	Payload:      []byte("test"),
+	DeliverAfter: 3 * time.Second,
+})
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(ID)
+
+ctx, canc := context.WithTimeout(context.Background(), 1*time.Second)
+msg, err := consumer.Receive(ctx)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+canc()
+
+ctx, canc = context.WithTimeout(context.Background(), 5*time.Second)
+msg, err = consumer.Receive(ctx)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+canc()
+```
+
+
+### Producer configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this producer will publish on. This argument is required when constructing the producer. | |
+| Name | Name specifies a name for the producer. If not assigned, the system generates a globally unique name which can be accessed with `Producer.Name()`. | | 
+| Properties | Properties attach a set of application-defined properties to the producer. These properties will be visible in the topic stats. | |
+| MaxPendingMessages| MaxPendingMessages sets the maximum size of the queue holding the messages pending an acknowledgment from the broker. | |
+| HashingScheme | HashingScheme changes the `HashingScheme` used to choose the partition on which to publish a particular message. | JavaStringHash |
+| CompressionType | CompressionType sets the compression type for the producer. | not compressed | 
+| MessageRouter | MessageRouter sets a custom message routing policy by passing an implementation of MessageRouter. | |
+| DisableBatching | DisableBatching controls whether automatic batching of messages is enabled for the producer. | false |
+| BatchingMaxPublishDelay | BatchingMaxPublishDelay sets the time period within which the messages sent will be batched. | 10ms |
+| BatchingMaxMessages | BatchingMaxMessages sets the maximum number of messages permitted in a batch. | 1000 | 
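+
+The following is a minimal sketch that combines several of these options; the topic name, producer name, and property values are placeholders.
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic:                   "my-topic",
+	Name:                    "my-producer",
+	Properties:              map[string]string{"app": "example"},
+	CompressionType:         pulsar.LZ4,
+	BatchingMaxPublishDelay: 20 * time.Millisecond,
+	BatchingMaxMessages:     500,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+```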
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:            "topic-1",
+	SubscriptionName: "my-sub",
+	Type:             pulsar.Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+for i := 0; i < 10; i++ {
+	msg, err := consumer.Receive(context.Background())
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+		msg.ID(), string(msg.Payload()))
+
+	consumer.Ack(msg)
+}
+
+if err := consumer.Unsubscribe(); err != nil {
+	log.Fatal(err)
+}
+```
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Returns an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | 
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | 
+`Nack(Message)` | Acknowledge the failure to process a single message. | 
+`NackID(MessageID)` | Acknowledge the failure to process a single message. | 
+`Seek(msgID MessageID)` | Reset the subscription associated with this consumer to a specific message id. The message id can either be a specific message or represent the first or last messages in the topic. | `error`
+`SeekByTime(time time.Time)` | Reset the subscription associated with this consumer to a specific message publish time. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | 
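+
+As a sketch of how `Ack()` and `Nack()` work together, the following receives a message and either acknowledges it or negatively acknowledges it to trigger redelivery. It assumes a `consumer` created as in the example above; `processMessage` stands in for your own processing logic.
+
+```go
+msg, err := consumer.Receive(context.Background())
+if err != nil {
+	log.Fatal(err)
+}
+
+if err := processMessage(msg); err != nil {
+	// Negatively acknowledge: the message will be redelivered after NackRedeliveryDelay
+	consumer.Nack(msg)
+} else {
+	consumer.Ack(msg)
+}
+```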
+
+### Receive example
+
+#### How to use regex consumer
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topicInRegex := "persistent://public/default/foo-topic-1"
+p, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic:           topicInRegex,
+	DisableBatching: true,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer p.Close()
+
+topicsPattern := "persistent://public/default/foo.*"
+opts := pulsar.ConsumerOptions{
+	TopicsPattern:    topicsPattern,
+	SubscriptionName: "regex-sub",
+}
+consumer, err := client.Subscribe(opts)
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```
+
+#### How to use multi-topic consumer
+
+```go
+topic1 := "topic-1"
+topic2 := "topic-2"
+
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topics := []string{topic1, topic2}
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topics:           topics,
+	SubscriptionName: "multi-topic-sub",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```
+
+#### How to use consumer listener
+
+```go
+import (
+	"fmt"
+	"log"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer client.Close()
+
+	channel := make(chan pulsar.ConsumerMessage, 100)
+
+	options := pulsar.ConsumerOptions{
+		Topic:            "topic-1",
+		SubscriptionName: "my-subscription",
+		Type:             pulsar.Shared,
+	}
+
+	options.MessageChannel = channel
+
+	consumer, err := client.Subscribe(options)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer consumer.Close()
+
+	// Receive messages from channel. The channel returns a struct which contains message and the consumer from where
+	// the message was received. It's not necessary here since we have 1 single consumer, but the channel could be
+	// shared across multiple consumers as well
+	for cm := range channel {
+		msg := cm.Message
+		fmt.Printf("Received message  msgId: %v -- content: '%s'\n",
+			msg.ID(), string(msg.Payload()))
+
+		consumer.Ack(msg)
+	}
+}
+```
+
+#### How to use consumer receive timeout
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topic := "test-topic-with-no-messages"
+ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
+defer cancel()
+
+// create consumer
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:            topic,
+	SubscriptionName: "my-sub1",
+	Type:             pulsar.Shared,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+
+// Receive returns an error once the context times out
+msg, err := consumer.Receive(ctx)
+if err != nil {
+	log.Fatal(err)
+}
+fmt.Println(msg.Payload())
+```
+
+
+### Consumer configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this consumer will subscribe to. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
+| Topics | Specify a list of topics this consumer will subscribe on. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
+| TopicsPattern | Specify a regular expression to subscribe to multiple topics under the same namespace. Either a topic, a list of topics, or a topics pattern is required when subscribing. | |
+| AutoDiscoveryPeriod | Specify the interval in which to poll for new partitions or new topics if using a TopicsPattern. | |
+| SubscriptionName | Specify the subscription name for this consumer. This argument is required when subscribing. | |
+| Name | Set the consumer name. | | 
+| Properties | Properties attach a set of application-defined properties to the consumer. These properties will be visible in the topic stats. | |
+| Type | Select the subscription type to be used when subscribing to the topic. | Exclusive |
+| SubscriptionInitialPosition | The initial position at which the cursor will be set when subscribing. | Latest |
+| DLQ | Configuration for the Dead Letter Queue consumer policy. | no DLQ | 
+| MessageChannel | Sets a `MessageChannel` for the consumer. When a message is received, it will be pushed to the channel for consumption. | | 
+| ReceiverQueueSize | Sets the size of the consumer receive queue. | 1000| 
+| NackRedeliveryDelay | The delay after which to redeliver the messages that failed to be processed. | 1min |
+| ReadCompacted | If enabled, the consumer will read messages from the compacted topic rather than reading the full message backlog of the topic. | false |
+| ReplicateSubscriptionState | Mark the subscription as replicated to keep it in sync across clusters. | false |
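+
+The following is a minimal sketch that combines several of these options; the topic, subscription name, and dead letter topic are placeholders, and it assumes the Go client's `DLQPolicy` struct with `MaxDeliveries` and `DeadLetterTopic` fields.
+
+```go
+consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+	Topic:                       "my-topic",
+	SubscriptionName:            "my-sub",
+	Type:                        pulsar.Shared,
+	SubscriptionInitialPosition: pulsar.SubscriptionPositionEarliest,
+	NackRedeliveryDelay:         30 * time.Second,
+	ReceiverQueueSize:           500,
+	DLQ: &pulsar.DLQPolicy{
+		// Route a message to the dead letter topic after 3 failed deliveries
+		MaxDeliveries:   3,
+		DeadLetterTopic: "my-topic-dlq",
+	},
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer consumer.Close()
+```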
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:          "topic-1",
+	StartMessageID: pulsar.EarliestMessageID(),
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+```
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`HasNext()` | Check if there is any message available to read from the current position| (bool, error)
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+### Reader example
+
+#### How to use reader to read 'next' message
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"github.com/apache/pulsar-client-go/pulsar"
+)
+
+func main() {
+	client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	defer client.Close()
+
+	reader, err := client.CreateReader(pulsar.ReaderOptions{
+		Topic:          "topic-1",
+		StartMessageID: pulsar.EarliestMessageID(),
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer reader.Close()
+
+	for reader.HasNext() {
+		msg, err := reader.Next(context.Background())
+		if err != nil {
+			log.Fatal(err)
+		}
+
+		fmt.Printf("Received message msgId: %#v -- content: '%s'\n",
+			msg.ID(), string(msg.Payload()))
+	}
+}
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessageID()`). The reader can also begin reading from the latest message (`pulsar.LatestMessageID()`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+// Read the last saved message ID from an external store as a byte slice
+var lastSavedId []byte
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: pulsar.DeserializeMessageID(lastSavedId),
+})
+```
+
+#### How to use reader to read specific message
+
+```go
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+	URL: "pulsar://localhost:6650",
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer client.Close()
+
+topic := "topic-1"
+ctx := context.Background()
+
+// create producer
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+	Topic:           topic,
+	DisableBatching: true,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer producer.Close()
+
+// send 10 messages
+msgIDs := [10]pulsar.MessageID{}
+for i := 0; i < 10; i++ {
+	msgID, err := producer.Send(ctx, &pulsar.ProducerMessage{
+		Payload: []byte(fmt.Sprintf("hello-%d", i)),
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	msgIDs[i] = msgID
+}
+
+// create reader on 5th message (not included)
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:          topic,
+	StartMessageID: msgIDs[4],
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+
+// receive the remaining 5 messages
+for i := 5; i < 10; i++ {
+	msg, err := reader.Next(context.Background())
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Printf("Read message: '%s'\n", string(msg.Payload()))
+}
+
+// create reader on 5th message (included)
+readerInclusive, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:                   topic,
+	StartMessageID:          msgIDs[4],
+	StartMessageIDInclusive: true,
+})
+
+if err != nil {
+	log.Fatal(err)
+}
+defer readerInclusive.Close()
+```
+
+### Reader configuration
+
+ Name | Description | Default
+| :-------- | :---------- |:---------- |
+| Topic | Topic specifies the topic this reader will read from. This argument is required when constructing the reader. | |
+| Name | Name sets the reader name. | | 
+| Properties | Attach a set of application-defined properties to the reader. These properties will be visible in the topic stats. | |
+| StartMessageID | StartMessageID sets the initial reader position, specified by a message ID. | |
+| StartMessageIDInclusive | If true, the reader starts at the `StartMessageID`, included. Default is `false`, and the reader starts from the "next" message. | false |
+| MessageChannel | MessageChannel sets a `MessageChannel` for the reader. When a message is received, it will be pushed to the channel for consumption. | |
+| ReceiverQueueSize | ReceiverQueueSize sets the size of the reader receive queue. | 1000 |
+| SubscriptionRolePrefix| SubscriptionRolePrefix sets the subscription role prefix. | "reader" | 
+| ReadCompacted | If enabled, the reader reads messages from the compacted topic rather than the full message backlog of the topic. ReadCompacted can only be enabled when reading from a persistent topic. | false|
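+
+The following is a minimal sketch of a reader that reads from a compacted topic; the topic name is a placeholder.
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+	Topic:             "my-compacted-topic",
+	StartMessageID:    pulsar.EarliestMessageID(),
+	ReadCompacted:     true, // only valid for persistent topics
+	ReceiverQueueSize: 500,
+})
+if err != nil {
+	log.Fatal(err)
+}
+defer reader.Close()
+```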
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` struct that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if _, err := producer.Send(context.Background(), &msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+`SequenceID` | Set the sequence id to assign to the current message
+`DeliverAfter` | Request to deliver the message only after the specified relative delay
+`DeliverAt` | Deliver the message only at or after the specified absolute timestamp
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: pulsar.NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
+
+## OAuth2 authentication
+
+To use [OAuth2 authentication](security-oauth2.md), you'll need to configure your client with an OAuth2 authentication provider.
+The following example shows how to configure OAuth2 authentication.
+
+```go
+oauth := pulsar.NewAuthenticationOAuth2(map[string]string{
+		"type":       "client_credentials",
+		"issuerUrl":  "https://dev-kt-aa9ne.us.auth0.com",
+		"audience":   "https://dev-kt-aa9ne.us.auth0.com/api/v2/",
+		"privateKey": "/path/to/privateKey",
+		"clientId":   "0Xx...Yyxeny",
+	})
+client, err := pulsar.NewClient(pulsar.ClientOptions{
+		URL:              "pulsar://my-cluster:6650",
+		Authentication:   oauth,
+})
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.7.0/client-libraries-websocket.md
index f581416..168a167 100644
--- a/site2/website/versioned_docs/version-2.7.0/client-libraries-websocket.md
+++ b/site2/website/versioned_docs/version-2.7.0/client-libraries-websocket.md
@@ -7,6 +7,7 @@ original_id: client-libraries-websocket
 
 Pulsar [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API provides a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSocket, you can publish and consume messages and use features available on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
 
+
 > You can use Pulsar WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).
 
 ## Running the WebSocket service
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-clients.md b/site2/website/versioned_docs/version-2.7.0/concepts-clients.md
new file mode 100644
index 0000000..ef5801d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-clients.md
@@ -0,0 +1,88 @@
+---
+id: version-2.7.0-concepts-clients
+title: Pulsar Clients
+sidebar_label: Clients
+original_id: concepts-clients
+---
+
+Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md),  [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
+
+Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
+
+> #### Custom client libraries
+> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol.md)
+
+
+## Client setup phase
+
+When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
+
+1. The client attempts to determine the owner of the topic by sending an HTTP lookup request to the broker. The request can reach any of the active brokers which, by looking at the (cached) ZooKeeper metadata, knows who is serving the topic or, if nobody is serving it, tries to assign the topic to the least loaded broker.
+1. Once the client library has the broker address, it creates a TCP connection (or reuses an existing connection from the pool) and authenticates it. Within this connection, the client and broker exchange binary commands from a custom protocol. At this point, the client sends a command to create a producer/consumer to the broker, which complies after validating the authorization policy.
+
+Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
+
+## Reader interface
+
+In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed.  Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription will begin reading with the first message created afterwards.  Whenever a consumer  [...]
+
+The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
+
+* The **earliest** available message in the topic
+* The **latest** available message in the topic
+* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
+
+The reader interface is helpful for use cases like using Pulsar to provide effectively-once processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
+
+Internally, the reader interface is implemented as a consumer using an exclusive, non-durable subscription to the topic with a randomly-allocated name.
+
+[ **IMPORTANT** ]
+
+Unlike subscription/consumer, readers are non-durable in nature and will not prevent data in a topic from being deleted, thus it is ***strongly*** advised that [data retention](cookbooks-retention-expiry.md) be configured.   If data retention for a topic is not configured for an adequate amount of time, messages that the reader has not yet read might be deleted .  This will cause readers to essentially skip messages.  Configuring the data retention for a topic guarantees the reader with  [...]
+
+Please also note that a reader can have a "backlog", but the metric is only there to allow users to know how far behind the reader is; it is not considered for any backlog quota calculations. 
+
+![The Pulsar consumer and reader interfaces](assets/pulsar-reader-consumer-interfaces.png)
+
+> ### Non-partitioned topics only
+> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
+
+Here's a Java example that begins reading from the earliest available message on a topic:
+
+```java
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Reader;
+
+// Create a reader on a topic and for a specific message (and onward)
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic("reader-api-test")
+    .startMessageId(MessageId.earliest)
+    .create();
+
+while (true) {
+    Message message = reader.readNext();
+
+    // Process the message
+}
+```
+
+To create a reader that will read from the latest available message:
+
+```java
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(MessageId.latest)
+    .create();
+```
+
+To create a reader that will read from some message between earliest and latest:
+
+```java
+// Load a previously saved message ID as a byte array (hypothetical helper)
+byte[] msgIdBytes = loadLastMessageId();
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(id)
+    .create();
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-overview.md b/site2/website/versioned_docs/version-2.7.0/concepts-overview.md
new file mode 100644
index 0000000..6cb5356
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-overview.md
@@ -0,0 +1,31 @@
+---
+id: version-2.7.0-concepts-overview
+title: Pulsar Overview
+sidebar_label: Overview
+original_id: concepts-overview
+---
+
+Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
+
+Key features of Pulsar are listed below:
+
+* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo.md) of messages across clusters.
+* Very low publish and end-to-end latency.
+* Seamless scalability to over a million topics.
+* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp.md).
+* Multiple [subscription modes](concepts-messaging.md#subscription-modes) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
+* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
+* A serverless, lightweight computing framework, [Pulsar Functions](functions-overview.md), offers the capability for stream-native data processing.
+* A serverless connector framework [Pulsar IO](io-overview.md), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
+* [Tiered Storage](concepts-tiered-storage.md) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) as the data ages out.
+
+## Contents
+
+- [Messaging Concepts](concepts-messaging.md)
+- [Architecture Overview](concepts-architecture-overview.md)
+- [Pulsar Clients](concepts-clients.md)
+- [Geo Replication](concepts-replication.md)
+- [Multi Tenancy](concepts-multi-tenancy.md)
+- [Authentication and Authorization](concepts-authentication.md)
+- [Topic Compaction](concepts-topic-compaction.md)
+- [Tiered Storage](concepts-tiered-storage.md)
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-proxy-sni-routing.md b/site2/website/versioned_docs/version-2.7.0/concepts-proxy-sni-routing.md
new file mode 100644
index 0000000..1877960
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-proxy-sni-routing.md
@@ -0,0 +1,121 @@
+---
+id: version-2.7.0-concepts-proxy-sni-routing
+title: Proxy support with SNI routing
+sidebar_label: Proxy support with SNI routing
+original_id: concepts-proxy-sni-routing
+---
+
+## Pulsar Proxy with SNI routing
+A proxy server is an intermediary server that forwards requests from multiple clients to different servers across the Internet. The proxy server acts as a "traffic cop" in both forward and reverse proxy scenarios, and provides benefits to your system such as load balancing, performance, security, auto-scaling, and so on.
+
+The proxy in Pulsar acts as a reverse proxy, and creates a gateway in front of brokers. Proxies such as Apache Traffic Server (ATS), HAProxy, Nginx, and Envoy are not directly supported by Pulsar; however, these proxy servers support **SNI routing**, so they can be used to route Pulsar traffic. SNI routing is used to route traffic to a destination without terminating the SSL connection. Layer 4 routing provides greater transparency because the outbound connection is determined by examining the destination address in the client TCP packets.
+
+Pulsar clients support [SNI routing protocol](https://github.com/apache/pulsar/wiki/PIP-60:-Support-Proxy-server-with-SNI-routing), so you can connect to brokers through the proxy. This document walks you through how to set up the ATS proxy, enable SNI routing, and connect Pulsar client to the broker through the ATS proxy.
+
+### ATS-SNI Routing in Pulsar
+To support [layer-4 SNI routing](https://docs.trafficserver.apache.org/en/latest/admin-guide/layer-4-routing.en.html) with ATS, the inbound connection must be a TLS connection. Pulsar client supports SNI routing protocol on TLS connection, so when Pulsar clients connect to broker through ATS proxy, Pulsar uses ATS as a reverse proxy.
+
+Pulsar supports SNI routing for geo-replication, so brokers can connect to brokers in other clusters through the ATS proxy.
+
+This section explains how to set up and use ATS as a reverse proxy, so Pulsar clients can connect to brokers through the ATS proxy using the SNI routing protocol on TLS connection. 
+
+#### Set up ATS Proxy for layer-4 SNI routing
+To support layer-4 SNI routing, you need to configure the `records.config` and `ssl_server_name.conf` files.
+
+![Pulsar client SNI](assets/pulsar-sni-client.png)
+
+The [records.config](https://docs.trafficserver.apache.org/en/latest/admin-guide/files/records.config.en.html) file is located in the `/usr/local/etc/trafficserver/` directory by default. The file lists configurable variables used by ATS.
+
+To configure the `records.config` file, complete the following steps.
+1. Update the TLS port (`http.server_ports`) on which the proxy listens, and update the proxy certs (`ssl.client.cert.path` and `ssl.client.cert.filename`) to secure TLS tunneling.
+2. Configure the server ports (`http.connect_ports`) used for tunneling to the brokers. If Pulsar brokers are listening on ports `4443` and `6651`, add those broker service ports to the `http.connect_ports` configuration.
+
+The following is an example.
+
+```
+# PROXY TLS PORT
+CONFIG proxy.config.http.server_ports STRING 4443:ssl 4080
+# PROXY CERTS FILE PATH
+CONFIG proxy.config.ssl.client.cert.path STRING /proxy-cert.pem
+# PROXY KEY FILE PATH
+CONFIG proxy.config.ssl.client.cert.filename STRING /proxy-key.pem
+
+
+# The range of origin server ports that can be used for tunneling via CONNECT.
+# Traffic Server allows tunnels only to the specified ports.
+# Supports both wildcards (*) and ranges (e.g. 0-1023).
+CONFIG proxy.config.http.connect_ports STRING 4443 6651
+```
+
+The [ssl_server_name](https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/ssl_server_name.yaml.en.html) file is used to configure TLS connection handling for inbound and outbound connections. The configuration is determined by the SNI values provided by the inbound connection. The file consists of a set of configuration items, each identified by an SNI value (`fqdn`). When an inbound TLS connection is made, the SNI value from the TLS negotiation is matched against the configured items, and if a match is found, the connection is tunneled to the configured destination.
+
+The following example shows the mapping from the inbound SNI hostname sent by the client to the broker service URL to which the request is redirected. For example, if the client sends the SNI header `pulsar-broker1`, the proxy creates a TLS tunnel by redirecting the request to the `pulsar-broker1:6651` service URL.
+
+```
+server_config = {
+  {
+     fqdn = 'pulsar-broker-vip',
+     # Forward to Pulsar broker which is listening on 6651
+     tunnel_route = 'pulsar-broker-vip:6651'
+  },
+  {
+     fqdn = 'pulsar-broker1',
+     # Forward to Pulsar broker-1 which is listening on 6651
+     tunnel_route = 'pulsar-broker1:6651'
+  },
+  {
+     fqdn = 'pulsar-broker2',
+     # Forward to Pulsar broker-2 which is listening on 6651
+     tunnel_route = 'pulsar-broker2:6651'
+  },
+}
+```
+
+After you configure the `ssl_server_name.conf` and `records.config` files, the ATS-proxy server handles SNI routing and creates a TCP tunnel between the client and the broker.
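+
+After editing these files, you can typically apply the changes without restarting the server. For example, with a standard ATS installation (the command path depends on your installation):
+
+```shell
+$ traffic_ctl config reload
+```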
+
+#### Configure Pulsar-client with SNI routing
+ATS SNI-routing works only with TLS. You need to enable TLS for the ATS proxy and brokers first, configure the SNI routing protocol, and then connect Pulsar clients to brokers through the ATS proxy. Pulsar clients support SNI routing by connecting to the proxy and sending the target broker URL in the SNI header. This is handled internally; you only need to configure the following proxy settings when you create a Pulsar client to use the SNI routing protocol.
+
+```java
+String brokerServiceUrl = "pulsar+ssl://pulsar-broker-vip:6651/";
+String proxyUrl = "pulsar+ssl://ats-proxy:443";
+ClientBuilder clientBuilder = PulsarClient.builder()
+        .serviceUrl(brokerServiceUrl)
+        .tlsTrustCertsFilePath(TLS_TRUST_CERT_FILE_PATH)
+        .enableTls(true)
+        .allowTlsInsecureConnection(false)
+        .proxyServiceUrl(proxyUrl, ProxyProtocol.SNI)
+        .operationTimeout(1000, TimeUnit.MILLISECONDS);
+
+Map<String, String> authParams = new HashMap<>();
+authParams.put("tlsCertFile", TLS_CLIENT_CERT_FILE_PATH);
+authParams.put("tlsKeyFile", TLS_CLIENT_KEY_FILE_PATH);
+clientBuilder.authentication(AuthenticationTls.class.getName(), authParams);
+
+PulsarClient pulsarClient = clientBuilder.build();
+```
+
+#### Pulsar geo-replication with SNI routing
+You can use the ATS proxy for geo-replication, so that brokers connect to brokers in other clusters by using SNI routing. To enable SNI routing for broker connections across clusters, configure the SNI proxy URL in the cluster metadata. Once the SNI proxy URL is part of the cluster metadata, brokers connect to brokers in other clusters through the proxy over SNI routing.
+
+![Pulsar client SNI](assets/pulsar-sni-geo.png)
+
+In this example, a Pulsar cluster is deployed into two separate regions, `us-west` and `us-east`. Both regions are configured with an ATS proxy, and brokers in each region run behind the ATS proxy. We configure the cluster metadata for both clusters, so that brokers in one cluster can use SNI routing and connect to brokers in the other cluster through the ATS proxy.
+
+(a) Configure the cluster metadata for `us-east` with `us-east` broker service URL and `us-east` ATS proxy URL with SNI proxy-protocol.
+
+```
+./pulsar-admin clusters update \
+--broker-url-secure pulsar+ssl://east-broker-vip:6651 \
+--url http://east-broker-vip:8080 \
+--proxy-protocol SNI \
+--proxy-url pulsar+ssl://east-ats-proxy:443
+```
+
+(b) Configure the cluster metadata for `us-west` with `us-west` broker service URL and `us-west` ATS proxy URL with SNI proxy-protocol.
+
+```
+./pulsar-admin clusters update \
+--broker-url-secure pulsar+ssl://west-broker-vip:6651 \
+--url http://west-broker-vip:8080 \
+--proxy-protocol SNI \
+--proxy-url pulsar+ssl://west-ats-proxy:443
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-replication.md b/site2/website/versioned_docs/version-2.7.0/concepts-replication.md
new file mode 100644
index 0000000..25f2347
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-replication.md
@@ -0,0 +1,9 @@
+---
+id: version-2.7.0-concepts-replication
+title: Geo Replication
+sidebar_label: Geo Replication
+original_id: concepts-replication
+---
+
+Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo.md) in Pulsar enables you to do that.
+
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-tiered-storage.md b/site2/website/versioned_docs/version-2.7.0/concepts-tiered-storage.md
new file mode 100644
index 0000000..ba21560
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-tiered-storage.md
@@ -0,0 +1,18 @@
+---
+id: version-2.7.0-concepts-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: concepts-tiered-storage
+---
+
+Pulsar's segment-oriented architecture allows for topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
+
+One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
+
+![Tiered Storage](assets/pulsar-tiered-storage.png)
+
+> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
+
+Pulsar currently supports S3, Google Cloud Storage (GCS), and filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain in BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
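+
+For example, assuming a topic that already has a backlog in BookKeeper, you can trigger an offload with the admin CLI and then check its progress (the topic name and size threshold here are illustrative):
+
+```shell
+$ bin/pulsar-admin topics offload --size-threshold 10M persistent://my-tenant/my-ns/my-topic
+$ bin/pulsar-admin topics offload-status persistent://my-tenant/my-ns/my-topic
+```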
+
+> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage.md).
diff --git a/site2/website/versioned_docs/version-2.7.0/concepts-topic-compaction.md b/site2/website/versioned_docs/version-2.7.0/concepts-topic-compaction.md
new file mode 100644
index 0000000..65c574e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/concepts-topic-compaction.md
@@ -0,0 +1,37 @@
+---
+id: version-2.7.0-concepts-topic-compaction
+title: Topic Compaction
+sidebar_label: Topic Compaction
+original_id: concepts-topic-compaction
+---
+
+Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases, but it can also be very time intensive for Pulsar consumers to "rewind" through the entire log of messages.
+
+> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction.md).
+
+For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with each key.
+
+Pulsar's topic compaction feature:
+
+* Allows for faster "rewind" through topic logs
+* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
+* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line (see the example after this list and the [Topic compaction cookbook](cookbooks-compaction.md))
+* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
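+
+For instance, a manual compaction run can be triggered with the admin CLI (the topic name is illustrative):
+
+```shell
+$ bin/pulsar-admin topics compact persistent://public/default/my-topic
+```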
+
+> #### Topic compaction example: the stock ticker
+> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.
+
+
+## How topic compaction works
+
+When topic compaction is triggered [via the CLI](cookbooks-compaction.md), Pulsar will iterate over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.
+
+After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the key doesn't match the latest occurrence, the message will be skipped and left alone. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of tombstones in key-value databases).
+
+After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
+
+* Clients (consumers and readers) that have `readCompacted` enabled will attempt to read messages from the topic and either:
+  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
+  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
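+
+As an illustration, clients opt into the compacted view with `readCompacted(true)`. The following is a minimal Java sketch (the topic name and service URL are hypothetical):
+
+```java
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Reader;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+// Read the compacted view: below the compaction horizon, only the
+// latest message per key is returned.
+Reader<byte[]> reader = client.newReader()
+        .topic("persistent://public/default/stock-ticker")
+        .startMessageId(MessageId.earliest)
+        .readCompacted(true)
+        .create();
+
+while (reader.hasMessageAvailable()) {
+    Message<byte[]> msg = reader.readNext();
+    System.out.printf("key=%s value=%s%n", msg.getKey(), new String(msg.getValue()));
+}
+```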
+
+
diff --git a/site2/website/versioned_docs/version-2.7.0/cookbooks-bookkeepermetadata.md b/site2/website/versioned_docs/version-2.7.0/cookbooks-bookkeepermetadata.md
new file mode 100644
index 0000000..54597cd
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/cookbooks-bookkeepermetadata.md
@@ -0,0 +1,21 @@
+---
+id: version-2.7.0-cookbooks-bookkeepermetadata
+title: BookKeeper Ledger Metadata
+original_id: cookbooks-bookkeepermetadata
+---
+
+Pulsar stores data in BookKeeper ledgers. You can understand the contents of a ledger by inspecting the metadata attached to it.
+Such metadata is stored in ZooKeeper and is readable using the BookKeeper APIs.
+
+Description of current metadata:
+
+| Scope  | Metadata name | Metadata value |
+| ------------- | ------------- | ------------- |
+| All ledgers  | application  | 'pulsar' |
+| All ledgers  | component  | 'managed-ledger', 'schema', 'compacted-topic' |
+| Managed ledgers | pulsar/managed-ledger | name of the ledger |
+| Cursor | pulsar/cursor | name of the cursor |
+| Compacted topic | pulsar/compactedTopic | name of the original topic |
+| Compacted topic | pulsar/compactedTo | id of the last compacted message |
+
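+For example, you can dump the metadata of a specific ledger with the BookKeeper shell (the ledger ID below is illustrative; you can find the ledger IDs of a topic in its internal stats):
+
+```shell
+$ bin/bookkeeper shell ledgermetadata -ledgerid 12345
+```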
+
diff --git a/site2/website/versioned_docs/version-2.7.0/cookbooks-encryption.md b/site2/website/versioned_docs/version-2.7.0/cookbooks-encryption.md
new file mode 100644
index 0000000..08a935a
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/cookbooks-encryption.md
@@ -0,0 +1,170 @@
+---
+id: version-2.7.0-cookbooks-encryption
+title: Pulsar Encryption
+sidebar_label: Encryption
+original_id: cookbooks-encryption
+---
+
+Pulsar encryption allows applications to encrypt messages at the producer and decrypt them at the consumer. Encryption is performed using a public/private key pair configured by the application. Encrypted messages can only be decrypted by consumers with a valid key.
+
+## Asymmetric and symmetric encryption
+
+Pulsar uses a dynamically generated symmetric AES key to encrypt messages (data). The AES key (the data key) is encrypted using an application-provided ECDSA/RSA key pair; as a result, there is no need to share the secret with everyone.
+
+The key is a public/private key pair used for encryption/decryption: the producer uses the public key, and the consumer uses the private key of the pair.
+
+The application configures the producer with the public key. This key is used to encrypt the AES data key. The encrypted data key is sent as part of the message header. Only entities with the private key (in this case, the consumer) are able to decrypt the data key, which is used to decrypt the message.
+
+A message can be encrypted with more than one key. Any one of the keys used for encrypting the message is sufficient to decrypt the message.
+
+Pulsar does not store the encryption key anywhere in the Pulsar service. If you lose or delete the private key, your messages are irretrievably lost and cannot be recovered.
+
+## Producer
+![alt text](assets/pulsar-encryption-producer.jpg "Pulsar Encryption Producer")
+
+## Consumer
+![alt text](assets/pulsar-encryption-consumer.jpg "Pulsar Encryption Consumer")
+
+## Get started
+
+To get started with Pulsar encryption, complete the following steps.
+
+1. Create your ECDSA or RSA public/private key pair.
+
+```shell
+openssl ecparam -name secp521r1 -genkey -param_enc explicit -out test_ecdsa_privkey.pem
+openssl ec -in test_ecdsa_privkey.pem -pubout -outform pkcs8 -out test_ecdsa_pubkey.pem
+```
+2. Add the public and private keys to your key management system, and configure your producers to retrieve public keys and your consumers to retrieve private keys.
+3. Implement the `CryptoKeyReader::getPublicKey()` interface on the producer side and the `CryptoKeyReader::getPrivateKey()` interface on the consumer side; the Pulsar client invokes these to load the keys.
+4. Add the encryption key to the producer configuration: `conf.addEncryptionKey("myapp.key")`
+5. Add the `CryptoKeyReader` implementation to the producer/consumer config: `conf.setCryptoKeyReader(keyReader)`
+6. Sample producer application:
+```java
+class RawFileKeyReader implements CryptoKeyReader {
+
+    String publicKeyFile = "";
+    String privateKeyFile = "";
+
+    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
+        publicKeyFile = pubKeyFile;
+        privateKeyFile = privKeyFile;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+}
+PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
+
+ProducerConfiguration prodConf = new ProducerConfiguration();
+prodConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
+prodConf.addEncryptionKey("myappkey");
+
+Producer producer = pulsarClient.createProducer("persistent://my-tenant/my-ns/my-topic", prodConf);
+
+for (int i = 0; i < 10; i++) {
+    producer.send("my-message".getBytes());
+}
+
+pulsarClient.close();
+```
+7. Sample Consumer Application:
+```java
+class RawFileKeyReader implements CryptoKeyReader {
+
+    String publicKeyFile = "";
+    String privateKeyFile = "";
+
+    RawFileKeyReader(String pubKeyFile, String privKeyFile) {
+        publicKeyFile = pubKeyFile;
+        privateKeyFile = privKeyFile;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPublicKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(publicKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read public key from file " + publicKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+
+    @Override
+    public EncryptionKeyInfo getPrivateKey(String keyName, Map<String, String> keyMeta) {
+        EncryptionKeyInfo keyInfo = new EncryptionKeyInfo();
+        try {
+            keyInfo.setKey(Files.readAllBytes(Paths.get(privateKeyFile)));
+        } catch (IOException e) {
+            System.out.println("ERROR: Failed to read private key from file " + privateKeyFile);
+            e.printStackTrace();
+        }
+        return keyInfo;
+    }
+}
+
+ConsumerConfiguration consConf = new ConsumerConfiguration();
+consConf.setCryptoKeyReader(new RawFileKeyReader("test_ecdsa_pubkey.pem", "test_ecdsa_privkey.pem"));
+PulsarClient pulsarClient = PulsarClient.create("http://localhost:8080");
+Consumer consumer = pulsarClient.subscribe("persistent://my-tenant/my-ns/my-topic", "my-subscriber-name", consConf);
+Message msg = null;
+
+for (int i = 0; i < 10; i++) {
+    msg = consumer.receive();
+    // do something
+    System.out.println("Received: " + new String(msg.getData()));
+}
+
+// Acknowledge the consumption of all messages at once
+consumer.acknowledgeCumulative(msg);
+pulsarClient.close();
+```
+
+## Key rotation
+Pulsar generates a new AES data key every 4 hours or after a certain number of messages are published. The producer automatically fetches the asymmetric public key every 4 hours by calling `CryptoKeyReader::getPublicKey()` to retrieve the latest version.
+
+## Enabling encryption at the producer application:
+If you produce messages that are consumed across application boundaries, you need to ensure that consumers in other applications have access to one of the private keys that can decrypt the messages. This can be done in two ways:
+1. The consumer application provides you access to its public key, which you add to your producer keys.
+1. You grant access to one of the private keys from the pairs used by the producer.
+
+In some cases, the producer may want to encrypt the messages with multiple keys. In this case, add all such keys to the producer configuration. A consumer is able to decrypt the message as long as it has access to at least one of the keys.
+
+For example, if messages need to be encrypted using two keys, `myapp.messagekey1` and `myapp.messagekey2`:
+```java
+conf.addEncryptionKey("myapp.messagekey1");
+conf.addEncryptionKey("myapp.messagekey2");
+```
+## Decrypting encrypted messages at the consumer application:
+Consumers require access to one of the private keys that can decrypt messages produced by the producer. If you would like to receive encrypted messages, create a public/private key pair and give the public key to the producer application so that it can encrypt messages with it.
+
+## Handling Failures:
+* Producer/consumer loses access to the key
+  * The producer action fails and indicates the cause of the failure. The application has the option to proceed with sending unencrypted messages in such cases. Call `conf.setCryptoFailureAction(ProducerCryptoFailureAction)` to control the producer behavior. The default behavior is to fail the request.
+  * If consumption fails due to a decryption failure or missing keys in the consumer, the application has the option to consume the encrypted message or discard it. Call `conf.setCryptoFailureAction(ConsumerCryptoFailureAction)` to control the consumer behavior. The default behavior is to fail the request. The application will never be able to decrypt the messages if the private key is permanently lost.
+* Batch messaging
+  * If decryption fails and the message contains batched messages, the client will not be able to retrieve the individual messages in the batch, so message consumption fails even if `conf.setCryptoFailureAction()` is set to `CONSUME`.
+* If decryption fails, message consumption stops and the application will notice backlog growth in addition to decryption failure messages in the client log. If the application does not have access to the private key needed to decrypt the messages, the only option is to skip or discard the backlogged messages.
+
diff --git a/site2/website/versioned_docs/version-2.7.0/cookbooks-message-queue.md b/site2/website/versioned_docs/version-2.7.0/cookbooks-message-queue.md
new file mode 100644
index 0000000..1082982
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/cookbooks-message-queue.md
@@ -0,0 +1,95 @@
+---
+id: version-2.7.0-cookbooks-message-queue
+title: Using Pulsar as a message queue
+sidebar_label: Message queue
+original_id: cookbooks-message-queue
+---
+
+Message queues are essential components of many large-scale data architectures. If every single work object that passes through your system absolutely *must* be processed in spite of the slowness or downright failure of this or that system component, there's a good chance that you'll need a message queue to step in and ensure that unprocessed data is retained---with correct ordering---until the required actions are taken.
+
+Pulsar is a great choice for a message queue because:
+
+* it was built with [persistent message storage](concepts-architecture-overview.md#persistent-storage) in mind
+* it offers automatic load balancing across [consumers](reference-terminology.md#consumer) for messages on a topic (or custom load balancing if you wish)
+
+> You can use the same Pulsar installation to act as a real-time message bus and as a message queue if you wish (or just one or the other). You can set aside some topics for real-time purposes and other topics for message queue purposes (or use specific namespaces for either purpose if you wish).
+
+
+# Client configuration changes
+
+To use a Pulsar [topic](reference-terminology.md#topic) as a message queue, you should distribute the receiver load on that topic across several consumers (the optimal number of consumers will depend on the load). Each consumer must:
+
+* Establish a [shared subscription](concepts-messaging.md#shared) and use the same subscription name as the other consumers (otherwise the subscription is not shared and the consumers can't act as a processing ensemble)
+* If you'd like to have tight control over message dispatching across consumers, set the consumers' **receiver queue** size very low (potentially even to 0 if necessary). Each Pulsar [consumer](reference-terminology.md#consumer) has a receiver queue that determines how many messages the consumer attempts to fetch at a time. A receiver queue of 1000 (the default), for example, means that the consumer attempts to process 1000 messages from the topic's backlog upon connection. Setting the receiver queue to zero essentially means ensuring that each consumer is only doing one thing at a time.
+
+   The downside to restricting the receiver queue size of consumers is that it limits the potential throughput of those consumers and cannot be used with [partitioned topics](reference-terminology.md#partitioned-topic). Whether the performance/control trade-off is worthwhile will depend on your use case.
+
+## Java clients
+
+Here's an example Java consumer configuration that uses a shared subscription:
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.SubscriptionType;
+
+String SERVICE_URL = "pulsar://localhost:6650";
+String TOPIC = "persistent://public/default/mq-topic-1";
+String subscription = "sub-1";
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl(SERVICE_URL)
+        .build();
+
+Consumer consumer = client.newConsumer()
+        .topic(TOPIC)
+        .subscriptionName(subscription)
+        .subscriptionType(SubscriptionType.Shared)
+        // If you'd like to restrict the receiver queue size
+        .receiverQueueSize(10)
+        .subscribe();
+```
+
+## Python clients
+
+Here's an example Python consumer configuration that uses a shared subscription:
+
+```python
+from pulsar import Client, ConsumerType
+
+SERVICE_URL = "pulsar://localhost:6650"
+TOPIC = "persistent://public/default/mq-topic-1"
+SUBSCRIPTION = "sub-1"
+
+client = Client(SERVICE_URL)
+consumer = client.subscribe(
+    TOPIC,
+    SUBSCRIPTION,
+    # If you'd like to restrict the receiver queue size
+    receiver_queue_size=10,
+    consumer_type=ConsumerType.Shared)
+```
+
+## C++ clients
+
+Here's an example C++ consumer configuration that uses a shared subscription:
+
+```cpp
+#include <pulsar/Client.h>
+
+std::string serviceUrl = "pulsar://localhost:6650";
+std::string topic = "persistent://public/defaultmq-topic-1";
+std::string subscription = "sub-1";
+
+Client client(serviceUrl);
+
+ConsumerConfiguration consumerConfig;
+consumerConfig.setConsumerType(ConsumerShared);
+// If you'd like to restrict the receiver queue size
+consumerConfig.setReceiverQueueSize(10);
+
+Consumer consumer;
+
+Result result = client.subscribe(topic, subscription, consumerConfig, consumer);
+```
+
diff --git a/site2/website/versioned_docs/version-2.7.0/cookbooks-retention-expiry.md b/site2/website/versioned_docs/version-2.7.0/cookbooks-retention-expiry.md
index 9a5cd2f..a32211e 100644
--- a/site2/website/versioned_docs/version-2.7.0/cookbooks-retention-expiry.md
+++ b/site2/website/versioned_docs/version-2.7.0/cookbooks-retention-expiry.md
@@ -316,3 +316,24 @@ $ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
 admin.namespaces().getNamespaceMessageTTL(namespace)
 ```
 
+### Remove the TTL configuration for a namespace
+
+#### pulsar-admin
+
+Use the [`remove-message-ttl`](reference-pulsar-admin.md#pulsar-admin-namespaces-remove-message-ttl) subcommand and specify a namespace.
+
+##### Example
+
+```shell
+$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns
+```
+
+#### REST API
+
+{@inject: endpoint|DELETE|/admin/v2/namespaces/:tenant/:namespace/messageTTL|operation/removeNamespaceMessageTTL}
+
+#### Java
+
+```java
+admin.namespaces().removeNamespaceMessageTTL(namespace)
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.7.0/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000..1542293
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,426 @@
+---
+id: version-2.7.0-deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: Bare metal multi-cluster
+original_id: deploy-bare-metal-multi-cluster
+---
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using it in a startup or on a single team, a single cluster is usually the best choice. For instructions on deploying a single cluster,
+> see the guide [here](deploy-bare-metal.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and install it under the `connectors` directory of the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage.md) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and install it under the `offloaders` directory of the Pulsar directory on every broker node. For more details on how to configure
+> this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. You can distribute clusters across data centers or geographical regions and replicate the clusters amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a [local](#deploy-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
+* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each Pulsar cluster
+* Deploying [brokers](#deploy-brokers) in each Pulsar cluster
+
+If you want to deploy a single Pulsar cluster, see [Clusters and Brokers](getting-started-standalone.md#start-the-cluster).
+
+> #### Run Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you want to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you want to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pul [...]
+
+## System requirement
+Pulsar is currently available for **macOS** and **Linux**. To use Pulsar, you need to install Java 8 from the [Oracle download center](http://www.oracle.com/).
+
+## Install Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{pulsar:version}}/apache-pulsar-{{pulsar:version}}-bin.tar.gz' -O apache-pulsar-{{pulsar:version}}-bin.tar.gz
+  ```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | [Command-line tools](reference-cli-tools.md) of Pulsar, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that Pulsar uses 
+`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
+
+The following directories are created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
+`logs` | Logs that the installation creates
+
+
+## Deploy ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). An independent cluster of machines or the same machines that local ZooKeeper uses can provide the configuration store quorum.
+
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and configuration-related tasks for Pulsar.
+
+You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for deploying a Pulsar instance. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. The following is an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the ID of the node in the `myid` file of each node, which is in `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command looks like `echo 2 > data/zookeeper/myid` and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and each server has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploy the configuration store 
+
+The ZooKeeper cluster that is configured and started up in the section above is a *local* ZooKeeper cluster that you can use to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, you do not need a separate cluster for the configuration store. If, however, you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but run on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method for [local ZooKeeper](#local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create the `myid` files for each server on `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across different geographical regions, the configuration store serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZK quorum members are spread across at least 3 regions and that other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can
+share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`,
+`us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZooKeeper servers named as follows:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario, you can pick the quorum participants from a few clusters and
+let all the others be ZooKeeper observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This method guarantees that writes to the configuration store are possible even if one of these regions is unreachable.
+
+The ZooKeeper configuration on all the servers looks like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers need to have the following parameters:
+
+```properties
+peerType=observer
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon)
+
+```shell
+$ bin/pulsar-daemon start configuration-store
+```
+
+## Cluster metadata initialization
+
+Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The following is an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+```
+
+As you can see from the example above, you need to specify the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+If you use [TLS](security-tls-transport.md), you also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
+
+## Deploy BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
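+
+The following is an example (the host names are illustrative):
+
+```properties
+# Point each bookie at the local ZooKeeper quorum of its Pulsar cluster
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```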
+
+### Start bookies
+
+You can start a bookie in two ways: in the foreground or as a background daemon.
+
+To start a bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, reads them back and finally deletes the ledger.
+
+After you have started all bookies, you can use the `simpletest` command for [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to verify that all bookies in the cluster are running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, having a suitable hardware configuration is essential for the bookies. The following are key dimensions for bookie hardware capacity.
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
+designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID)s controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers acknowledge the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time and the backlog is drained only in case of consumer drain. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+
+
+## Deploy brokers
+
+Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper bookies, you can deploy brokers.
+
+### Broker configuration
+
+You can configure brokers using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the configuration store quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you need to specify only those ZooKeeper servers located in the same cluster).
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata (especially when you use a different port from default) of the cluster.
+
+The following is an example configuration:
+
+```properties
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+
+# Broker data port
+brokerServicePort=6650
+
+# Broker data port for TLS
+brokerServicePortTls=6651
+
+# Port to use to server HTTP request
+webServicePort=8080
+
+# Port to use to server HTTPS request
+webServicePortTls=8443
+```
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware since they do not use the local disk. Choose fast CPUs and 10 Gbps [NICs](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.
+
+### Start the broker service
+
+You can start a broker in the background by using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start broker
+```
+
+You can also start brokers in the foreground by using [`pulsar broker`](reference-cli-tools.md#broker):
+
+```shell
+$ bin/pulsar broker
+```
+
+## Service discovery
+
+[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system if you want. If you use your own system, you need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes), have service discovery systems built in. If you run Pulsar on such a system, you may not need to provide your own service discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup using HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).
+
+To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the ZooKeeper quorum connection string of the cluster and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+```
+
+To start the discovery service:
+
+```shell
+$ bin/pulsar-daemon start discovery
+```
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
+
+The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
+
+```properties
+serviceUrl=http://pulsar.us-west.example.com:8080/
+```
+
+## Provision new tenants
+
+Pulsar is built as a fundamentally multi-tenant system.
+
+
+If a new tenant wants to use the system, you need to create a new tenant. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
+
+
+```shell
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+```
+
+In this command, users identified with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its resources.
+
+Once you create a tenant, you need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+
+The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
+
+```shell
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+```
+
+### Test producer and consumer
+
+
+Everything is now ready to send and receive messages. The quickest way to test the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) client tool.
+
+
+You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+persistent://test-tenant/ns1/my-topic
+```
+
+Start a consumer that creates a subscription on the topic and waits for messages:
+
+```shell
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats every 10 seconds:
+
+```shell
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+```
+
+To report the topic stats:
+
+```shell
+$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/deploy-dcos.md b/site2/website/versioned_docs/version-2.7.0/deploy-dcos.md
new file mode 100644
index 0000000..181a817
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/deploy-dcos.md
@@ -0,0 +1,183 @@
+---
+id: version-2.7.0-deploy-dcos
+title: Deploy Pulsar on DC/OS
+sidebar_label: DC/OS
+original_id: deploy-dcos
+---
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+> `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+In order to run Pulsar on DC/OS, you need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+$ dcos marathon group add PulsarGroups.json
+```
+
+This command deploys Docker container instances in three groups, which together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
+
+
+> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, thus you do not have to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](assets/dcos_command_execute.png)
+
+![DC/OS command executed2](assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that the bookies have been deployed successfully and are now running.
+ 
+![DC/OS bookkeeper running](assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed information, such as the bookie running log.
+
+![DC/OS bookie log](assets/dcos_bookie_log.png)
+
+To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 bookies are under the `available` directory.
+
+![DC/OS bookkeeper in zk](assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker group
+
+Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
+
+![DC/OS broker status](assets/dcos_broker_status.png)
+
+![DC/OS broker running](assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed information, such as the broker running log.
+
+![DC/OS broker log](assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
+
+![DC/OS broker in zk](assets/dcos_broker_in_zookeeper.png)
+
+## The monitor group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you can see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL displays all the bookies and brokers.
+
+![DC/OS prom targets](assets/dcos_prom_targets.png)
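+
+You can also query the collected metrics programmatically through the standard Prometheus HTTP API on the same endpoint. For example, using the example endpoint above:
+
+```bash
+$ curl 'http://192.168.65.121:9090/api/v1/query?query=up'
+```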
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana dashboard](assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that you have a fully deployed Pulsar cluster, you can run a simple consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+You can clone a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo contains a simple Pulsar consumer and producer (you can find more information in the `README` file of the repo).
+
+```bash
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker; you can also use that agent's IP address instead.
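+
+One way to make both substitutions at once is a `sed` one-liner such as the following (a hypothetical example, assuming GNU sed and the repository layout above):
+
+```bash
+$ sed -i 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|g' \
+  src/main/java/tutorial/ConsumerTutorial.java \
+  src/main/java/tutorial/ProducerTutorial.java
+```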
+
+Now, change the message number from 10 to 10000000 in the `main` method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that the producer can produce more messages.
+
+Now compile the project code using the command below:
+
+```bash
+$ mvn clean package
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+```
+
+Execute this command to run the producer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+```
+
+You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer run, you can access running metrics information from Grafana.
+
+![DC/OS pulsar dashboard](assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any time in the following two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+    ![DC/OS pulsar uninstall](assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+    ```bash
+    $ dcos marathon group remove /pulsar
+    ```
diff --git a/site2/website/versioned_docs/version-2.7.0/deploy-kubernetes.md b/site2/website/versioned_docs/version-2.7.0/deploy-kubernetes.md
new file mode 100644
index 0000000..7d2fee5
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/deploy-kubernetes.md
@@ -0,0 +1,11 @@
+---
+id: version-2.7.0-deploy-kubernetes
+title: Deploy Pulsar on Kubernetes
+sidebar_label: Kubernetes
+original_id: deploy-kubernetes
+---
+
+To get up and running with the Pulsar Helm chart as fast as possible, in a **non-production** use case, we provide
+a [quick start guide](getting-started-helm.md) for Proof of Concept (PoC) deployments.
+
+To configure and install a Pulsar cluster on Kubernetes for production usage, follow the complete [Installation Guide](helm-install.md).
\ No newline at end of file
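+
+For reference, a minimal PoC install with the Pulsar Helm chart looks roughly like the sketch below. The chart repository URL is the official one, while the release name, namespace, and `initialize` setting are illustrative choices; consult the guides above for production-grade values:
+
+```bash
+# Add the Apache Pulsar Helm chart repository and install a small PoC release.
+$ helm repo add apache https://pulsar.apache.org/charts
+$ helm repo update
+$ helm install pulsar apache/pulsar \
+  --namespace pulsar --create-namespace \
+  --set initialize=true
+```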
diff --git a/site2/website/versioned_docs/version-2.7.0/developing-cpp.md b/site2/website/versioned_docs/version-2.7.0/developing-cpp.md
new file mode 100644
index 0000000..a1b7f7d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/developing-cpp.md
@@ -0,0 +1,101 @@
+---
+id: version-2.7.0-develop-cpp
+title: Building Pulsar C++ client
+sidebar_label: Building Pulsar C++ client
+original_id: develop-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
+
+## System requirements
+
+You need to have the following installed to use the C++ client:
+
+* [CMake](https://cmake.org/)
+* [Boost](http://www.boost.org/)
+* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6
+* [Log4CXX](https://logging.apache.org/log4cxx)
+* [libcurl](https://curl.haxx.se/libcurl/)
+* [Google Test](https://github.com/google/googletest)
+* [JsonCpp](https://github.com/open-source-parsers/jsoncpp)
+
+## Compilation
+
+There are separate compilation instructions for [MacOS](#macos) and [Linux](#linux). For both systems, start by cloning the Pulsar repository:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+```
+
+### Linux
+
+First, install all of the necessary dependencies:
+
+```shell
+$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
+  libprotobuf-dev protobuf-compiler libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
+```
+
+Then compile and install [Google Test](https://github.com/google/googletest):
+
+```shell
+# If the libgtest-dev version is 1.8.0 or above:
+$ cd /usr/src/googletest
+$ sudo cmake .
+$ sudo make
+$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
+
+# If the libgtest-dev version is lower than 1.8.0:
+$ cd /usr/src/gtest
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgtest.a /usr/lib
+
+$ cd /usr/src/gmock
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgmock.a /usr/lib
+```
+
+Finally, compile the Pulsar client library for C++ inside the Pulsar repo:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
+
+The resulting files, `libpulsar.so` and `libpulsar.a`, are placed in the `lib` folder of the repo, while two tools, `perfProducer` and `perfConsumer`, are placed in the `perf` directory.
+
+### MacOS
+
+First, install all of the necessary dependencies:
+
+```shell
+# OpenSSL installation
+$ brew install openssl
+$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
+$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
+
+# Protocol Buffers installation
+$ brew tap homebrew/versions
+$ brew install protobuf260
+$ brew install boost
+$ brew install log4cxx
+
+# Google Test installation
+$ git clone https://github.com/google/googletest.git
+$ cd googletest
+$ cmake .
+$ make install
+```
+
+Then compile the Pulsar client library in the repo that you cloned:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
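+
+To sanity-check the build on either platform, you can compile a minimal program against the library. The following is a hypothetical smoke test, assuming you are in the repository root and a broker is reachable at `localhost:6650` (on Linux, point `LD_LIBRARY_PATH` at `pulsar-client-cpp/lib` when running; on MacOS, use `DYLD_LIBRARY_PATH`):
+
+```shell
+$ cat > smoke.cc <<'EOF'
+#include <pulsar/Client.h>
+
+int main() {
+    // Connect, create a producer, send one message, and clean up.
+    pulsar::Client client("pulsar://localhost:6650");
+    pulsar::Producer producer;
+    if (client.createProducer("smoke-test-topic", producer) != pulsar::ResultOk) return 1;
+    producer.send(pulsar::MessageBuilder().setContent("hello").build());
+    client.close();
+    return 0;
+}
+EOF
+$ g++ -std=c++11 smoke.cc -o smoke \
+  -I pulsar-client-cpp/include -L pulsar-client-cpp/lib -lpulsar
+```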
diff --git a/site2/website/versioned_docs/version-2.7.0/developing-load-manager.md b/site2/website/versioned_docs/version-2.7.0/developing-load-manager.md
new file mode 100644
index 0000000..6c990f6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/developing-load-manager.md
@@ -0,0 +1,215 @@
+---
+id: version-2.7.0-develop-load-manager
+title: Modular load manager
+sidebar_label: Modular load manager
+original_id: develop-load-manager
+---
+
+The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
+
+## Usage
+
+There are two ways that you can enable the modular load manager:
+
+1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
+2. Using the `pulsar-admin` tool. Here's an example:
+
+   ```shell
+   $ pulsar-admin brokers update-dynamic-config \
+     --config loadManagerClassName \
+     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
+   ```
+
+   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
+
+## Verification
+
+There are a few different ways to determine which load manager is being used:
+
+1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
+
+    ```shell
+   $ bin/pulsar-admin brokers get-all-dynamic-config
+   {
+     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
+   }
+   ```
+
+   If there is no `loadManagerClassName` element, then the default load manager is used.
+
+2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` has many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
+
+    ```json
+    {
+      "bandwidthIn": {
+        "limit": 10240000.0,
+        "usage": 4.256510416666667
+      },
+      "bandwidthOut": {
+        "limit": 10240000.0,
+        "usage": 5.287239583333333
+      },
+      "bundles": [],
+      "cpu": {
+        "limit": 2400.0,
+        "usage": 5.7353247655435915
+      },
+      "directMemory": {
+        "limit": 16384.0,
+        "usage": 1.0
+      }
+    }
+    ```
+
+    With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
+
+    ```json
+    {
+      "systemResourceUsage": {
+        "bandwidthIn": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "bandwidthOut": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "cpu": {
+          "limit": 2400.0,
+          "usage": 0.0
+        },
+        "directMemory": {
+          "limit": 16384.0,
+          "usage": 1.0
+        },
+        "memory": {
+          "limit": 8192.0,
+          "usage": 3903.0
+        }
+      }
+    }
+    ```
+
+3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
+
+    Here is an example from the modular load manager:
+
+    ```
+    ===================================================================================================================
+    ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |4              |0              ||
+    ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ===================================================================================================================
+    ```
+
+    Here is an example from the simple load manager:
+
+    ```
+    ===================================================================================================================
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |0              |0              ||
+    ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
+    ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
+    ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
+    ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
+    ===================================================================================================================
+    ```
+
+It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it has been seen before or whether this is the first time---are handled only by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
+
+## Implementation
+
+### Data
+
+The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
+Here, the available data is subdivided into the bundle data and the broker data.
+
+#### Broker
+
+The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
+one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
+data which is written to ZooKeeper by the leader broker.
+
+##### Local Broker Data
+The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
+
+* CPU usage
+* JVM heap memory usage
+* Direct memory usage
+* Bandwidth in/out usage
+* Most recent total message rate in/out across all bundles
+* Total number of topics, bundles, producers, and consumers
+* Names of all bundles assigned to this broker
+* Most recent changes in bundle assignments for this broker
+
+The local broker data is updated periodically according to the service configuration
+`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker
+receives the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
+`/loadbalance/brokers/<broker host/port>`.
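+
+To inspect a broker's local data directly, you can read that node with any ZooKeeper client. The following is a hypothetical check, assuming ZooKeeper at `localhost:2181` and a broker advertised as `broker1:8080`:
+
+```shell
+$ bin/pulsar zookeeper-shell -server localhost:2181 get /loadbalance/brokers/broker1:8080
+```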
+
+##### Historical Broker Data
+
+The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
+
+In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
+
+* Message rate in/out for the entire broker
+* Message throughput in/out for the entire broker
+
+Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are added or removed. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
+
+The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+##### Bundle Data
+
+The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
+
+* Message rate in/out for this bundle
+* Message Throughput In/Out for this bundle
+* Current number of samples for this bundle
+
+The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
+the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
+for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
+short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
+data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
+the average is taken only over the existing samples. When no samples are available, default values are assumed until
+they are overwritten by the first sample. Currently, the default values are
+
+* Message rate in/out: 50 messages per second both ways
+* Message throughput in/out: 50KB per second both ways
+
+The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
+Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
+broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+### Traffic Distribution
+
+The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
+
+#### Least Long Term Message Rate Strategy
+
+As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
+the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
+on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
+resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
+assignment process. This is done by weighting the final message rate according to
+`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
+`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
+that is being utilized by the candidate broker. For example, with an overload threshold of 0.85, a broker whose maximum
+resource usage is 0.65 weights its message rate by `1 / (0.85 - 0.65) = 5`, while a broker at 0.80 usage weights it by
+`1 / (0.85 - 0.80) = 20` and therefore appears far more loaded. This multiplier ensures that machines that are more
+heavily taxed by the same message rates will receive less load. In particular, it tries to ensure that if one machine
+is overloaded, then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the
+overload threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is
+randomly assigned.
+
diff --git a/site2/website/versioned_docs/version-2.7.0/developing-tools.md b/site2/website/versioned_docs/version-2.7.0/developing-tools.md
new file mode 100644
index 0000000..fa094be
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/developing-tools.md
@@ -0,0 +1,106 @@
+---
+id: version-2.7.0-develop-tools
+title: Simulation tools
+sidebar_label: Simulation tools
+original_id: develop-tools
+---
+
+It is sometimes necessary to create a test environment and incur artificial load to observe how well load managers
+handle the load. The load simulation controller, the load simulation client, and the broker monitor were created in an
+effort to make creating this load and observing its effects on the managers easier.
+
+## Simulation Client
+The simulation client is a machine that creates and subscribes to topics with configurable message rates and sizes.
+Because simulating a large load sometimes requires multiple client machines, the user does not interact
+with the simulation client directly, but instead delegates requests to the simulation controller, which then
+sends signals to clients to start incurring load. The client implementation is in the class
+`org.apache.pulsar.testclient.LoadSimulationClient`.
+
+### Usage
+To start a simulation client, use the `pulsar-perf` script with the command `simulation-client` as follows:
+
+```
+pulsar-perf simulation-client --port <listen port> --service-url <pulsar service url>
+```
+
+The client will then be ready to receive controller commands.
+
+## Simulation Controller
+The simulation controller sends signals to the simulation clients, requesting them to create new topics, stop old
+topics, change the load incurred by topics, and perform several other tasks. It is implemented in the class
+`org.apache.pulsar.testclient.LoadSimulationController` and presents a shell to the user as an interface to send
+commands with.
+
+### Usage
+To start a simulation controller, use the `pulsar-perf` script with the command `simulation-controller` as follows:
+
+```
+pulsar-perf simulation-controller --cluster <cluster to simulate on> --client-port <listen port for clients>
+--clients <comma-separated list of client host names>
+```
+
+The clients should already be started before the controller is started. You will then be presented with a simple prompt,
+where you can issue commands to simulation clients. Arguments often refer to tenant names, namespace names, and topic
+names. In all cases, the BASE name of the tenants, namespaces, and topics are used. For example, for the topic
+`persistent://my_tenant/my_cluster/my_namespace/my_topic`, the tenant name is `my_tenant`, the namespace name is
+`my_namespace`, and the topic name is `my_topic`. The controller can perform the following actions:
+
+* Create a topic with a producer and a consumer
+    * `trade <tenant> <namespace> <topic> [--rate <message rate per second>]
+    [--rand-rate <lower bound>,<upper bound>]
+    [--size <message size in bytes>]`
+* Create a group of topics with a producer and a consumer
+    * `trade_group <tenant> <group> <num_namespaces> [--rate <message rate per second>]
+    [--rand-rate <lower bound>,<upper bound>]
+    [--separation <separation between creating topics in ms>] [--size <message size in bytes>]
+    [--topics-per-namespace <number of topics to create per namespace>]`
+* Change the configuration of an existing topic
+    * `change <tenant> <namespace> <topic> [--rate <message rate per second>]
+    [--rand-rate <lower bound>,<upper bound>]
+    [--size <message size in bytes>]`
+* Change the configuration of a group of topics
+    * `change_group <tenant> <group> [--rate <message rate per second>] [--rand-rate <lower bound>,<upper bound>]
+    [--size <message size in bytes>] [--topics-per-namespace <number of topics to create per namespace>]`
+* Shut down a previously created topic
+    * `stop <tenant> <namespace> <topic>`
+* Shut down a previously created group of topics
+    * `stop_group <tenant> <group>`
+* Copy the historical data from one ZooKeeper to another and simulate based on the message rates and sizes in that
+history
+    * `copy <tenant> <source zookeeper> <target zookeeper> [--rate-multiplier value]`
+* Simulate the load of the historical data on the current ZooKeeper (should be the same ZooKeeper being simulated on)
+    * `simulate <tenant> <zookeeper> [--rate-multiplier value]`
+* Stream the latest data from the given active ZooKeeper to simulate the real-time load of that ZooKeeper.
+    * `stream <tenant> <zookeeper> [--rate-multiplier value]`
+
+The "group" arguments in these commands allow the user to create or affect multiple topics at once. Groups are created
+when calling the `trade_group` command, and all topics from these groups may be subsequently modified or stopped
+with the `change_group` and `stop_group` commands respectively. All ZooKeeper arguments are of the form
+`zookeeper_host:port`.
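+
+For example, a hypothetical session at the controller prompt that creates a topic trading 100 messages per second of 1024 bytes each, and later shuts it down, might look like this:
+
+```
+trade my_tenant my_namespace my_topic --rate 100 --size 1024
+stop my_tenant my_namespace my_topic
+```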
+
+### Difference Between Copy, Simulate, and Stream
+The commands `copy`, `simulate`, and `stream` are very similar but have significant differences. `copy` is used when
+you want to simulate the load of a static, external ZooKeeper on the ZooKeeper you are simulating on. Thus,
+`source zookeeper` should be the ZooKeeper you want to copy and `target zookeeper` should be the ZooKeeper you are
+simulating on, and then it will get the full benefit of the historical data of the source in both load manager
+implementations. `simulate` on the other hand takes in only one ZooKeeper, the one you are simulating on. It assumes
+that you are simulating on a ZooKeeper that has historical data for `SimpleLoadManagerImpl` and creates equivalent
+historical data for `ModularLoadManagerImpl`. Then, the load according to the historical data is simulated by the
+clients. Finally, `stream` takes in an active ZooKeeper different from the ZooKeeper being simulated on and streams
+load data from it to simulate the real-time load. In all cases, the optional `rate-multiplier` argument allows the
+user to simulate some proportion of the load. For instance, using `--rate-multiplier 0.05` will cause messages to
+be sent at only `5%` of the rate of the load that is being simulated.
+
+## Broker Monitor
+To observe the behavior of the load manager in these simulations, you can use the broker monitor, which is
+implemented in `org.apache.pulsar.testclient.BrokerMonitor`. The broker monitor prints tabular load data to the
+console as it is updated, using ZooKeeper watchers.
+
+### Usage
+To start a broker monitor, use the `monitor-brokers` command in the `pulsar-perf` script:
+
+```
+pulsar-perf monitor-brokers --connect-string <zookeeper host:port>
+```
+
+The console will then continuously print load data until it is interrupted.
+
diff --git a/site2/website/versioned_docs/version-2.7.0/functions-cli.md b/site2/website/versioned_docs/version-2.7.0/functions-cli.md
new file mode 100644
index 0000000..d0f71ca
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/functions-cli.md
@@ -0,0 +1,198 @@
+---
+id: version-2.7.0-functions-cli
+title: Pulsar Functions command line tool
+sidebar_label: Reference: CLI
+original_id: functions-cli
+---
+
+The following tables list the modes, commands, and parameters of the Pulsar Functions command-line tool.
+
+## localrun
+
+Run a Pulsar Function locally rather than deploying it to a Pulsar cluster. See the example invocation after the table.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+broker-service-url | The URL for the Pulsar broker. | |
+classname | The class name of a Pulsar Function.| |
+client-auth-params | Client authentication parameter. | |
+client-auth-plugin | The client authentication plugin that the function process uses to connect to the broker. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions.  | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+hostname-verification-enabled | Enable hostname verification. | false
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+instance-id-offset | Start the instanceIds from this offset. | 0
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of  a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. | |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+tls-allow-insecure | Allow insecure tls connection. | false
+tls-trust-cert-path | tls trust cert file path. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+use-tls | Use tls connection. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
+
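+For example, an illustrative `localrun` invocation, where the jar path, class name, and topic names are placeholders:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --jar my-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+```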
+
+## create
+
+Create and deploy a Pulsar Function in cluster mode.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+classname | The class name of a Pulsar Function. |  |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime).| |
+custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
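+
+For example, an illustrative `create` invocation, where the jar path, class name, and topic names are placeholders:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out \
+  --name my-function
+```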
+
+## delete
+
+Delete a Pulsar Function that is running on a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## update
+
+Update a Pulsar Function that has been deployed to a Pulsar cluster.
+
+Name | Description | Default
+---|---|---
+auto-ack | Whether or not the framework acknowledges messages automatically. | true |
+classname | The class name of a Pulsar Function. | |
+CPU | The CPU in cores that need to be allocated per function instance (applicable only to docker runtime). | |
+custom-runtime-options | A string that encodes options to customize the runtime, see docs for configured runtime for details | |
+custom-schema-inputs | The map of input topics to Schema class names (as a JSON string). | |
+custom-serde-inputs | The map of input topics to SerDe class names (as a JSON string). | |
+dead-letter-topic | The topic where all messages that were not processed successfully are sent. This parameter is not supported in Python Functions. | |
+disk | The disk in bytes that need to be allocated per function instance (applicable only to docker runtime). | |
+fqfn | The Fully Qualified Function Name (FQFN) for the function. |  |
+function-config-file | The path to a YAML config file specifying the configuration of a Pulsar Function. |  |
+go | Path to the main Go executable binary for the function (if the function is written in Go). |  |
+inputs | The input topic or topics of a Pulsar Function (multiple topics can be specified as a comma-separated list). | |
+jar | Path to the jar file for the function (if the function is written in Java). It also supports URL-path [http/https/file (file protocol assumes that file already exists on worker host)] from which worker can download the package. |  |
+log-topic | The topic to which the logs of a Pulsar Function are produced. |  |
+max-message-retries | How many times should we try to process a message before giving up. |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+output | The output topic of a Pulsar Function (If none is specified, no output is written). |  |
+output-serde-classname | The SerDe class to be used for messages output by the function. |  |
+parallelism | The parallelism factor of a Pulsar Function (i.e. the number of function instances to run). |  |
+processing-guarantees | The processing guarantees (delivery semantics) applied to the function. Available values: [ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE]. | ATLEAST_ONCE
+py | Path to the main Python file/Python Wheel file for the function (if the function is written in Python). |  |
+ram | The ram in bytes that need to be allocated per function instance (applicable only to process/docker runtime). |  |
+retain-ordering | Function consumes and processes messages in order. |  |
+schema-type | The builtin schema type or custom schema class name to be used for messages output by the function. | <empty string>
+sliding-interval-count | The number of messages after which the window slides. |  |
+sliding-interval-duration-ms | The time duration after which the window slides. |  |
+subs-name | Pulsar source subscription name if user wants a specific subscription-name for the input-topic consumer. |  |
+tenant | The tenant of a Pulsar Function. |  |
+timeout-ms | The message timeout in milliseconds. |  |
+topics-pattern | The topic pattern to consume from list of topics under a namespace that match the pattern. [--input] and [--topic-pattern] are mutually exclusive. Add SerDe class name for a pattern in --custom-serde-inputs (only supported in Java Function). |  |
+update-auth-data | Whether or not to update the auth data. | false
+user-config | User-defined config key/values. |  |
+window-length-count | The number of messages per window. |  |
+window-length-duration-ms | The time duration of the window in milliseconds. | |
+
+## get
+
+Fetch information about a Pulsar Function.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## restart
+
+Restart a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (restart all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## stop
+
+Stop a function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (stop all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
+
+## start
+
+Start a stopped function instance.
+
+Name | Description | Default
+---|---|---
+fqfn | The Fully Qualified Function Name (FQFN) for the function. | |
+instance-id | The function instanceId (start all instances if instance-id is not provided). |  |
+name | The name of a Pulsar Function. |  |
+namespace | The namespace of a Pulsar Function. |  |
+tenant | The tenant of a Pulsar Function. |  |
diff --git a/site2/website/versioned_docs/version-2.7.0/functions-debug.md b/site2/website/versioned_docs/version-2.7.0/functions-debug.md
new file mode 100644
index 0000000..c143745
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/functions-debug.md
@@ -0,0 +1,461 @@
+---
+id: version-2.7.0-functions-debug
+title: Debug Pulsar Functions
+sidebar_label: How-to: Debug
+original_id: functions-debug
+---
+
+You can use the following methods to debug Pulsar Functions:
+
+* [Captured stderr](functions-debug.md#captured-stderr)
+* [Use unit test](functions-debug.md#use-unit-test)
+* [Debug with localrun mode](functions-debug.md#debug-with-localrun-mode)
+* [Use log topic](functions-debug.md#use-log-topic)
+* [Use Functions CLI](functions-debug.md#use-functions-cli)
+
+## Captured stderr
+
+Function startup information and captured stderr output are written to `logs/functions/<tenant>/<namespace>/<function>/<function>-<instance>.log`.
+
+This is useful for debugging why a function fails to start.
+
+## Use unit test
+
+A Pulsar Function is a function with inputs and outputs, so you can test a Pulsar Function in a similar way as you test any other function.
+
+For example, if you have the following Pulsar Function:
+
+```java
+import java.util.function.Function;
+
+public class JavaNativeExclamationFunction implements Function<String, String> {
+   @Override
+   public String apply(String input) {
+       return String.format("%s!", input);
+   }
+}
+```
+
+You can write a simple unit test to test this Pulsar Function.
+
+> #### Tip
+> Pulsar uses testng for testing.
+
+```java
+@Test
+public void testJavaNativeExclamationFunction() {
+   JavaNativeExclamationFunction exclamation = new JavaNativeExclamationFunction();
+   String output = exclamation.apply("foo");
+   Assert.assertEquals(output, "foo!");
+}
+```
+
+The following Pulsar Function implements the `org.apache.pulsar.functions.api.Function` interface.
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+   @Override
+   public String process(String input, Context context) {
+       return String.format("%s!", input);
+   }
+}
+```
+
+In this situation, you can write a unit test for this function as well. Remember to mock the `Context` parameter. The following is an example.
+
+> #### Tip
+> Pulsar uses testng for testing.
+
+```java
+@Test
+public void testExclamationFunction() {
+   ExclamationFunction exclamation = new ExclamationFunction();
+   String output = exclamation.process("foo", mock(Context.class));
+   Assert.assertEquals(output, "foo!");
+}
+```
+
+## Debug with localrun mode
+When you run a Pulsar Function in localrun mode, it launches an instance of the Function on your local machine as a thread.
+
+In this mode, a Pulsar Function consumes and produces actual data to a Pulsar cluster, and mirrors how the function actually runs in a Pulsar cluster.
+
+> Note  
+> Currently, debugging with localrun mode is only supported by Pulsar Functions written in Java. You need Pulsar version 2.4.0 or later to do the following. Even though localrun is available in versions earlier than Pulsar 2.4.0, you cannot debug with localrun mode programmatically or run Functions as threads.
+
+You can launch your function in the following manner.
+
+```java
+FunctionConfig functionConfig = new FunctionConfig();
+functionConfig.setName(functionName);
+functionConfig.setInputs(Collections.singleton(sourceTopic));
+functionConfig.setClassName(ExclamationFunction.class.getName());
+functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+functionConfig.setOutput(sinkTopic);
+
+LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
+localRunner.start(true);
+```
+
+In this way, you can easily debug functions using an IDE. Set breakpoints and manually step through a function to debug with real data.
+
+The following example illustrates how to programmatically launch a function in localrun mode.
+
+```java
+public class ExclamationFunction implements Function<String, String> {
+
+    @Override
+    public String process(String s, Context context) throws Exception {
+        return s + "!";
+    }
+
+    public static void main(String[] args) throws Exception {
+        FunctionConfig functionConfig = new FunctionConfig();
+        functionConfig.setName("exclamation");
+        functionConfig.setInputs(Collections.singleton("input"));
+        functionConfig.setClassName(ExclamationFunction.class.getName());
+        functionConfig.setRuntime(FunctionConfig.Runtime.JAVA);
+        functionConfig.setOutput("output");
+
+        LocalRunner localRunner = LocalRunner.builder().functionConfig(functionConfig).build();
+        localRunner.start(false);
+    }
+}
+```
+
+To use localrun mode programmatically, add the following dependency.
+
+```xml
+<dependency>
+   <groupId>org.apache.pulsar</groupId>
+   <artifactId>pulsar-functions-local-runner</artifactId>
+   <version>${pulsar.version}</version>
+</dependency>
+```
+
+For complete code samples, see [here](https://github.com/jerrypeng/pulsar-functions-demos/tree/master/debugging).
+
+> Note   
+> Debugging with localrun mode for Pulsar Functions written in other languages will be supported soon.
+
+## Use log topic
+
+In Pulsar Functions, you can generate log information defined in functions to a specified log topic. You can configure consumers to consume messages from a specified log topic to check the log information.
+
+![Pulsar Functions core programming model](assets/pulsar-functions-overview.png)
+
+**Example** 
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String messageId = new String(context.getMessageId());
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+```
+
+As shown in the example above, you can get the logger via `context.getLogger()` and assign it to a `LOG` variable of the `slf4j` `Logger` type, so you can define your desired log information in a function using the `LOG` variable. Meanwhile, you need to specify the topic to which the log information is produced.
+
+**Example** 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+```
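+
+You can then watch the log topic with any consumer, for example with the `pulsar-client` tool (the subscription name here is an arbitrary choice; `-n 0` keeps consuming indefinitely):
+
+```bash
+$ bin/pulsar-client consume persistent://public/default/logging-function-logs \
+  -s log-reader \
+  -n 0
+```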
+
+## Use Functions CLI
+
+With [Pulsar Functions CLI](reference-pulsar-admin.md#functions), you can debug Pulsar Functions with the following subcommands:
+
+* `get`
+* `status`
+* `stats`
+* `list`
+* `trigger`
+
+> **Tip**
+> 
+> For the complete commands of the **Pulsar Functions CLI**, see [here](reference-pulsar-admin.md#functions).
+
+### `get`
+
+Get information about a Pulsar Function.
+
+**Usage**
+
+```bash
+$ pulsar-admin functions get options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--name`|The name of a Pulsar Function.
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+> **Tip**
+> 
+> `--fqfn` consists of `--name`, `--namespace` and `--tenant`, so you can specify either `--fqfn` or `--name`, `--namespace` and `--tenant`.
+
+**Example** 
+
+You can specify `--fqfn` to get information about a Pulsar Function.
+
+```bash
+$ ./bin/pulsar-admin functions get public/default/ExclamationFunctio6
+```
+Optionally, you can specify `--name`, `--namespace` and `--tenant` to get information about a Pulsar Function.
+
+```bash
+$ ./bin/pulsar-admin functions get \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6
+```
+
+As shown below, the `get` command shows input, output, runtime, and other information about the _ExclamationFunctio6_ function.
+
+```json
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "ExclamationFunctio6",
+  "className": "org.example.test.ExclamationFunction",
+  "inputSpecs": {
+    "persistent://public/default/my-topic-1": {
+      "isRegexPattern": false
+    }
+  },
+  "output": "persistent://public/default/test-1",
+  "processingGuarantees": "ATLEAST_ONCE",
+  "retainOrdering": false,
+  "userConfig": {},
+  "runtime": "JAVA",
+  "autoAck": true,
+  "parallelism": 1
+}
+```
+
+### `status`
+
+Check the current status of a Pulsar Function.
+
+**Usage**
+
+```bash
+$ pulsar-admin functions status options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--instance-id`|The instance ID of a Pulsar Function. <br>If `--instance-id` is not specified, the status of all instances is returned.<br>
+|`--name`|The name of a Pulsar Function. 
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example** 
+
+```bash
+$ ./bin/pulsar-admin functions status \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6
+```
+
+As shown below, the `status` command shows the number of instances, the number of running instances, and, for each instance of the _ExclamationFunctio6_ function, the received messages, the successfully processed messages, the system exceptions, the average latency, and so on.
+
+```json
+{
+  "numInstances" : 1,
+  "numRunning" : 1,
+  "instances" : [ {
+    "instanceId" : 0,
+    "status" : {
+      "running" : true,
+      "error" : "",
+      "numRestarts" : 0,
+      "numReceived" : 1,
+      "numSuccessfullyProcessed" : 1,
+      "numUserExceptions" : 0,
+      "latestUserExceptions" : [ ],
+      "numSystemExceptions" : 0,
+      "latestSystemExceptions" : [ ],
+      "averageLatency" : 0.8385,
+      "lastInvocationTime" : 1557734137987,
+      "workerId" : "c-standalone-fw-23ccc88ef29b-8080"
+    }
+  } ]
+}
+```
+
+### `stats`
+
+Get the current stats of a Pulsar Function.
+
+**Usage**
+
+```bash
+$ pulsar-admin functions stats options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--instance-id`|The instance ID of a Pulsar Function. <br>If `--instance-id` is not specified, the stats of all instances are returned.<br>
+|`--name`|The name of a Pulsar Function. 
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example**
+
+```bash
+$ ./bin/pulsar-admin functions stats \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6
+```
+
+The output is shown as follows:
+
+```json
+{
+  "receivedTotal" : 1,
+  "processedSuccessfullyTotal" : 1,
+  "systemExceptionsTotal" : 0,
+  "userExceptionsTotal" : 0,
+  "avgProcessLatency" : 0.8385,
+  "1min" : {
+    "receivedTotal" : 0,
+    "processedSuccessfullyTotal" : 0,
+    "systemExceptionsTotal" : 0,
+    "userExceptionsTotal" : 0,
+    "avgProcessLatency" : null
+  },
+  "lastInvocation" : 1557734137987,
+  "instances" : [ {
+    "instanceId" : 0,
+    "metrics" : {
+      "receivedTotal" : 1,
+      "processedSuccessfullyTotal" : 1,
+      "systemExceptionsTotal" : 0,
+      "userExceptionsTotal" : 0,
+      "avgProcessLatency" : 0.8385,
+      "1min" : {
+        "receivedTotal" : 0,
+        "processedSuccessfullyTotal" : 0,
+        "systemExceptionsTotal" : 0,
+        "userExceptionsTotal" : 0,
+        "avgProcessLatency" : null
+      },
+      "lastInvocation" : 1557734137987,
+      "userMetrics" : { }
+    }
+  } ]
+}
+```
+
+### `list`
+
+List all Pulsar Functions running under a specific tenant and namespace.
+
+**Usage**
+
+```bash
+$ pulsar-admin functions list options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+
+**Example** 
+
+```bash
+$ ./bin/pulsar-admin functions list \
+    --tenant public \
+    --namespace default
+```
+As shown below, the `list` command returns three functions running under the _public_ tenant and the _default_ namespace.
+
+```text
+ExclamationFunctio1
+ExclamationFunctio2
+ExclamationFunctio3
+```
+
+### `trigger`
+
+Trigger a specified Pulsar Function with a supplied value. This command simulates the execution process of a Pulsar Function and verifies it.
+
+**Usage**
+
+```bash
+$ pulsar-admin functions trigger options
+```
+
+**Options**
+
+|Flag|Description
+|---|---
+|`--fqfn`|The Fully Qualified Function Name (FQFN) of a Pulsar Function.
+|`--name`|The name of a Pulsar Function.
+|`--namespace`|The namespace of a Pulsar Function.
+|`--tenant`|The tenant of a Pulsar Function.
+|`--topic`|The topic name that a Pulsar Function consumes from.
+|`--trigger-file`|The path to a file that contains the data to trigger a Pulsar Function.
+|`--trigger-value`|The value to trigger a Pulsar Function.
+
+**Example** 
+
+```bash
+$ ./bin/pulsar-admin functions trigger \
+    --tenant public \
+    --namespace default \
+    --name ExclamationFunctio6 \
+    --topic persistent://public/default/my-topic-1 \
+    --trigger-value "hello pulsar functions"
+```
+
+As shown below, the `trigger` command returns the following result:
+
+```text
+This is my function!
+```
+
+> #### **Note**
+> You must specify the [entire topic name](getting-started-pulsar.md#topic-names) when using the `--topic` option. Otherwise, the following error occurs.
+>
+>```text
+>Function in trigger function has unidentified topic
+>
+>Reason: Function in trigger function has unidentified topic
+>```
diff --git a/site2/website/versioned_docs/version-2.7.0/functions-deploy.md b/site2/website/versioned_docs/version-2.7.0/functions-deploy.md
new file mode 100644
index 0000000..5d85385
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/functions-deploy.md
@@ -0,0 +1,211 @@
+---
+id: version-2.7.0-functions-deploy
+title: Deploy Pulsar Functions
+sidebar_label: How-to: Deploy
+original_id: functions-deploy
+---
+
+## Requirements
+
+To deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this:
+
+* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine.
+* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](deploy-dcos.md), and more.
+
+If you run a non-[standalone](reference-terminology.md#standalone) cluster, you need to obtain the service URL for the cluster. How you obtain the service URL depends on how you deploy your Pulsar cluster.
+
+If you want to deploy and trigger Python user-defined functions, you need to install [the Pulsar Python client](http://pulsar.apache.org/docs/en/client-libraries-python/) on all the machines that run [functions workers](functions-worker.md).
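+
+For example, assuming `pip` is available on each functions worker machine:
+
+```bash
+$ pip install pulsar-client
+```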
+
+## Command-line interface
+
+Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#trigger-pulsar-functions) functions, and [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions.
+
+To learn more commands, refer to [`pulsar-admin functions`](reference-pulsar-admin.md#functions).
+
+### Default arguments
+
+When managing Pulsar Functions, you need to specify a variety of information about functions, including tenant, namespace, input and output topics, and so on. However, some parameters have default values if you do not specify values for them. The following table lists the default values.
+
+Parameter | Default
+:---------|:-------
+Function name | Derived from the class name: when you do not specify a function name, the final segment of the fully qualified class name is used. For example, when you specify the flag `--classname org.example.MyFunction`, the function name is `MyFunction`.
+Tenant | Derived from names of the input topics. If the input topics are under the `marketing` tenant, which means the topic names have the form `persistent://marketing/{namespace}/{topicName}`, the tenant is `marketing`.
+Namespace | Derived from names of the input topics. If the input topics are under the `asia` namespace under the `marketing` tenant, which means the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace is `asia`.
+Output topic | `{input topic}-{function name}-output`. For example, if an input topic name of a function is `incoming`, and the function name is `exclamation`, then the name of the output topic is `incoming-exclamation-output`.
+Subscription type | For `at-least-once` and `at-most-once` [processing guarantees](functions-overview.md#processing-guarantees), the [`SHARED`](concepts-messaging.md#shared) mode is applied by default; for `effectively-once` guarantees, the [`FAILOVER`](concepts-messaging.md#failover) mode is applied.
+Processing guarantees | [`ATLEAST_ONCE`](functions-overview.md#processing-guarantees)
+Pulsar service URL | `pulsar://localhost:6650`
+
+### Example of default arguments
+
+Take the `create` command as an example.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-pulsar-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs my-function-input-topic1,my-function-input-topic2
+```
+
+The function has default values for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`).
+
+## Local run mode
+
+If you run a Pulsar Function in **local run** mode, it runs on the machine from which you enter the commands (on your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, and so on). The following is a [`localrun`](reference-pulsar-admin.md#localrun) command example.
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+By default, the function connects to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--brokerServiceUrl` flag. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --broker-service-url pulsar://my-cluster-host:6650 \
+  # Other function parameters
+```
+
+## Cluster mode
+
+When you run a Pulsar Function in **cluster** mode, the function code is uploaded to a Pulsar broker and runs *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+### Update functions in cluster mode 
+
+You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. The following command updates the function created in the [cluster mode](#cluster-mode) section.
+
+```bash
+$ bin/pulsar-admin functions update \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/new-input-topic \
+  --output persistent://public/default/new-output-topic
+```
+
+### Parallelism
+
+Pulsar Functions run as processes or threads, which are called **instances**. When you run a Pulsar Function, it runs as a single instance by default. A single `localrun` command runs only one instance of a function; if you want to run multiple instances locally, run the `localrun` command multiple times.
+
+When you create a function, you can specify the *parallelism* of a function (the number of instances to run). You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. 
+
+```bash
+$ bin/pulsar-admin functions create \
+  --parallelism 3 \
+  # Other function info
+```
+
+You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
+
+```bash
+$ bin/pulsar-admin functions update \
+  --parallelism 5 \
+  # Other function
+```
+
+If you specify a function configuration via YAML, use the `parallelism` parameter. The following is a config file example.
+
+```yaml
+# function-config.yaml
+parallelism: 3
+inputs:
+- persistent://public/default/input-1
+output: persistent://public/default/output-1
+# other parameters
+```
+
+The following is the corresponding update command.
+
+```bash
+$ bin/pulsar-admin functions update \
+  --function-config-file function-config.yaml
+```
+
+### Function instance resources
+
+When you run Pulsar Functions in [cluster mode](#cluster-mode), you can specify the resources that are assigned to each function [instance](#parallelism).
+
+Resource | Specified as | Runtimes
+:--------|:----------------|:--------
+CPU | The number of cores | Kubernetes
+RAM | The number of bytes | Process, Docker
+Disk space | The number of bytes | Docker
+
+The following function creation command allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-functions.jar \
+  --classname org.example.functions.MyFunction \
+  --cpu 8 \
+  --ram 8589934592 \
+  --disk 10737418240
+```
+
+> #### Resources are *per instance*
+> The resources that you apply to a given Pulsar Function are applied to each instance of the function. For example, if you apply 8 GB of RAM to a function with a parallelism of 5, you are applying 40 GB of RAM for the function in total. Make sure that you take the parallelism (the number of instances) factor into your resource calculations.
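+
+For example, the following is a hedged sketch combining parallelism and per-instance resources (the values are illustrative):
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-functions.jar \
+  --classname org.example.functions.MyFunction \
+  --parallelism 5 \
+  --ram 8589934592 \
+  # Other function configs
+```
+
+This requests 8 GB of RAM per instance; with a parallelism of 5, the function can consume up to 40 GB of RAM in total.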
+
+## Trigger Pulsar Functions
+
+If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function output (if any) via the command line.
+
+> Triggering a function means invoking it by producing a message on one of its input topics. With the [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command, you can send messages to functions without using the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
+
+To learn how to trigger a function, you can start with a Python function that returns a simple string based on the input.
+
+```python
+# myfunc.py
+def process(input):
+    return "This function has been triggered with a value of {0}".format(input)
+```
+
+You can run the function in [cluster mode](#cluster-mode).
+
+```bash
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --py myfunc.py \
+  --classname myfunc \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+```
+
+Then assign a consumer to listen on the output topic for messages from the `myfunc` function with the [`pulsar-client consume`](reference-cli-tools.md#consume) command.
+
+```bash
+$ bin/pulsar-client consume persistent://public/default/out \
+  --subscription-name my-subscription \
+  --num-messages 0 # Listen indefinitely
+```
+
+And then you can trigger the function.
+
+```bash
+$ bin/pulsar-admin functions trigger \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --trigger-value "hello world"
+```
+
+The consumer listening on the output topic prints something like the following in its log.
+
+```text
+----- got message -----
+This function has been triggered with a value of hello world
+```
+
+> #### Topic info is not required
+> In the `trigger` command, you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you do not need to know the function input topics.
diff --git a/site2/website/versioned_docs/version-2.7.0/functions-metrics.md b/site2/website/versioned_docs/version-2.7.0/functions-metrics.md
new file mode 100644
index 0000000..dd8aa69
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/functions-metrics.md
@@ -0,0 +1,7 @@
+---
+id: version-2.7.0-functions-metrics
+title: Metrics for Pulsar Functions
+sidebar_label: Metrics
+original_id: functions-metrics
+---
+
diff --git a/site2/website/versioned_docs/version-2.7.0/functions-overview.md b/site2/website/versioned_docs/version-2.7.0/functions-overview.md
new file mode 100644
index 0000000..4e40793
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/functions-overview.md
@@ -0,0 +1,192 @@
+---
+id: version-2.7.0-functions-overview
+title: Pulsar Functions overview
+sidebar_label: Overview
+original_id: functions-overview
+---
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics,
+* apply user-supplied processing logic to each message, and
+* publish the results of the computation to another topic.
+
+
+## Goals
+With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), or [Apache Flink](https://flink.apache.org/)). Pulsar Functions are the computing infrastructure of the Pulsar messaging system. This core goal is tied to a series of other goals:
+
+* Developer productivity (language-native vs Pulsar Functions SDK functions)
+* Easy troubleshooting
+* Operational simplicity (no need for an external processing system)
+
+## Inspirations
+Pulsar Functions are inspired by (and take cues from) several systems and paradigms:
+
+* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
+* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
+
+Pulsar Functions can be described as [Lambda](https://aws.amazon.com/lambda/)-style functions that are specifically designed to use Pulsar as a message bus.
+
+## Programming model
+Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function completes the following tasks.
+
+  * Apply some processing logic to the input and write output to:
+    * An **output topic** in Pulsar
+    * [Apache BookKeeper](#state-storage)
+  * Write logs to a **log topic** (potentially for debugging purposes)
+  * Increment a [counter](#word-count-example)
+
+![Pulsar Functions core programming model](assets/pulsar-functions-overview.png)
+
+You can use Pulsar Functions to set up the following processing chain:
+
+* A Python function listens on the `raw-sentences` topic, "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase), and then publishes the results to a `sanitized-sentences` topic.
+* A Java function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic.
+* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table (a deployment sketch follows this list).
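+
+The following is a hedged sketch of deploying this chain with `pulsar-admin`; the file names, class names, and topics are hypothetical:
+
+```bash
+# Python function: raw-sentences -> sanitized-sentences
+$ bin/pulsar-admin functions create \
+  --py sanitizer.py \
+  --classname sanitizer.SanitizerFunction \
+  --inputs persistent://public/default/raw-sentences \
+  --output persistent://public/default/sanitized-sentences
+
+# Java function: sanitized-sentences -> results
+$ bin/pulsar-admin functions create \
+  --jar word-window-count.jar \
+  --classname org.example.WordWindowCountFunction \
+  --inputs persistent://public/default/sanitized-sentences \
+  --output persistent://public/default/results
+
+# Python function: results -> MySQL (no output topic; writes to the database)
+$ bin/pulsar-admin functions create \
+  --py mysql_writer.py \
+  --classname mysql_writer.MySQLWriterFunction \
+  --inputs persistent://public/default/results
+```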
+
+
+### Word count example
+
+If you implement the classic word count example using Pulsar Functions, it looks something like this:
+
+![Pulsar Functions word count example](assets/pulsar-functions-word-count.png)
+
+Using the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows.
+
+```java
+package org.example.functions;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    // This function is invoked every time a message is published to the input topic
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split(" ")).forEach(word -> {
+            String counterKey = word.toLowerCase();
+            context.incrCounter(counterKey, 1);
+        });
+        return null;
+    }
+}
+```
+
+Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-jar-with-dependencies.jar \
+  --classname org.example.functions.WordCountFunction \
+  --tenant public \
+  --namespace default \
+  --name word-count \
+  --inputs persistent://public/default/sentences \
+  --output persistent://public/default/count
+```
+
+### Content-based routing example
+
+Pulsar Functions can be used in many ways. The following is a more sophisticated example that involves content-based routing.
+
+For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.
+
+![Pulsar Functions routing example](assets/pulsar-functions-routing-example.png)
+
+If you implement this routing functionality in Python, it looks something like this:
+
+```python
+from pulsar import Function
+
+class RoutingFunction(Function):
+    def __init__(self):
+        self.fruits_topic = "persistent://public/default/fruits"
+        self.vegetables_topic = "persistent://public/default/vegetables"
+
+    def is_fruit(self, item):
+        return item in [b"apple", b"orange", b"pear", b"other fruits..."]
+
+    def is_vegetable(self, item):
+        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]
+
+    def process(self, item, context):
+        if self.is_fruit(item):
+            context.publish(self.fruits_topic, item)
+        elif self.is_vegetable(item):
+            context.publish(self.vegetables_topic, item)
+        else:
+            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
+            context.get_logger().warn(warning)
+```
+
+If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py ~/router.py \
+  --classname router.RoutingFunction \
+  --tenant public \
+  --namespace default \
+  --name route-fruit-veg \
+  --inputs persistent://public/default/basket-items
+```
+
+### Functions, messages and message types
+Pulsar Functions take byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in the following ways.
+* [Schema Registry](functions-develop.md#schema-registry)
+* [SerDe](functions-develop.md#serde)
+
+
+## Fully Qualified Function Name (FQFN)
+Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, the namespace, and the function name. An FQFN looks like this:
+
+```http
+tenant/namespace/name
+```
+
+FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.
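+
+For example, two functions named `my-func` can live in different namespaces, and any command that accepts the `--fqfn` flag (such as [`trigger`](reference-pulsar-admin.md#trigger)) can address each one unambiguously. The tenant and namespace names below are hypothetical:
+
+```bash
+$ bin/pulsar-admin functions trigger \
+  --fqfn marketing/asia/my-func \
+  --trigger-value "hello"
+
+$ bin/pulsar-admin functions trigger \
+  --fqfn marketing/europe/my-func \
+  --trigger-value "hello"
+```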
+
+## Supported languages
+Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop.md).
+
+## Processing guarantees
+Pulsar Functions provide three different messaging semantics that you can apply to any function.
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message sent to the function is processed at most once: it is either processed once or not at all (hence the "at most").
+**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
+**Effectively-once** delivery | Each message sent to the function has exactly one output associated with it.
+
+
+### Apply processing guarantees to a function
+You can set the processing guarantees for a Pulsar Function when you create it. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name my-effectively-once-function \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+```
+
+The available options for `--processing-guarantees` are:
+
+* `ATMOST_ONCE`
+* `ATLEAST_ONCE`
+* `EFFECTIVELY_ONCE`
+
+> By default, Pulsar Functions provide at-least-once delivery guarantees. If you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.
+
+### Update the processing guarantees of a function
+You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.
+
+```bash
+$ bin/pulsar-admin functions update \
+  --processing-guarantees ATMOST_ONCE \
+  # Other function configs
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/getting-started-concepts-and-architecture.md b/site2/website/versioned_docs/version-2.7.0/getting-started-concepts-and-architecture.md
new file mode 100644
index 0000000..6933d45
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/getting-started-concepts-and-architecture.md
@@ -0,0 +1,16 @@
+---
+id: version-2.7.0-concepts-architecture
+title: Pulsar concepts and architecture
+sidebar_label: Concepts and architecture
+original_id: concepts-architecture
+---
+
+
+
+
+
+
+
+
+
+
diff --git a/site2/website/versioned_docs/version-2.7.0/getting-started-docker.md b/site2/website/versioned_docs/version-2.7.0/getting-started-docker.md
new file mode 100644
index 0000000..fe4f9a6
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/getting-started-docker.md
@@ -0,0 +1,161 @@
+---
+id: version-2.7.0-standalone-docker
+title: Set up a standalone Pulsar in Docker
+sidebar_label: Run Pulsar in Docker
+original_id: standalone-docker
+---
+
+For local development and testing, you can run Pulsar in standalone
+mode on your own machine within a Docker container.
+
+If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
+and follow the instructions for your OS.
+
+## Start Pulsar in Docker
+
+* For MacOS, Linux, and Windows:
+
+  ```shell
+  $ docker run -it \
+    -p 6650:6650 \
+    -p 8080:8080 \
+    --mount source=pulsardata,target=/pulsar/data \
+    --mount source=pulsarconf,target=/pulsar/conf \
+    apachepulsar/pulsar:{{pulsar:version}} \
+    bin/pulsar standalone
+  ```
+
+A few things to note about this command:
+
+ * The data, metadata, and configuration are persisted on Docker volumes so that the container does not start "fresh" every time it is restarted. For details on the volumes, you can use `docker volume inspect <sourcename>`, as shown below.
+ * For Docker on Windows, make sure to configure Docker to use Linux containers.
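+
+For example, to inspect the two volumes created by the command above:
+
+```shell
+$ docker volume inspect pulsardata
+$ docker volume inspect pulsarconf
+```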
+
+If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```
+2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
+2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+...
+```
+
+> #### Tip
+> 
+> When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar in Docker
+
+Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) 
+and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can
+use one of these root URLs to interact with your cluster:
+
+* `pulsar://localhost:6650`
+* `http://localhost:8080`
+
+The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python.md) client API.
+
+Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+
+```shell
+$ pip install pulsar-client
+```
+
+### Consume a message
+
+Create a consumer and subscribe to the topic:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+consumer = client.subscribe('my-topic',
+                            subscription_name='my-sub')
+
+while True:
+    msg = consumer.receive()
+    print("Received message: '%s'" % msg.data())
+    consumer.acknowledge(msg)
+
+client.close()
+```
+
+### Produce a message
+
+Now start a producer to send some test messages:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+
+client.close()
+```
+
+## Get the topic statistics
+
+In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
+For details on APIs, refer to [Admin API Overview](admin-api-overview.md).
+
+In the simplest example, you can use curl to probe the stats for a particular topic:
+
+```shell
+$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+```
+
+The output is something like this:
+
+```json
+{
+  "averageMsgSize": 0.0,
+  "msgRateIn": 0.0,
+  "msgRateOut": 0.0,
+  "msgThroughputIn": 0.0,
+  "msgThroughputOut": 0.0,
+  "publishers": [
+    {
+      "address": "/172.17.0.1:35048",
+      "averageMsgSize": 0.0,
+      "clientVersion": "1.19.0-incubating",
+      "connectedSince": "2017-08-09 20:59:34.621+0000",
+      "msgRateIn": 0.0,
+      "msgThroughputIn": 0.0,
+      "producerId": 0,
+      "producerName": "standalone-0-1"
+    }
+  ],
+  "replication": {},
+  "storageSize": 16,
+  "subscriptions": {
+    "my-sub": {
+      "blockedSubscriptionOnUnackedMsgs": false,
+      "consumers": [
+        {
+          "address": "/172.17.0.1:35064",
+          "availablePermits": 996,
+          "blockedConsumerOnUnackedMsgs": false,
+          "clientVersion": "1.19.0-incubating",
+          "connectedSince": "2017-08-09 21:05:39.222+0000",
+          "consumerName": "166111",
+          "msgRateOut": 0.0,
+          "msgRateRedeliver": 0.0,
+          "msgThroughputOut": 0.0,
+          "unackedMessages": 0
+        }
+      ],
+      "msgBacklog": 0,
+      "msgRateExpired": 0.0,
+      "msgRateOut": 0.0,
+      "msgRateRedeliver": 0.0,
+      "msgThroughputOut": 0.0,
+      "type": "Exclusive",
+      "unackedMessages": 0
+    }
+  }
+}
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/getting-started-pulsar.md b/site2/website/versioned_docs/version-2.7.0/getting-started-pulsar.md
new file mode 100644
index 0000000..674e90b
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/getting-started-pulsar.md
@@ -0,0 +1,67 @@
+---
+id: version-2.7.0-pulsar-2.0
+title: Pulsar 2.0
+sidebar_label: Pulsar 2.0
+original_id: pulsar-2.0
+---
+
+Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview.md) feature, some terminology changes, and more.
+
+## New features in Pulsar 2.0
+
+Feature | Description
+:-------|:-----------
+[Pulsar Functions](functions-overview.md) | A lightweight compute option for Pulsar
+
+## Major changes
+
+There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
+
+### Properties versus tenants
+
+Previously, Pulsar had a concept of properties. A property is essentially the exact same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.
+
+### Topic names
+
+Prior to version 2.0, *all* Pulsar topics had the following form:
+
+```http
+{persistent|non-persistent}://property/cluster/namespace/topic
+```
+
+Several important changes have been made in Pulsar 2.0:
+
+* There is no longer a [cluster component](#no-cluster-component)
+* Properties have been [renamed to tenants](#properties-versus-tenants)
+* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
+* `/` is no longer allowed in topic names
+
+#### No cluster component
+
+The cluster component has been removed from topic names. Thus, all topic names now have the following form:
+
+```http
+{persistent|non-persistent}://tenant/namespace/topic
+```
+
+> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
+
+
+#### Flexible topic naming
+
+All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component), but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
+
+Topic aspect | Default
+:------------|:-------
+topic type | `persistent`
+tenant | `public`
+namespace | `default`
+
+The table below shows some example topic name translations that use implicit defaults:
+
+Input topic name | Translated topic name
+:----------------|:---------------------
+`my-topic` | `persistent://public/default/my-topic`
+`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
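+
+For example, with a running cluster, the following two commands publish to the same topic:
+
+```bash
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
+$ bin/pulsar-client produce persistent://public/default/my-topic --messages "hello-pulsar"
+```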
+
+> For [non-persistent topics](concepts-messaging.md#non-persistent-topics), you need to continue to specify the entire topic name, as the default-based rules for persistent topic names do not apply. Thus you cannot use a shorthand name like `non-persistent://my-topic`; you need to use `non-persistent://public/default/my-topic` instead.
+
diff --git a/site2/website/versioned_docs/version-2.7.0/getting-started-standalone.md b/site2/website/versioned_docs/version-2.7.0/getting-started-standalone.md
new file mode 100644
index 0000000..274c87c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/getting-started-standalone.md
@@ -0,0 +1,226 @@
+---
+id: version-2.7.0-standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: Run Pulsar locally
+original_id: standalone
+---
+
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker and the necessary ZooKeeper and BookKeeper components, all running inside a single Java Virtual Machine (JVM) process.
+
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
+
+## Install Pulsar standalone
+
+This tutorial guides you through every step of the installation process.
+
+### System requirements
+
+Pulsar is currently available for **MacOS** and **Linux**. To use Pulsar, you need to install Java 8 from the [Oracle download center](http://www.oracle.com/).
+
+> #### Tip
+> By default, Pulsar allocates 2G of JVM heap memory at startup. You can change this in the `conf/pulsar_env.sh` file under `PULSAR_MEM`, which holds extra options passed to the JVM.
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:binary_release_url
+  ```
+
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing a [Pulsar Functions](functions-overview.md) example.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md).
+`logs` | Logs created by the installation.
+
+> #### Tip
+> If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+> 
+> * [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+> * [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+> 
+> Otherwise, skip this step and perform the next step [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be successfully installed without installing builtin connectors and tiered storage offloaders.
+
+### Install builtin connectors (optional)
+
+Since the `2.1.0-incubating` release, Pulsar has provided a separate binary distribution that contains all the `builtin` connectors.
+To enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
+  ```
+
+After you download the NAR file, copy it to the `connectors` directory in the Pulsar directory.
+For example, if you download the `pulsar-io-aerospike-{{pulsar:version}}.nar` connector file, enter the following commands:
+
+```bash
+$ mkdir connectors
+$ mv pulsar-io-aerospike-{{pulsar:version}}.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+...
+```
+
+> #### Note
+>
+> * If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in the Pulsar directory of every broker
+> (or in the Pulsar directory of every function worker, if you run a separate worker cluster for Pulsar Functions).
+> 
+> * If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos.md)),
+> you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+### Install tiered storage offloaders (optional)
+
+> #### Tip
+>
+> Since the `2.2.0` release, Pulsar has provided a separate binary distribution that contains the tiered storage offloaders.
+> To enable the tiered storage feature, follow the instructions below; otherwise skip this section.
+
+To get started with [tiered storage offloaders](concepts-tiered-storage.md), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders {{pulsar:version}} release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:offloader_release_url
+  ```
+
+After you download the tarball, untar the offloaders package and copy the `offloaders` directory into the Pulsar directory:
+
+```bash
+$ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
+
+# you will find a directory named `apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+# then copy the offloaders
+
+$ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-{{pulsar:version}}.nar
+```
+
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage.md).
+
+> #### Note
+>
+> * If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
+> 
+> * If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos.md)),
+> you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
+
+## Start Pulsar standalone
+
+Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
+
+```bash
+$ bin/pulsar standalone
+```
+
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
+```
+
+> #### Tip
+> 
+> * The service is running on your terminal, which is under your direct control. If you need to run other commands, open a new terminal window. You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+> 
+> * By default, there is no encryption, authentication, or authorization configured. Apache Pulsar can be accessed from a remote server without any authorization. See the [Security Overview](security-overview.md) document to secure your deployment.
+>
+> * When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
+
+### Consume a message
+
+The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
+
+```bash
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+```
+
+> #### Tip
+>  
+> Notice that we did not explicitly create the `my-topic` topic from which we consume the message. When you consume from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic for you.
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
+
+```bash
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
+```
+
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
+```
+
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+> #### Tip
+> 
+> If the service runs as a background process using the `pulsar-daemon start standalone` command, then use the `pulsar-daemon stop standalone`  command to stop the service.
+> 
+> For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
diff --git a/site2/website/versioned_docs/version-2.7.0/helm-install.md b/site2/website/versioned_docs/version-2.7.0/helm-install.md
new file mode 100644
index 0000000..a1ec21d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/helm-install.md
@@ -0,0 +1,41 @@
+---
+id: version-2.7.0-helm-install
+title: Install Apache Pulsar using Helm
+sidebar_label: Install
+original_id: helm-install
+---
+
+Install Apache Pulsar on Kubernetes with the official Pulsar Helm chart.
+
+## Requirements
+
+To deploy Apache Pulsar on Kubernetes, the following tools are required; a quick way to verify them is shown below.
+
+- kubectl 1.14 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
+- Helm v3 (3.0.2 or higher)
+- A Kubernetes cluster, version 1.14 or higher
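+
+You can verify the installed tool versions, for example:
+
+```bash
+$ kubectl version --short
+$ helm version --short
+```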
+
+## Environment setup
+
+Before deploying Pulsar, you need to prepare your environment.
+
+### Tools
+
+Install [`helm`](helm-tools.md) and [`kubectl`](helm-tools.md) on your computer.
+
+## Cloud cluster preparation
+
+> #### Note 
+> Kubernetes 1.14 or higher is required.
+
+To create and connect to the Kubernetes cluster, follow the instructions:
+
+- [Google Kubernetes Engine](helm-prepare.md#google-kubernetes-engine)
+
+## Pulsar deployment
+
+Once the environment is set up and configuration is generated, you can now proceed to the [deployment of Pulsar](helm-deploy.md).
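+
+As a preview, a minimal installation sketch looks like the following, assuming the chart is published at `https://pulsar.apache.org/charts`; see the deployment guide for the full set of options:
+
+```bash
+$ helm repo add apache https://pulsar.apache.org/charts
+$ helm repo update
+$ helm install pulsar apache/pulsar
+```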
+
+## Pulsar upgrade
+
+To upgrade an existing Kubernetes installation, follow the [upgrade documentation](helm-upgrade.md).
diff --git a/site2/website/versioned_docs/version-2.7.0/helm-prepare.md b/site2/website/versioned_docs/version-2.7.0/helm-prepare.md
new file mode 100644
index 0000000..fccf2d0
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/helm-prepare.md
@@ -0,0 +1,85 @@
+---
+id: version-2.7.0-helm-prepare
+title: Prepare Kubernetes resources
+sidebar_label: Prepare
+original_id: helm-prepare
+---
+
+Before deploying the Apache Pulsar Helm chart, you need a few resources in place. This page provides instructions for preparing the Kubernetes cluster before deploying the chart.
+
+- [Google Kubernetes Engine](#google-kubernetes-engine)
+  - [Manual cluster creation](#manual-cluster-creation)
+  - [Scripted cluster creation](#scripted-cluster-creation)
+    - [Create cluster with local SSDs](#create-cluster-with-local-ssds)
+- [Next Steps](#next-steps)
+
+## Google Kubernetes Engine
+
+To get started more easily, a script is provided to create the cluster automatically. Alternatively, you can create the cluster manually.
+
+### Manual cluster creation
+
+To provision a Kubernetes cluster manually, follow the [GKE instructions](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster).
+
+Alternatively, you can use the [instructions](#scripted-cluster-creation) below to provision a GKE cluster as needed.
+
+### Scripted cluster creation
+
+A [bootstrap script](https://github.com/streamnative/charts/tree/master/scripts/pulsar/gke_bootstrap_script.sh) has been created to automate much of the setup process for users on GCP/GKE.
+
+The script can:
+
+1. Create a new GKE cluster.
+2. Allow the cluster to modify DNS (Domain Name System) records.
+3. Set up `kubectl` and connect it to the cluster.
+
+Google Cloud SDK is a dependency of this script, so ensure it is [set up correctly](helm-tools.md#connect-to-a-gke-cluster) for the script to work.
+
+The script reads various parameters from environment variables and an argument `up` or `down` for bootstrap and clean-up respectively.
+
+The following table describes all variables.
+
+| **Variable** | **Description** | **Default value** |
+| ------------ | --------------- | ----------------- |
+| PROJECT      | ID of your GCP project | No default value; it must be set. |
+| CLUSTER_NAME | Name of the GKE cluster | `pulsar-dev` |
+| CONFDIR | Configuration directory to store the Kubernetes configuration | `${HOME}/.config/streamnative` |
+| INT_NETWORK | IP space to use within this cluster | `default` |
+| LOCAL_SSD_COUNT | Number of local SSDs | 4 |
+| MACHINE_TYPE | Type of machine to use for nodes | `n1-standard-4` |
+| NUM_NODES | Number of nodes to be created in each of the cluster's zones | 4 |
+| PREEMPTIBLE | Create nodes using preemptible VM instances in the new cluster. | false |
+| REGION | Compute region for the cluster | `us-east1` |
+| USE_LOCAL_SSD | Flag to create a cluster with local SSDs | false |
+| ZONE | Compute zone for the cluster | `us-east1-b` |
+| ZONE_EXTENSION | The extension (`a`, `b`, `c`) of the zone name of the cluster | `b` |
+| EXTRA_CREATE_ARGS | Extra arguments passed to create command | |
+
+Run the script by passing in your desired parameters. All parameters have workable defaults except for `PROJECT`, which is required:
+
+```bash
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh up
+```
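+
+To override the defaults, set the corresponding variables from the table above. For example, the following hypothetical invocation creates a smaller preemptible cluster:
+
+```bash
+PROJECT=<gcloud project id> \
+CLUSTER_NAME=my-pulsar-dev \
+NUM_NODES=3 \
+PREEMPTIBLE=true \
+scripts/pulsar/gke_bootstrap_script.sh up
+```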
+
+The script can also be used to clean up the created GKE resources.
+
+```bash
+PROJECT=<gcloud project id> scripts/pulsar/gke_bootstrap_script.sh down
+```
+
+#### Create cluster with local SSDs
+
+To install a Pulsar Helm chart using local persistent volumes, you need to create a GKE cluster with local SSDs. You can do so by setting `USE_LOCAL_SSD` to `true` in the following command, which creates a Pulsar cluster with local SSDs:
+
+```bash
+PROJECT=<gcloud project id> USE_LOCAL_SSD=true LOCAL_SSD_COUNT=<local-ssd-count> scripts/pulsar/gke_bootstrap_script.sh up
+```
+
+## Next Steps
+
+Continue with the [installation of the chart](helm-deploy.md) once you have the cluster up and running.
diff --git a/site2/website/versioned_docs/version-2.7.0/helm-tools.md b/site2/website/versioned_docs/version-2.7.0/helm-tools.md
new file mode 100644
index 0000000..7be76dd
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/helm-tools.md
@@ -0,0 +1,43 @@
+---
+id: version-2.7.0-helm-tools
+title: Required tools for deploying Pulsar Helm Chart
+sidebar_label: Required Tools
+original_id: helm-tools
+---
+
+Before deploying Pulsar to your Kubernetes cluster, there are some tools you must have installed locally.
+
+## kubectl
+
+kubectl is the tool that talks to the Kubernetes API. kubectl 1.14 or higher is required and it needs to be compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin)).
+
+To install kubectl locally, follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl).
+
+Note that the kubectl server version cannot be obtained until you connect to a cluster.
+
+## Helm
+
+Helm is the package manager for Kubernetes. The Apache Pulsar Helm Chart is tested and supported with Helm v3.
+
+### Get Helm
+
+You can get Helm from the project's [releases page](https://github.com/helm/helm/releases), or follow other options under the official documentation of [installing Helm](https://helm.sh/docs/intro/install/).
+
+### Next steps
+
+Once kubectl and Helm are configured, you can configure your [Kubernetes cluster](helm-prepare.md).
+
+## Additional information
+
+### Templates
+
+Templating in Helm is done through Golang's [text/template](https://golang.org/pkg/text/template/) and [sprig](https://godoc.org/github.com/Masterminds/sprig).
+
+For more information about how all the inner workings behave, check these documents:
+
+- [Functions and Pipelines](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/)
+- [Subcharts and Globals](https://helm.sh/docs/chart_template_guide/subcharts_and_globals/)
+
+### Tips and tricks
+
+For additional information on developing with Helm, see the [tips and tricks section](https://helm.sh/docs/howto/charts_tips_and_tricks/) in the Helm documentation.
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/io-aerospike-sink.md b/site2/website/versioned_docs/version-2.7.0/io-aerospike-sink.md
new file mode 100644
index 0000000..b40764e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-aerospike-sink.md
@@ -0,0 +1,26 @@
+---
+id: version-2.7.0-io-aerospike-sink
+title: Aerospike sink connector
+sidebar_label: Aerospike sink connector
+original_id: io-aerospike-sink
+---
+
+The Aerospike sink connector pulls messages from Pulsar topics and persists them to Aerospike clusters.
+
+## Configuration
+
+The configuration of the Aerospike sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `seedHosts` |String| true | No default value| The comma-separated list of one or more Aerospike cluster hosts.<br><br>Each host can be specified as a valid IP address or hostname followed by an optional port number. | 
+| `keyspace` | String| true |No default value |The Aerospike namespace. |
+| `columnName` | String | true| No default value|The Aerospike column name. |
+|`userName`|String|false|NULL|The Aerospike username.|
+|`password`|String|false|NULL|The Aerospike password.|
+| `keySet` | String|false |NULL | The Aerospike set name. |
+| `maxConcurrentRequests` |int| false | 100 | The maximum number of concurrent Aerospike transactions that a sink can open. |
+| `timeoutMs` | int|false | 100 | This property controls `socketTimeout` and `totalTimeout` for Aerospike transactions.  |
+| `retries` | int|false | 1 |The maximum number of retries before aborting a write transaction to Aerospike. |
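+
+### Example
+
+The following is a hedged sketch of creating the sink with `pulsar-admin`, assuming the connector NAR is in the `connectors` directory and the properties above are stored in a hypothetical `aerospike-sink-config.yaml` file:
+
+```bash
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-aerospike-{{pulsar:version}}.nar \
+  --sink-config-file aerospike-sink-config.yaml \
+  --tenant public \
+  --namespace default \
+  --name aerospike-sink \
+  --inputs persistent://public/default/my-input-topic
+```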
diff --git a/site2/website/versioned_docs/version-2.7.0/io-canal-source.md b/site2/website/versioned_docs/version-2.7.0/io-canal-source.md
new file mode 100644
index 0000000..ea2d763
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-canal-source.md
@@ -0,0 +1,203 @@
+---
+id: version-2.7.0-io-canal-source
+title: Canal source connector
+sidebar_label: Canal source connector
+original_id: io-canal-source
+---
+
+The Canal source connector pulls change data from MySQL and persists it to Pulsar topics.
+
+## Configuration
+
+The configuration of Canal source connector has the following properties.
+
+### Property
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `username` | true | None | The Canal server account (not the MySQL account).|
+| `password` | true | None | The Canal server password (not the MySQL password). |
+|`destination`|true|None|Source destination that Canal source connector connects to.
+| `singleHostname` | false | None | Canal server address.|
+| `singlePort` | false | None | Canal server port.|
+| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.<br/><br/><li>true: **cluster** mode.<br/>If set to true, it talks to `zkServers` to figure out the actual database host.<br/><br/><li>false: **standalone** mode.<br/>If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
+| `zkServers` | true | None | The address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host.|
+| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
+
+### Example
+
+Before using the Canal connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "zkServers": "127.0.0.1:2181",
+        "batchSize": "5120",
+        "destination": "example",
+        "username": "",
+        "password": "",
+        "cluster": false,
+        "singleHostname": "127.0.0.1",
+        "singlePort": "11111",
+    }
+    ```
+
+* YAML
+
+    You can create a YAML file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/resources/canal-mysql-source-config.yaml) below to your YAML file.
+
+    ```yaml
+    configs:
+        zkServers: "127.0.0.1:2181"
+        batchSize: 5120
+        destination: "example"
+        username: ""
+        password: ""
+        cluster: false
+        singleHostname: "127.0.0.1"
+        singlePort: 11111
+    ```
+
+## Usage
+
+Here is an example of storing MySQL data using the configuration file above.
+
+1. Start a MySQL server.
+
+    ```bash
+    $ docker pull mysql:5.7
+    $ docker run -d -it --rm --name pulsar-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=canal -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw mysql:5.7
+    ```
+
+2. Create a configuration file `mysqld.cnf`.
+
+    ```bash
+    [mysqld]
+    pid-file    = /var/run/mysqld/mysqld.pid
+    socket      = /var/run/mysqld/mysqld.sock
+    datadir     = /var/lib/mysql
+    #log-error  = /var/log/mysql/error.log
+    # By default we only accept connections from localhost
+    #bind-address   = 127.0.0.1
+    # Disabling symbolic-links is recommended to prevent assorted security risks
+    symbolic-links=0
+    log-bin=mysql-bin
+    binlog-format=ROW
+    server_id=1
+    ```
+
+3. Copy the configuration file `mysqld.cnf` to MySQL server.
+   
+    ```bash
+    $ docker cp mysqld.cnf pulsar-mysql:/etc/mysql/mysql.conf.d/
+    ```
+
+4.  Restart the MySQL server.
+   
+    ```bash
+    $ docker restart pulsar-mysql
+    ```
+
+5.  Create a test database in MySQL server.
+   
+    ```bash
+    $ docker exec -it pulsar-mysql /bin/bash
+    $ mysql -h 127.0.0.1 -uroot -pcanal -e 'create database test;'
+    ```
+
+6. Start a Canal server and connect to MySQL server.
+
+    ```bash
+    $ docker pull canal/canal-server:v1.1.2
+    $ docker run -d -it --link pulsar-mysql -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=pulsar-mysql:3306 -e canal.instance.dbUsername=root -e canal.instance.dbPassword=canal -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=pulsar-canal-server -p 8000:8000 -p 2222:2222 -p 11111:11111 -p 11112:11112 -m 4096m canal/canal-server:v1.1.2
+    ```
+
+7. Start Pulsar standalone.
+
+    ```bash
+    $ docker pull apachepulsar/pulsar:2.3.0
+    $ docker run -d -it --link pulsar-canal-server -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:2.3.0 bin/pulsar standalone
+    ```
+
+8. Modify the configuration file `canal-mysql-source-config.yaml`.
+
+    ```yaml
+    configs:
+        zkServers: ""
+        batchSize: "5120"
+        destination: "test"
+        username: ""
+        password: ""
+        cluster: false
+        singleHostname: "pulsar-canal-server"
+        singlePort: "11111"
+    ```
+
+9. Create a consumer file `pulsar-client.py`.
+
+    ```python
+    import pulsar
+
+    client = pulsar.Client('pulsar://localhost:6650')
+    consumer = client.subscribe('my-topic',
+                                subscription_name='my-sub')
+
+    while True:
+        msg = consumer.receive()
+        print("Received message: '%s'" % msg.data())
+        consumer.acknowledge(msg)
+
+    client.close()
+    ```
+
+10. Copy the configuration file `canal-mysql-source-config.yaml` and the consumer file  `pulsar-client.py` to Pulsar server.
+
+    ```bash
+    $ docker cp canal-mysql-source-config.yaml pulsar-standalone:/pulsar/conf/
+    $ docker cp pulsar-client.py pulsar-standalone:/pulsar/
+    ```
+
+11. Download a Canal connector and start it.
+    
+    ```bash
+    $ docker exec -it pulsar-standalone /bin/bash
+    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.3.0/connectors/pulsar-io-canal-2.3.0.nar -P connectors
+    $ ./bin/pulsar-admin source localrun \
+    --archive ./connectors/pulsar-io-canal-2.3.0.nar \
+    --classname org.apache.pulsar.io.canal.CanalStringSource \
+    --tenant public \
+    --namespace default \
+    --name canal \
+    --destination-topic-name my-topic \
+    --source-config-file /pulsar/conf/canal-mysql-source-config.yaml \
+    --parallelism 1
+    ```
+
+12. Consume data from MySQL. 
+
+    ```bash
+    $ docker exec -it pulsar-standalone /bin/bash
+    $ python pulsar-client.py
+    ```
+
+13. Open another terminal window and log in to the MySQL server.
+
+    ```bash
+    $ docker exec -it pulsar-mysql /bin/bash
+    $ mysql -h 127.0.0.1 -uroot -pcanal
+    ```
+
+14. Create a table, and insert, delete, and update data in MySQL server.
+    
+    ```bash
+    mysql> use test;
+    mysql> show tables;
+    mysql> CREATE TABLE IF NOT EXISTS `test_table`(`test_id` INT UNSIGNED AUTO_INCREMENT,`test_title` VARCHAR(100) NOT NULL,
+    `test_author` VARCHAR(40) NOT NULL,
+    `test_date` DATE,PRIMARY KEY ( `test_id` ))ENGINE=InnoDB DEFAULT CHARSET=utf8;
+    mysql> INSERT INTO test_table (test_title, test_author, test_date) VALUES("a", "b", NOW());
+    mysql> UPDATE test_table SET test_title='c' WHERE test_title='a';
+    mysql> DELETE FROM test_table WHERE test_title='c';
+    ```
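+
+    To verify the pipeline end to end, watch the consumer window opened in step 12: each insert, update, and delete should appear as a received message. You can also inspect the topic statistics from inside the standalone container; this is only a sketch, and `my-topic` is the destination topic configured in step 11.
+
+    ```bash
+    $ docker exec -it pulsar-standalone /bin/bash
+    $ bin/pulsar-admin topics stats persistent://public/default/my-topic
+    ```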
+
diff --git a/site2/website/versioned_docs/version-2.7.0/io-cassandra-sink.md b/site2/website/versioned_docs/version-2.7.0/io-cassandra-sink.md
new file mode 100644
index 0000000..a1809cc
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-cassandra-sink.md
@@ -0,0 +1,54 @@
+---
+id: version-2.7.0-io-cassandra-sink
+title: Cassandra sink connector
+sidebar_label: Cassandra sink connector
+original_id: io-cassandra-sink
+---
+
+The Cassandra sink connector pulls messages from Pulsar topics and persists the messages to Cassandra clusters.
+
+## Configuration
+
+The configuration of the Cassandra sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `roots` | String|true | " " (empty string) | A comma-separated list of Cassandra hosts to connect to.|
+| `keyspace` | String|true| " " (empty string)| The keyspace used for writing Pulsar messages. <br><br>**Note: `keyspace` should be created prior to a Cassandra sink.**|
+| `keyname` | String|true| " " (empty string)| The key name of the Cassandra column family. <br><br>The column is used for storing Pulsar message keys. <br><br>If a Pulsar message doesn't have any key associated, the message value is used as the key. |
+| `columnFamily` | String|true| " " (empty string)| The Cassandra column family name.<br><br>**Note: `columnFamily` should be created prior to a Cassandra sink.**|
+| `columnName` | String|true| " " (empty string) | The column name of the Cassandra column family.<br><br> The column is used for storing Pulsar message values. |
+
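+As noted above, the keyspace and column family must exist before the sink starts. The following is a minimal sketch of creating them with `cqlsh`, assuming the names used in the example configuration below and simple `text` columns.
+
+```bash
+# Assumptions: a Cassandra node is reachable on localhost, and the names match the example config below.
+$ cqlsh -e "CREATE KEYSPACE IF NOT EXISTS pulsar_test_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
+$ cqlsh -e "CREATE TABLE IF NOT EXISTS pulsar_test_keyspace.pulsar_test_table (key text PRIMARY KEY, col text);"
+```
+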
+### Example
+
+Before using the Cassandra sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON
+
+    ```json
+    {
+        "roots": "localhost:9042",
+        "keyspace": "pulsar_test_keyspace",
+        "columnFamily": "pulsar_test_table",
+        "keyname": "key",
+        "columnName": "col"
+    }
+    ```
+
+* YAML
+  
+    ```yaml
+    configs:
+        roots: "localhost:9042"
+        keyspace: "pulsar_test_keyspace"
+        columnFamily: "pulsar_test_table"
+        keyname: "key"
+        columnName: "col"
+    ```
+
+
+## Usage
+
+For more information about **how to connect Pulsar with Cassandra**, see [here](io-quickstart.md#connect-pulsar-to-apache-cassandra).
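+
+For reference, the following is a sketch of creating the sink with the YAML file above; the file name `cassandra-sink.yml` and the input topic `test_cassandra` are placeholders.
+
+```bash
+$ bin/pulsar-admin sinks create \
+  --archive connectors/pulsar-io-cassandra-{{pulsar:version}}.nar \
+  --sink-config-file cassandra-sink.yml \
+  --tenant public \
+  --namespace default \
+  --name cassandra-test-sink \
+  --inputs test_cassandra
+```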
diff --git a/site2/website/versioned_docs/version-2.7.0/io-cdc-debezium.md b/site2/website/versioned_docs/version-2.7.0/io-cdc-debezium.md
new file mode 100644
index 0000000..ccbc871
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-cdc-debezium.md
@@ -0,0 +1,475 @@
+---
+id: version-2.7.0-io-cdc-debezium
+title: Debezium source connector
+sidebar_label: Debezium source connector
+original_id: io-cdc-debezium
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier, which must be unique within a database cluster and is similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.<br/><br/> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br/><br/>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair should be prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or the MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "3306",
+        "database.user": "debezium",
+        "database.password": "dbz",
+        "database.server.id": "184054",
+        "database.server.name": "dbserver1",
+        "database.whitelist": "inventory",
+        "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+        "database.history.pulsar.topic": "history-topic",
+        "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "offset.storage.topic": "offset-topic"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mysql-source"
+    topicName: "debezium-mysql-topic"
+    archive: "connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mysql, docker image: debezium/example-mysql:0.8
+        database.hostname: "localhost"
+        database.port: "3306"
+        database.user: "debezium"
+        database.password: "dbz"
+        database.server.id: "184054"
+        database.server.name: "dbserver1"
+        database.whitelist: "inventory"
+        database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+        database.history.pulsar.topic: "history-topic"
+        database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+        key.converter: "org.apache.kafka.connect.json.JsonConverter"
+        value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## OFFSET_STORAGE_TOPIC_CONFIG
+        offset.storage.topic: "offset-topic"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysql \
+    -p 3306:3306 \
+    -e MYSQL_ROOT_PASSWORD=debezium \
+    -e MYSQL_USER=mysqluser \
+    -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+     * Use the **JSON** configuration file as shown previously. 
+   
+        Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar \
+        --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","v [...]
+        ```
+
+    * Use the **YAML** configuration file as shown previously.
+  
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --source-config-file debezium-mysql-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MySQL client in docker.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysqlterm \
+    --link mysql \
+    --rm mysql:5.7 sh \
+    -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+    ```
+
+6. The MySQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    mysql> use inventory;
+    mysql> show tables;
+    mysql> SELECT * FROM  products;
+    mysql> UPDATE products SET name='1111111111' WHERE id=101;
+    mysql> UPDATE products SET name='1111111111' WHERE id=107;
+    ```
+
+    In the terminal window of the subscribing topic, you can see that the data changes have been captured in the _sub-products_ topic.
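+
+    You can also confirm that events were published by checking the topic statistics; this is only a sketch, run from the Pulsar directory.
+
+    ```bash
+    $ bin/pulsar-admin topics stats persistent://public/default/dbserver1.inventory.products
+    ```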
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "5432",
+        "database.user": "postgres",
+        "database.password": "postgres",
+        "database.dbname": "postgres",
+        "database.server.name": "dbserver1",
+        "schema.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-postgres-source"
+    topicName: "debezium-postgres-topic"
+    archive: "connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for pg, docker image: debezium/example-postgress:0.8
+        database.hostname: "localhost"
+        database.port: "5432"
+        database.user: "postgres"
+        database.password: "postgres"
+        database.dbname: "postgres"
+        database.server.name: "dbserver1"
+        schema.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-postgres:0.8
+    $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar \
+        --name debezium-postgres-source \
+        --destination-topic-name debezium-postgres-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-postgres-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a PostgreSQL client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-postgresql /bin/bash
+    ```
+
+6. The PostgreSQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    psql -U postgres postgres
+    postgres=# \c postgres;
+    You are now connected to database "postgres" as user "postgres".
+    postgres=# SET search_path TO inventory;
+    SET
+    postgres=# select * from products;
+     id  |        name        |                       description                       | weight
+    -----+--------------------+---------------------------------------------------------+--------
+     102 | car battery        | 12V car battery                                         |    8.1
+     103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+     104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+     105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+     106 | hammer             | 16oz carpenter's hammer                                 |      1
+     107 | rocks              | box of assorted rocks                                   |    5.3
+     108 | jacket             | water resistent black wind breaker                      |    0.1
+     109 | spare tire         | 24 inch spare tire                                      |   22.2
+     101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+    (9 rows)
+    
+    postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+    UPDATE 1
+    ```
+
+    In the terminal window of the subscribing topic, you can receive messages like the following.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products [...]
+    ```
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+* JSON 
+
+    ```json
+    {
+        "mongodb.hosts": "rs0/mongodb:27017",
+        "mongodb.name": "dbserver1",
+        "mongodb.user": "debezium",
+        "mongodb.password": "dbz",
+        "mongodb.task.id": "1",
+        "database.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mongodb-source"
+    topicName: "debezium-mongodb-topic"
+    archive: "connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mongodb, docker image: debezium/example-mongodb:0.10
+        mongodb.hosts: "rs0/mongodb:27017"
+        mongodb.name: "dbserver1"
+        mongodb.user: "debezium"
+        mongodb.password: "dbz"
+        mongodb.task.id: "1"
+        database.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-mongodb:0.10
+    $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+    ```
+     Use the following command to initialize the data (run it inside the container).
+    
+     ```bash
+     /usr/local/bin/init-inventory.sh
+     ```
+     If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID. You can get the container ID by running `docker ps -a`.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar \
+        --name debezium-mongodb-source \
+        --destination-topic-name debezium-mongodb-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-mongodb-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MongoDB client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-mongodb /bin/bash
+    ```
+
+6. The MongoDB client prompt appears.
+   
+    ```bash
+    mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+    db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+    ```
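+
+    To confirm that the update took effect, you can query the document again; this is only a sketch using the same credentials as above.
+
+    ```bash
+    mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory --eval 'db.products.find({"_id": NumberLong(104)})'
+    ```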
+
+    In the terminal window of the subscribing topic, you can receive messages like the following.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type [...]
+    ```
+   
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+``` 
+
+If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration (with a suitable value) to the configuration file:
+
+```
+max.queue.size=
+```
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/io-cdc.md b/site2/website/versioned_docs/version-2.7.0/io-cdc.md
new file mode 100644
index 0000000..ec5fd3c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-cdc.md
@@ -0,0 +1,26 @@
+---
+id: version-2.7.0-io-cdc
+title: CDC connector
+sidebar_label: CDC connector
+original_id: io-cdc
+---
+
+CDC source connectors capture log changes of databases (such as MySQL, MongoDB, and PostgreSQL) into Pulsar.
+
+> CDC source connectors are built on top of [Canal](https://github.com/alibaba/canal) and [Debezium](https://debezium.io/) and store all data into Pulsar cluster in a persistent, replicated, and partitioned way.
+
+Currently, Pulsar has the following CDC connectors.
+
+Name|Java Class
+|---|---
+[Canal source connector](io-canal-source.md)|[org.apache.pulsar.io.canal.CanalStringSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/canal/src/main/java/org/apache/pulsar/io/canal/CanalStringSource.java)
+[Debezium source connector](io-cdc-debezium.md)|<li>[org.apache.pulsar.io.debezium.DebeziumSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/core/src/main/java/org/apache/pulsar/io/debezium/DebeziumSource.java)<br/><li>[org.apache.pulsar.io.debezium.mysql.DebeziumMysqlSource.java](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/java/org/apache/pulsar/io/debezium/mysql/DebeziumMysqlSource.java)<br/><li>[org.apache.pulsar.io.debeziu [...]
+
+For more information about Canal and Debezium, see the following resources.
+
+Subject | Reference
+|---|---
+How to use Canal source connector with MySQL|[Canal guide](https://github.com/alibaba/canal/wiki)
+How does Canal work | [Canal tutorial](https://github.com/alibaba/canal/wiki)
+How to use Debezium source connector with MySQL | [Debezium guide](https://debezium.io/docs/connectors/mysql/)
+How does Debezium work | [Debezium tutorial](https://debezium.io/docs/tutorial/)
diff --git a/site2/website/versioned_docs/version-2.7.0/io-debezium-source.md b/site2/website/versioned_docs/version-2.7.0/io-debezium-source.md
new file mode 100644
index 0000000..1abdece
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-debezium-source.md
@@ -0,0 +1,496 @@
+---
+id: version-2.7.0-io-debezium-source
+title: Debezium source connector
+sidebar_label: Debezium source connector
+original_id: io-debezium-source
+---
+
+The Debezium source connector pulls messages from MySQL, PostgreSQL, or MongoDB
+and persists the messages to Pulsar topics.
+
+## Configuration 
+
+The configuration of Debezium source connector has the following properties.
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `task.class` | true | null | A source task class that is implemented in Debezium. |
+| `database.hostname` | true | null | The address of a database server. |
+| `database.port` | true | null | The port number of a database server.|
+| `database.user` | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | null | The password for a database user that has the required privileges. |
+| `database.server.id` | true | null | The connector’s identifier, which must be unique within a database cluster and is similar to the database’s server-id configuration property. |
+| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
+| `database.whitelist` | false | null | A list of all databases hosted by this server that the connector monitors.<br/><br/> This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | true | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | true | null | The converter provided by Kafka Connect to convert record value.  |
+| `database.history` | true | null | The name of the database history class. |
+| `database.history.pulsar.topic` | true | null | The name of the database history topic where the connector writes and recovers DDL statements. <br/><br/>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | true | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | true | null | Pulsar cluster service URL. |
+| `offset.storage.topic` | true | null | Record the last committed offsets that the connector successfully completes. |
+| `json-with-envelope` | false | false | Whether the message consists of both schema and payload. If `false` (default), the message consists of the payload only. |
+
+### Converter Options
+
+1. org.apache.kafka.connect.json.JsonConverter
+
+    The `json-with-envelope` configuration is valid only for `JsonConverter`. Its default value is `false`; in this case, the consumer uses the schema
+    `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`,
+    and the message consists of the payload only.
+
+    If `json-with-envelope` is set to `true`, the consumer uses the schema
+    `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of both schema and payload.
+
+2. org.apache.pulsar.kafka.shade.io.confluent.connect.avro.AvroConverter
+
+    If you select `AvroConverter`, the Pulsar consumer should use the schema
+    `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`,
+    and the message consists of the payload.
+
+### MongoDB Configuration
+
+| Name | Required | Default | Description |
+|------|----------|---------|-------------|
+| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If mongodb.members.auto.discover is set to false, the host and port pair should be prefixed with the replica set name (e.g., rs0/localhost:27017). |
+| `mongodb.name` | true | null | A unique name that identifies the connector and/or the MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+
+
+
+## Example of MySQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration 
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "3306",
+        "database.user": "debezium",
+        "database.password": "dbz",
+        "database.server.id": "184054",
+        "database.server.name": "dbserver1",
+        "database.whitelist": "inventory",
+        "database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory",
+        "database.history.pulsar.topic": "history-topic",
+        "database.history.pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650",
+        "offset.storage.topic": "offset-topic"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mysql-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mysql/src/main/resources/debezium-mysql-source-config.yaml) below to the `debezium-mysql-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mysql-source"
+    topicName: "debezium-mysql-topic"
+    archive: "connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mysql, docker image: debezium/example-mysql:0.8
+        database.hostname: "localhost"
+        database.port: "3306"
+        database.user: "debezium"
+        database.password: "dbz"
+        database.server.id: "184054"
+        database.server.name: "dbserver1"
+        database.whitelist: "inventory"
+        database.history: "org.apache.pulsar.io.debezium.PulsarDatabaseHistory"
+        database.history.pulsar.topic: "history-topic"
+        database.history.pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## KEY_CONVERTER_CLASS_CONFIG, VALUE_CONVERTER_CLASS_CONFIG
+        key.converter: "org.apache.kafka.connect.json.JsonConverter"
+        value.converter: "org.apache.kafka.connect.json.JsonConverter"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+
+        ## OFFSET_STORAGE_TOPIC_CONFIG
+        offset.storage.topic: "offset-topic"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MySQL table using the Pulsar Debezium connector.
+
+1. Start a MySQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysql \
+    -p 3306:3306 \
+    -e MYSQL_ROOT_PASSWORD=debezium \
+    -e MYSQL_USER=mysqluser \
+    -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+     * Use the **JSON** configuration file as shown previously. 
+   
+        Make sure the nar file is available at `connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mysql-{{pulsar:version}}.nar \
+        --name debezium-mysql-source --destination-topic-name debezium-mysql-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "3306","database.user": "debezium","database.password": "dbz","database.server.id": "184054","database.server.name": "dbserver1","database.whitelist": "inventory","database.history": "org.apache.pulsar.io.debezium.PulsarDatabaseHistory","database.history.pulsar.topic": "history-topic","database.history.pulsar.service.url": "pulsar://127.0.0.1:6650","key.converter": "org.apache.kafka.connect.json.JsonConverter","v [...]
+        ```
+
+    * Use the **YAML** configuration file as shown previously.
+  
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --source-config-file debezium-mysql-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the table _inventory.products_.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MySQL client in docker.
+
+    ```bash
+    $ docker run -it --rm \
+    --name mysqlterm \
+    --link mysql \
+    --rm mysql:5.7 sh \
+    -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+    ```
+
+6. The MySQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    mysql> use inventory;
+    mysql> show tables;
+    mysql> SELECT * FROM  products;
+    mysql> UPDATE products SET name='1111111111' WHERE id=101;
+    mysql> UPDATE products SET name='1111111111' WHERE id=107;
+    ```
+
+    In the terminal window of the subscribing topic, you can see that the data changes have been captured in the _sub-products_ topic.
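+
+    As an optional check, the topic statistics show whether events were published; this is only a sketch, run from the Pulsar directory.
+
+    ```bash
+    $ bin/pulsar-admin topics stats persistent://public/default/dbserver1.inventory.products
+    ```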
+
+## Example of PostgreSQL
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+### Configuration
+
+You can use one of the following methods to create a configuration file.
+
+* JSON 
+
+    ```json
+    {
+        "database.hostname": "localhost",
+        "database.port": "5432",
+        "database.user": "postgres",
+        "database.password": "postgres",
+        "database.dbname": "postgres",
+        "database.server.name": "dbserver1",
+        "schema.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-postgres-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/postgres/src/main/resources/debezium-postgres-source-config.yaml) below to the `debezium-postgres-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-postgres-source"
+    topicName: "debezium-postgres-topic"
+    archive: "connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for pg, docker image: debezium/example-postgress:0.8
+        database.hostname: "localhost"
+        database.port: "5432"
+        database.user: "postgres"
+        database.password: "postgres"
+        database.dbname: "postgres"
+        database.server.name: "dbserver1"
+        schema.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a PostgreSQL table using the Pulsar Debezium connector.
+
+
+1. Start a PostgreSQL server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-postgres:0.8
+    $ docker run -d -it --rm --name pulsar-postgresql -p 5432:5432  debezium/example-postgres:0.8
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-postgres-{{pulsar:version}}.nar \
+        --name debezium-postgres-source \
+        --destination-topic-name debezium-postgres-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"database.hostname": "localhost","database.port": "5432","database.user": "postgres","database.password": "postgres","database.dbname": "postgres","database.server.name": "dbserver1","schema.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-postgres-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a PostgreSQL client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-postgresql /bin/bash
+    ```
+
+6. The PostgreSQL client prompt appears.
+   
+   Use the following commands to change the data of the table _products_.
+
+    ```
+    psql -U postgres postgres
+    postgres=# \c postgres;
+    You are now connected to database "postgres" as user "postgres".
+    postgres=# SET search_path TO inventory;
+    SET
+    postgres=# select * from products;
+     id  |        name        |                       description                       | weight
+    -----+--------------------+---------------------------------------------------------+--------
+     102 | car battery        | 12V car battery                                         |    8.1
+     103 | 12-pack drill bits | 12-pack of drill bits with sizes ranging from #40 to #3 |    0.8
+     104 | hammer             | 12oz carpenter's hammer                                 |   0.75
+     105 | hammer             | 14oz carpenter's hammer                                 |  0.875
+     106 | hammer             | 16oz carpenter's hammer                                 |      1
+     107 | rocks              | box of assorted rocks                                   |    5.3
+     108 | jacket             | water resistent black wind breaker                      |    0.1
+     109 | spare tire         | 24 inch spare tire                                      |   22.2
+     101 | 1111111111         | Small 2-wheel scooter                                   |   3.14
+    (9 rows)
+    
+    postgres=# UPDATE products SET name='1111111111' WHERE id=107;
+    UPDATE 1
+    ```
+
+    In the terminal window of the subscribing topic, you can receive messages like the following.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":107}}�{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"description"},{"type":"double","optional":true,"field":"weight"}],"optional":true,"name":"dbserver1.inventory.products [...]
+    ```
+## Example of MongoDB
+
+You need to create a configuration file before using the Pulsar Debezium connector.
+
+* JSON 
+
+    ```json
+    {
+        "mongodb.hosts": "rs0/mongodb:27017",
+        "mongodb.name": "dbserver1",
+        "mongodb.user": "debezium",
+        "mongodb.password": "dbz",
+        "mongodb.task.id": "1",
+        "database.whitelist": "inventory",
+        "pulsar.service.url": "pulsar://127.0.0.1:6650"
+    }
+    ```
+
+* YAML 
+
+    You can create a `debezium-mongodb-source-config.yaml` file and copy the [contents](https://github.com/apache/pulsar/blob/master/pulsar-io/debezium/mongodb/src/main/resources/debezium-mongodb-source-config.yaml) below to the `debezium-mongodb-source-config.yaml` file.
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "debezium-mongodb-source"
+    topicName: "debezium-mongodb-topic"
+    archive: "connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar"
+    parallelism: 1
+
+    configs:
+
+        ## config for mongodb, docker image: debezium/example-mongodb:0.10
+        mongodb.hosts: "rs0/mongodb:27017"
+        mongodb.name: "dbserver1"
+        mongodb.user: "debezium"
+        mongodb.password: "dbz"
+        mongodb.task.id: "1"
+        database.whitelist: "inventory"
+
+        ## PULSAR_SERVICE_URL_CONFIG
+        pulsar.service.url: "pulsar://127.0.0.1:6650"
+    ```
+
+### Usage
+
+This example shows how to change the data of a MongoDB table using the Pulsar Debezium connector.
+
+
+1. Start a MongoDB server with a database from which Debezium can capture changes.
+
+    ```bash
+    $ docker pull debezium/example-mongodb:0.10
+    $ docker run -d -it --rm --name pulsar-mongodb -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -p 27017:27017  debezium/example-mongodb:0.10
+    ```
+     Use the following command to initialize the data (run it inside the container).
+    
+     ```bash
+     /usr/local/bin/init-inventory.sh
+     ```
+     If the local host cannot access the container network, you can update the `/etc/hosts` file and add a rule such as `127.0.0.1 6f114527a95f`, where `6f114527a95f` is the container ID. You can get the container ID by running `docker ps -a`.
+
+
+2. Start a Pulsar service locally in standalone mode.
+
+    ```bash
+    $ bin/pulsar standalone
+    ```
+
+3. Start the Pulsar Debezium connector in local run mode using one of the following methods.
+
+   * Use the **JSON** configuration file as shown previously. 
+     
+     Make sure the nar file is available at `connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar`.
+
+        ```bash
+        $ bin/pulsar-admin source localrun \
+        --archive connectors/pulsar-io-debezium-mongodb-{{pulsar:version}}.nar \
+        --name debezium-mongodb-source \
+        --destination-topic-name debezium-mongodb-topic \
+        --tenant public \
+        --namespace default \
+        --source-config '{"mongodb.hosts": "rs0/mongodb:27017","mongodb.name": "dbserver1","mongodb.user": "debezium","mongodb.password": "dbz","mongodb.task.id": "1","database.whitelist": "inventory","pulsar.service.url": "pulsar://127.0.0.1:6650"}'
+        ```
+   
+   * Use the **YAML** configuration file as shown previously.
+      
+        ```bash
+        $ bin/pulsar-admin source localrun  \
+        --source-config-file debezium-mongodb-source-config.yaml
+        ```
+
+4. Subscribe to the topic _sub-products_ for the _inventory.products_ table.
+
+    ```bash
+    $ bin/pulsar-client consume -s "sub-products" public/default/dbserver1.inventory.products -n 0
+    ```
+
+5. Start a MongoDB client in docker.
+   
+    ```bash
+    $ docker exec -it pulsar-mongodb /bin/bash
+    ```
+
+6. The MongoDB client prompt appears.
+   
+    ```bash
+    mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory
+    db.products.update({"_id":NumberLong(104)},{$set:{weight:1.25}})
+    ```
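+
+    As an optional check, you can query the document again to confirm the update; this is only a sketch using the same credentials as above.
+
+    ```bash
+    mongo -u debezium -p dbz --authenticationDatabase admin localhost:27017/inventory --eval 'db.products.find({"_id": NumberLong(104)})'
+    ```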
+
+    In the terminal window of the subscribing topic, you can receive messages like the following.
+        
+    ```bash
+    ----- got message -----
+    {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.products.Key"},"payload":{"id":"104"}}, value = {"schema":{"type":"struct","fields":[{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"after"},{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1,"field":"patch"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type [...]
+    ```
+   
+## FAQ
+ 
+### The Debezium PostgreSQL connector hangs when creating a snapshot
+
+```
+#18 prio=5 os_prio=31 tid=0x00007fd83096f800 nid=0xa403 waiting on condition [0x000070000f534000]
+    java.lang.Thread.State: WAITING (parking)
+     at sun.misc.Unsafe.park(Native Method)
+     - parking to wait for  <0x00000007ab025a58> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
+     at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
+     at java.util.concurrent.LinkedBlockingDeque.putLast(LinkedBlockingDeque.java:396)
+     at java.util.concurrent.LinkedBlockingDeque.put(LinkedBlockingDeque.java:649)
+     at io.debezium.connector.base.ChangeEventQueue.enqueue(ChangeEventQueue.java:132)
+     at io.debezium.connector.postgresql.PostgresConnectorTask$$Lambda$203/385424085.accept(Unknown Source)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.sendCurrentRecord(RecordsSnapshotProducer.java:402)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.readTable(RecordsSnapshotProducer.java:321)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$takeSnapshot$6(RecordsSnapshotProducer.java:226)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$240/1347039967.accept(Unknown Source)
+     at io.debezium.jdbc.JdbcConnection.queryWithBlockingConsumer(JdbcConnection.java:535)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.takeSnapshot(RecordsSnapshotProducer.java:224)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.lambda$start$0(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer$$Lambda$206/589332928.run(Unknown Source)
+     at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
+     at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:717)
+     at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2010)
+     at io.debezium.connector.postgresql.RecordsSnapshotProducer.start(RecordsSnapshotProducer.java:87)
+     at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:126)
+     at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
+     at org.apache.pulsar.io.kafka.connect.KafkaConnectSource.open(KafkaConnectSource.java:127)
+     at org.apache.pulsar.io.debezium.DebeziumSource.open(DebeziumSource.java:100)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupInput(JavaInstanceRunnable.java:690)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupJavaInstance(JavaInstanceRunnable.java:200)
+     at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:230)
+     at java.lang.Thread.run(Thread.java:748)
+``` 
+
+If you encounter this problem when synchronizing data, refer to [this issue](https://github.com/apache/pulsar/issues/4075) and add the following configuration (with a suitable value) to the configuration file:
+
+```
+max.queue.size=
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-debug.md b/site2/website/versioned_docs/version-2.7.0/io-debug.md
new file mode 100644
index 0000000..32a4ab9
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-debug.md
@@ -0,0 +1,329 @@
+---
+id: version-2.7.0-io-debug
+title: How to debug Pulsar connectors
+sidebar_label: Debug
+original_id: io-debug
+---
+This guide explains how to debug connectors in localrun mode or cluster mode and provides a debugging checklist.
+To better demonstrate the process, this guide takes a Mongo sink connector as an example.
+
+**Deploy a Mongo sink environment**
+1. Start a Mongo service.
+    ```bash
+    docker pull mongo:4
+    docker run -d -p 27017:27017 --name pulsar-mongo -v $PWD/data:/data/db mongo:4
+    ```
+2. Create a DB and a collection.
+    ```bash
+    docker exec -it pulsar-mongo /bin/bash
+    mongo
+    > use pulsar
+    > db.createCollection('messages')
+    > exit
+    ```
+3. Start Pulsar standalone.
+    ```bash
+    docker pull apachepulsar/pulsar:2.4.0
+    docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --link pulsar-mongo --name pulsar-mongo-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
+    ```
+4. Configure the Mongo sink with the `mongo-sink-config.yaml` file.
+    ```yaml
+    configs:
+      mongoUri: "mongodb://pulsar-mongo:27017"
+      database: "pulsar"
+      collection: "messages"
+      batchSize: 2
+      batchTimeMs: 500
+    ```
+    ```bash
+    docker cp mongo-sink-config.yaml pulsar-mongo-standalone:/pulsar/
+    ```
+5. Download the Mongo sink nar package.
+    ```bash
+    docker exec -it pulsar-mongo-standalone /bin/bash
+    curl -O http://apache.01link.hk/pulsar/pulsar-2.4.0/connectors/pulsar-io-mongo-2.4.0.nar
+    ```
+## Debug in localrun mode
+Start the Mongo sink in localrun mode using the `localrun` command.
+> #### Tip
+> 
+> For more information about the `localrun` command, see [`localrun`](reference-connector-admin.md/#localrun-1).
+```bash
+./bin/pulsar-admin sinks localrun \
+--archive pulsar-io-mongo-2.4.0.nar \
+--tenant public --namespace default \
+--inputs test-mongo \
+--name pulsar-mongo-sink \
+--sink-config-file mongo-sink-config.yaml \
+--parallelism 1
+```
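+
+To generate traffic for the sink while debugging, you can produce a test message to the input topic; this is only a sketch.
+
+```bash
+$ bin/pulsar-client produce test-mongo -m "hello-mongo" -n 1
+```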
+### Use connector log
+Use one of the following methods to get a connector log in localrun mode:
+* After executing the `localrun` command, the **log is automatically printed on the console**.
+* The log is located at:
+  
+    ```bash
+    logs/functions/tenant/namespace/function-name/function-name-instance-id.log
+    ```
+    
+    **Example**
+    
+    The path of the Mongo sink connector is:
+    ```bash
+    logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
+    ```
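+
+    You can follow the log while the connector runs, for example:
+
+    ```bash
+    $ tail -f logs/functions/public/default/pulsar-mongo-sink/pulsar-mongo-sink-0.log
+    ```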
+To explain the log information clearly, the following breaks the large block of information into smaller blocks and adds a description for each block.
+* This piece of log information shows the storage path of the nar package after decompression.
+    ```
+    08:21:54.132 [main] INFO  org.apache.pulsar.common.nar.NarClassLoader - Created class loader with paths: [file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/, file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/,
+    ```
+    > #### Tip
+    >
+    > If a `class cannot be found` exception is thrown, check whether the nar file has been decompressed into the folder `file:/tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/`.
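+    >
+    > A quick way to check, assuming the unpacked path from the log above:
+    >
+    > ```bash
+    > $ ls /tmp/pulsar-nar/pulsar-io-mongo-2.4.0.nar-unpacked/META-INF/bundled-dependencies/
+    > ```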
+* This piece of log information illustrates the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, resources, and so on, which you can use to **check whether the Mongo sink connector is configured correctly**.
+    ```bash
+    08:21:55.390 [main] INFO  org.apache.pulsar.functions.runtime.ThreadRuntime - ThreadContainer starting function with instance config InstanceConfig(instanceId=0, functionId=853d60a1-0c48-44d5-9a5c-6917386476b2, functionVersion=c2ce1458-b69e-4175-88c0-a0a856a2be8c, functionDetails=tenant: "public"
+    namespace: "default"
+    name: "pulsar-mongo-sink"
+    className: "org.apache.pulsar.functions.api.utils.IdentityFunction"
+    autoAck: true
+    parallelism: 1
+    source {
+    typeClassName: "[B"
+    inputSpecs {
+        key: "test-mongo"
+        value {
+        }
+    }
+    cleanupSubscription: true
+    }
+    sink {
+    className: "org.apache.pulsar.io.mongodb.MongoSink"
+    configs: "{\"mongoUri\":\"mongodb://pulsar-mongo:27017\",\"database\":\"pulsar\",\"collection\":\"messages\",\"batchSize\":2,\"batchTimeMs\":500}"
+    typeClassName: "[B"
+    }
+    resources {
+    cpu: 1.0
+    ram: 1073741824
+    disk: 10737418240
+    }
+    componentType: SINK
+    , maxBufferedTuples=1024, functionAuthenticationSpec=null, port=38459, clusterName=local)
+    ```
+* This piece of log information shows the status of the connections to Mongo and the configuration information.
+    ```bash
+    08:21:56.231 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO  org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:8}] to pulsar-mongo:27017
+    08:21:56.326 [cluster-ClusterId{value='5d6396a3c9e77c0569ff00eb', description='null'}-pulsar-mongo:27017] INFO  org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=pulsar-mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 0]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=89058800}
+    ```
+* This piece of log information explains the configuration of consumers and clients, including the topic name, subscription name, subscription type, and so on.
+    ```bash
+    08:21:56.719 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Starting Pulsar consumer status recorder with config: {
+    "topicNames" : [ "test-mongo" ],
+    "topicsPattern" : null,
+    "subscriptionName" : "public/default/pulsar-mongo-sink",
+    "subscriptionType" : "Shared",
+    "receiverQueueSize" : 1000,
+    "acknowledgementsGroupTimeMicros" : 100000,
+    "negativeAckRedeliveryDelayMicros" : 60000000,
+    "maxTotalReceiverQueueSizeAcrossPartitions" : 50000,
+    "consumerName" : null,
+    "ackTimeoutMillis" : 0,
+    "tickDurationMillis" : 1000,
+    "priorityLevel" : 0,
+    "cryptoFailureAction" : "CONSUME",
+    "properties" : {
+        "application" : "pulsar-sink",
+        "id" : "public/default/pulsar-mongo-sink",
+        "instance_id" : "0"
+    },
+    "readCompacted" : false,
+    "subscriptionInitialPosition" : "Latest",
+    "patternAutoDiscoveryPeriod" : 1,
+    "regexSubscriptionMode" : "PersistentOnly",
+    "deadLetterPolicy" : null,
+    "autoUpdatePartitions" : true,
+    "replicateSubscriptionState" : false,
+    "resetIncludeHead" : false
+    }
+    08:21:56.726 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: {
+    "serviceUrl" : "pulsar://localhost:6650",
+    "authPluginClassName" : null,
+    "authParams" : null,
+    "operationTimeoutMs" : 30000,
+    "statsIntervalSeconds" : 60,
+    "numIoThreads" : 1,
+    "numListenerThreads" : 1,
+    "connectionsPerBroker" : 1,
+    "useTcpNoDelay" : true,
+    "useTls" : false,
+    "tlsTrustCertsFilePath" : null,
+    "tlsAllowInsecureConnection" : false,
+    "tlsHostnameVerificationEnable" : false,
+    "concurrentLookupRequest" : 5000,
+    "maxLookupRequest" : 50000,
+    "maxNumberOfRejectedRequestPerConnection" : 50,
+    "keepAliveIntervalSeconds" : 30,
+    "connectionTimeoutMs" : 10000,
+    "requestTimeoutMs" : 60000,
+    "defaultBackoffIntervalNanos" : 100000000,
+    "maxBackoffIntervalNanos" : 30000000000
+    }
+    ```
+## Debug in cluster mode
+You can use the following methods to debug a connector in cluster mode:
+* [Use connector log](#use-connector-log)
+* [Use admin CLI](#use-admin-cli)
+### Use connector log
+In cluster mode, multiple connectors can run on a worker. To find the log path of a specified connector, use the `workerId` to locate the connector log.
+### Use admin CLI
+Pulsar admin CLI helps you debug Pulsar connectors with the following subcommands:
+* [`get`](#get)
+* [`status`](#status)
+* [`topics stats`](#topics-stats)
+
+**Create a Mongo sink**
+```bash
+./bin/pulsar-admin sinks create \
+--archive pulsar-io-mongo-2.4.0.nar \
+--tenant public \
+--namespace default \
+--inputs test-mongo \
+--name pulsar-mongo-sink \
+--sink-config-file mongo-sink-config.yaml \
+--parallelism 1
+```
+### `get`
+Use the `get` command to get the basic information about the Mongo sink connector, such as tenant, namespace, name, parallelism, and so on.
+```bash
+./bin/pulsar-admin sinks get --tenant public --namespace default  --name pulsar-mongo-sink
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "pulsar-mongo-sink",
+  "className": "org.apache.pulsar.io.mongodb.MongoSink",
+  "inputSpecs": {
+    "test-mongo": {
+      "isRegexPattern": false
+    }
+  },
+  "configs": {
+    "mongoUri": "mongodb://pulsar-mongo:27017",
+    "database": "pulsar",
+    "collection": "messages",
+    "batchSize": 2.0,
+    "batchTimeMs": 500.0
+  },
+  "parallelism": 1,
+  "processingGuarantees": "ATLEAST_ONCE",
+  "retainOrdering": false,
+  "autoAck": true
+}
+```
+> #### Tip
+> 
+> For more information about the `get` command, see [`get`](reference-connector-admin.md/#get-1).
+### `status`
+Use the `status` command to get the current status of the Mongo sink connector, such as the number of instances, the number of running instances, `instanceId`, `workerId`, and so on.
+```bash
+./bin/pulsar-admin sinks status \
+--tenant public \
+--namespace default  \
+--name pulsar-mongo-sink
+{
+"numInstances" : 1,
+"numRunning" : 1,
+"instances" : [ {
+    "instanceId" : 0,
+    "status" : {
+    "running" : true,
+    "error" : "",
+    "numRestarts" : 0,
+    "numReadFromPulsar" : 0,
+    "numSystemExceptions" : 0,
+    "latestSystemExceptions" : [ ],
+    "numSinkExceptions" : 0,
+    "latestSinkExceptions" : [ ],
+    "numWrittenToSink" : 0,
+    "lastReceivedTime" : 0,
+    "workerId" : "c-standalone-fw-5d202832fd18-8080"
+    }
+} ]
+}
+```
+> #### Tip
+> 
+> For more information about the `status` command, see [`status`](reference-connector-admin.md/#status-1).
+> 
+> If there are multiple connectors running on a worker, `workerId` can locate the worker on which the specified connector is running.
+### `topics stats`
+Use the `topics stats` command to get the stats for a topic and its connected producers and consumers, such as whether the topic has received messages, whether there is a backlog of messages, the available permits, and other key information. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
+```bash
+./bin/pulsar-admin topics stats test-mongo
+{
+  "msgRateIn" : 0.0,
+  "msgThroughputIn" : 0.0,
+  "msgRateOut" : 0.0,
+  "msgThroughputOut" : 0.0,
+  "averageMsgSize" : 0.0,
+  "storageSize" : 1,
+  "publishers" : [ ],
+  "subscriptions" : {
+    "public/default/pulsar-mongo-sink" : {
+      "msgRateOut" : 0.0,
+      "msgThroughputOut" : 0.0,
+      "msgRateRedeliver" : 0.0,
+      "msgBacklog" : 0,
+      "blockedSubscriptionOnUnackedMsgs" : false,
+      "msgDelayed" : 0,
+      "unackedMessages" : 0,
+      "type" : "Shared",
+      "msgRateExpired" : 0.0,
+      "consumers" : [ {
+        "msgRateOut" : 0.0,
+        "msgThroughputOut" : 0.0,
+        "msgRateRedeliver" : 0.0,
+        "consumerName" : "dffdd",
+        "availablePermits" : 999,
+        "unackedMessages" : 0,
+        "blockedConsumerOnUnackedMsgs" : false,
+        "metadata" : {
+          "instance_id" : "0",
+          "application" : "pulsar-sink",
+          "id" : "public/default/pulsar-mongo-sink"
+        },
+        "connectedSince" : "2019-08-26T08:48:07.582Z",
+        "clientVersion" : "2.4.0",
+        "address" : "/172.17.0.3:57790"
+      } ],
+      "isReplicated" : false
+    }
+  },
+  "replication" : { },
+  "deduplicationStatus" : "Disabled"
+}
+```
+> #### Tip
+> 
+> For more information about the `topics stats` command, see [`topics stats`](http://pulsar.apache.org/docs/en/pulsar-admin/#stats-1).
+## Checklist
+This checklist indicates the major areas to check when you debug connectors. It serves both as a reminder of what to look for during a thorough review and as an evaluation tool to assess the status of connectors. 
+* Does Pulsar start successfully?
+* Does the external service run normally?
+* Is the nar package complete?
+* Is the connector configuration file correct?
+* In localrun mode, run a connector and check the printed information (connector log) on the console.
+* In cluster mode:
+   * Use the `get` command to get the basic information.
+   * Use the `status` command to get the current status.
+   * Use the `topics stats` command to get the stats for a specified topic and its connected producers and consumers.
+   * Check the connector log.
+* Log in to the external system and verify the result.
diff --git a/site2/website/versioned_docs/version-2.7.0/io-develop.md b/site2/website/versioned_docs/version-2.7.0/io-develop.md
new file mode 100644
index 0000000..f4abc96
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-develop.md
@@ -0,0 +1,240 @@
+---
+id: version-2.7.0-io-develop
+title: How to develop Pulsar connectors
+sidebar_label: Develop
+original_id: io-develop
+---
+
+This guide describes how to develop Pulsar connectors to move data
+between Pulsar and other systems. 
+
+Pulsar connectors are special [Pulsar Functions](functions-overview.md), so creating
+a Pulsar connector is similar to creating a Pulsar function. 
+
+Pulsar connectors come in two types: 
+
+| Type | Description | Example
+|---|---|---
+{@inject: github:`Source`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}|Import data from another system to Pulsar.|[RabbitMQ source connector](io-rabbitmq.md) imports the messages of a RabbitMQ queue to a Pulsar topic.
+{@inject: github:`Sink`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java}|Export data from Pulsar to another system.|[Kinesis sink connector](io-kinesis.md) exports the messages of a Pulsar topic to a Kinesis stream.
+
+## Develop
+
+You can develop Pulsar source connectors and sink connectors.
+
+### Source
+
+To develop a source connector, you implement the {@inject: github:`Source`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java}
+interface, which means you need to implement the {@inject: github:`open`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method and the {@inject: github:`read`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
+
+1. Implement the {@inject: github:`open`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method. 
+
+    ```java
+    /**
+    * Open connector with configuration
+    *
+    * @param config initialization config
+    * @param sourceContext
+    * @throws Exception IO type exceptions when opening a connector
+    */
+    void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
+    ```
+
+    This method is called when the source connector is initialized. 
+
+    In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources. 
+    
+    For example, a Kafka connector can create a Kafka client in this `open` method.
+
+    In addition, the Pulsar runtime provides a `SourceContext` that the 
+    connector can use to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
+
+2. Implement the {@inject: github:`read`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Source.java} method.
+
+    ```java
+        /**
+        * Reads the next message from source.
+        * If source does not have any new messages, this call should block.
+        * @return next message from source.  The return result should never be null
+        * @throws Exception
+        */
+        Record<T> read() throws Exception;
+    ```
+
+    If there is nothing to return, the implementation should block rather than return `null`. A minimal example is sketched at the end of this section.
+
+    The returned {@inject: github:`Record`:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should encapsulate the following information, which is needed by Pulsar IO runtime. 
+
+    * {@inject: github:`Record`:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following variables:
+
+      |Variable|Required|Description
+      |---|---|---
+      `TopicName`|No|The Pulsar topic from which the record originates.
+      `Key`|No| Messages can optionally be tagged with keys.<br/><br/>For more information, see [Routing modes](concepts-messaging.md#routing-modes).|
+      `Value`|Yes|Actual data of the record.
+      `EventTime`|No|Event time of the record from the source.
+      `PartitionId`|No| If the record originates from a partitioned source, this returns its `PartitionId`. <br/><br/>`PartitionId` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve exactly-once processing guarantee.
+      `RecordSequence`|No|If the record originates from a sequential source, this returns its `RecordSequence`.<br/><br/>`RecordSequence` is used as a part of the unique identifier by Pulsar IO runtime to deduplicate messages and achieve exactly-once processing guarantee.
+      `Properties` |No| If the record carries user-defined properties, it returns those properties.
+      `DestinationTopic`|No|Topic to which message should be written.
+      `Message`|No|A class which carries data sent by users.<br/><br/>For more information, see [Message.java](https://github.com/apache/pulsar/blob/master/pulsar-client-api/src/main/java/org/apache/pulsar/client/api/Message.java).|
+
+     * {@inject: github:`Record`:/pulsar-functions/api-java/src/main/java/org/apache/pulsar/functions/api/Record.java} should provide the following methods:
+
+        Method|Description
+        |---|---
+        `ack` |Acknowledge that the record is fully processed.
+        `fail`|Indicate that the record fails to be processed.
+
+> #### Tip
+>
+> For more information about **how to create a source connector**, see {@inject: github:`KafkaSource`:/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java}.
+
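+The following is a minimal sketch of a source that emits an incrementing counter, tying the `open` and `read` steps together. Only `Source`, `SourceContext`, and `Record` come from the Pulsar API; the `CounterSource` class and its behavior are hypothetical and shown for illustration only.
+
+```java
+import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.io.core.Source;
+import org.apache.pulsar.io.core.SourceContext;
+
+public class CounterSource implements Source<byte[]> {
+
+    private AtomicLong counter;
+
+    @Override
+    public void open(Map<String, Object> config, SourceContext sourceContext) throws Exception {
+        // Read connector-specific settings from the passed-in config and
+        // initialize any resources, for example a client for the external system.
+        counter = new AtomicLong(0);
+    }
+
+    @Override
+    public Record<byte[]> read() throws Exception {
+        // Block until a new message is available; never return null.
+        Thread.sleep(50);
+        long value = counter.incrementAndGet();
+        return new Record<byte[]>() {
+            @Override
+            public Optional<String> getKey() {
+                // Records may optionally carry a key.
+                return Optional.empty();
+            }
+
+            @Override
+            public byte[] getValue() {
+                return Long.toString(value).getBytes();
+            }
+
+            @Override
+            public void ack() {
+                // Called by the runtime once the record is fully processed.
+            }
+        };
+    }
+
+    @Override
+    public void close() throws Exception {
+        // Release any resources created in open().
+    }
+}
+```
+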
+### Sink
+
+Developing a sink connector **is similar to** developing a source connector, that is, you need to implement the {@inject: github:`Sink`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} interface, which means implementing the {@inject: github:`open`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method and the {@inject: github:`write`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
+
+1. Implement the {@inject: github:`open`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
+
+    ```java
+        /**
+        * Open connector with configuration
+        *
+        * @param config initialization config
+        * @param sinkContext
+        * @throws Exception IO type exceptions when opening a connector
+        */
+        void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
+    ```
+
+2. Implement the {@inject: github:`write`:/pulsar-io/core/src/main/java/org/apache/pulsar/io/core/Sink.java} method.
+
+    ```java
+        /**
+        * Write a message to Sink
+        * @param record record to write to sink
+        * @throws Exception
+        */
+        void write(Record<T> record) throws Exception;
+    ```
+
+    During the implementation, you can decide how to write the `Value` and
+    the `Key` to the external system, and leverage all the provided information such as
+    `PartitionId` and `RecordSequence` to achieve different processing guarantees. 
+
+    You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send), as the sketch below shows. 
+
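+As an illustration, below is a minimal sketch of a sink that writes record values to standard output and acks or fails each record accordingly. Only `Sink`, `SinkContext`, and `Record` come from the Pulsar API; the `StdoutSink` class is hypothetical.
+
+```java
+import java.util.Map;
+
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.io.core.Sink;
+import org.apache.pulsar.io.core.SinkContext;
+
+public class StdoutSink implements Sink<byte[]> {
+
+    @Override
+    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
+        // Initialize the connection to the external system here.
+    }
+
+    @Override
+    public void write(Record<byte[]> record) throws Exception {
+        try {
+            // Write the value to the external system (standard output here).
+            System.out.println(new String(record.getValue()));
+            // Acknowledge after the record is written successfully.
+            record.ack();
+        } catch (Exception e) {
+            // Negatively acknowledge so the record can be redelivered.
+            record.fail();
+            throw e;
+        }
+    }
+
+    @Override
+    public void close() throws Exception {
+        // Release resources created in open().
+    }
+}
+```
+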
+## Test
+
+Testing connectors can be challenging because Pulsar IO connectors interact with two systems
+that may be difficult to mock—Pulsar and the system to which the connector is connecting. 
+
+It is
+recommended to write dedicated tests for the connector functionality, as described below,
+while mocking the external service. 
+
+### Unit test
+
+You can create unit tests that call your connector's methods directly while mocking any external dependencies, for example as in the sketch below.
+
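+The sketch below reuses the hypothetical `StdoutSink` from the previous section and uses JUnit 4 and Mockito, which are illustrative choices rather than requirements.
+
+```java
+import java.util.Collections;
+
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.io.core.SinkContext;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+public class StdoutSinkTest {
+
+    @Test
+    @SuppressWarnings("unchecked")
+    public void testWriteAcksRecord() throws Exception {
+        StdoutSink sink = new StdoutSink();
+        // The context is mocked, so no running Pulsar cluster is needed.
+        sink.open(Collections.emptyMap(), Mockito.mock(SinkContext.class));
+
+        Record<byte[]> record = Mockito.mock(Record.class);
+        Mockito.when(record.getValue()).thenReturn("hello".getBytes());
+
+        sink.write(record);
+
+        // On success the record should be acknowledged exactly once.
+        Mockito.verify(record, Mockito.times(1)).ack();
+        sink.close();
+    }
+}
+```
+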
+### Integration test
+
+Once you have written sufficient unit tests, you can add
+separate integration tests to verify end-to-end functionality. 
+
+Pulsar uses
+[testcontainers](https://www.testcontainers.org/) **for all integration tests**. 
+
+> #### Tip
+>
+>For more information about **how to create integration tests for Pulsar connectors**, see {@inject: github:`IntegrationTests`:/tests/integration/src/test/java/org/apache/pulsar/tests/integration/io}.
+
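+As a rough sketch of this approach, the test below starts a disposable Pulsar broker with the testcontainers `PulsarContainer` module and verifies basic produce/consume connectivity. A real connector test would additionally start the connector and assert on the data arriving in the external system; the image tag, topic, and subscription names are assumptions for illustration.
+
+```java
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.junit.Assert;
+import org.junit.Test;
+import org.testcontainers.containers.PulsarContainer;
+import org.testcontainers.utility.DockerImageName;
+
+public class PulsarConnectivityIT {
+
+    @Test
+    public void testProduceAndConsume() throws Exception {
+        // Start a throwaway Pulsar broker in Docker for the duration of the test.
+        try (PulsarContainer pulsar = new PulsarContainer(
+                DockerImageName.parse("apachepulsar/pulsar:2.7.0"))) {
+            pulsar.start();
+
+            try (PulsarClient client = PulsarClient.builder()
+                    .serviceUrl(pulsar.getPulsarBrokerUrl())
+                    .build();
+                 Consumer<byte[]> consumer = client.newConsumer()
+                         .topic("it-topic")
+                         .subscriptionName("it-sub")
+                         .subscribe();
+                 Producer<byte[]> producer = client.newProducer()
+                         .topic("it-topic")
+                         .create()) {
+
+                producer.send("hello".getBytes());
+
+                // A connector test would deploy the connector here and assert
+                // on the external system instead of a plain consumer.
+                Assert.assertEquals("hello",
+                        new String(consumer.receive().getValue()));
+            }
+        }
+    }
+}
+```
+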
+## Package
+
+Once you've developed and tested your connector, you need to package it so that it can be submitted
+to a [Pulsar Functions](functions-overview.md) cluster. 
+
+There are two ways to package a connector for the
+Pulsar Functions runtime: [NAR](#nar) and [uber JAR](#uber-jar).
+
+> #### Note
+> 
+> If you plan to package and distribute your connector for others to use, you are obligated to
+license and copyright your own code properly. Remember to add the license and copyright to
+all libraries your code uses and to your distribution. 
+>
+> If you use the [NAR](#nar) method, the NAR plugin 
+automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper
+licensing and copyrights of all libraries of your connector.
+
+### NAR 
+
+**NAR** stands for NiFi Archive. It is a custom packaging mechanism used by Apache NiFi
+to provide a degree of Java ClassLoader isolation. 
+
+> #### Tip
+> 
+> For more information about **how NAR works**, see
+> [here](https://medium.com/hashmapinc/nifi-nar-files-explained-14113f7796fd). 
+
+Pulsar uses the same mechanism for packaging **all** [built-in connectors](io-connectors.md). 
+
+The easiest approach to package a Pulsar connector is to create a NAR package using
+[nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin).
+
+Include the [nifi-nar-maven-plugin](https://mvnrepository.com/artifact/org.apache.nifi/nifi-nar-maven-plugin) in the Maven project for your connector, as shown below. 
+
+```xml
+<plugins>
+  <plugin>
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-nar-maven-plugin</artifactId>
+    <version>1.2.0</version>
+  </plugin>
+</plugins>
+```
+
+You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:
+
+```yaml
+name: connector name
+description: connector description
+sourceClass: fully qualified class name (only if source connector)
+sinkClass: fully qualified class name (only if sink connector)
+```
+
+If you are using the [Gradle NiFi plugin](https://github.com/sponiro/gradle-nar-plugin) you might need to create a directive to ensure your pulsar-io.yaml is [copied into the NAR file correctly](https://github.com/sponiro/gradle-nar-plugin/issues/5).
+
+> #### Tip
+> 
+> For more information about **how to use NAR for Pulsar connectors**, see {@inject: github:`TwitterFirehose`:/pulsar-io/twitter/pom.xml}.
+
+### Uber JAR
+
+An alternative approach is to create an **uber JAR** that contains all of the connector's JAR files
+and other resource files. No internal directory structure is necessary.
+
+You can use [maven-shade-plugin](https://maven.apache.org/plugins/maven-shade-plugin/examples/includes-excludes.html) to create an uber JAR, as shown below:
+
+```xml
+<plugin>
+  <groupId>org.apache.maven.plugins</groupId>
+  <artifactId>maven-shade-plugin</artifactId>
+  <version>3.1.1</version>
+  <executions>
+    <execution>
+      <phase>package</phase>
+      <goals>
+        <goal>shade</goal>
+      </goals>
+      <configuration>
+        <filters>
+          <filter>
+            <artifact>*:*</artifact>
+          </filter>
+        </filters>
+      </configuration>
+    </execution>
+  </executions>
+</plugin>
+```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-dynamodb-source.md b/site2/website/versioned_docs/version-2.7.0/io-dynamodb-source.md
new file mode 100644
index 0000000..e958890
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-dynamodb-source.md
@@ -0,0 +1,76 @@
+---
+id: version-2.7.0-io-dynamodb-source
+title: AWS DynamoDB source connector
+sidebar_label: AWS DynamoDB source connector
+original_id: io-dynamodb-source
+---
+
+The DynamoDB source connector pulls data from DynamoDB table streams and persists data into Pulsar.
+
+This connector uses the [DynamoDB Streams Kinesis Adapter](https://github.com/awslabs/dynamodb-streams-kinesis-adapter),
+which uses the [Kinesis Client Library](https://github.com/awslabs/amazon-kinesis-client) (KCL) to do the actual
+consuming of messages. The KCL uses DynamoDB to track state for consumers and requires CloudWatch access to log metrics.
+
+
+## Configuration
+
+The configuration of the DynamoDB source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`initialPositionInStream`|InitialPositionInStream|false|LATEST|The position where the connector starts from.<br/><br/>Below are the available options:<br/><br/><li>`AT_TIMESTAMP`: start from the record at or after the specified timestamp.<br/><br/><li>`LATEST`: start after the most recent data record.<br/><br/><li>`TRIM_HORIZON`: start from the oldest available data record.
+`startAtTime`|Date|false|" " (empty string)|If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time to start consumption.
+`applicationName`|String|false|Pulsar IO connector|The name of the KCL application.  Must be unique, as it is used to define the table name for the DynamoDB table used for state tracking. <br/><br/>By default, the application name is included in the user agent string used to make AWS requests. This can assist with troubleshooting, for example, distinguishing requests made by separate connector instances.
+`checkpointInterval`|long|false|60000|The frequency of the KCL checkpoint in milliseconds.
+`backoffTime`|long|false|3000|The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds.
+`numRetries`|int|false|3|The number of re-attempts when the connector encounters an exception while trying to set a checkpoint.
+`receiveQueueSize`|int|false|1000|The maximum number of AWS records that can be buffered inside the connector. <br/><br/>Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed.
+`dynamoEndpoint`|String|false|" " (empty string)|The DynamoDB endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`cloudwatchEndpoint`|String|false|" " (empty string)|The CloudWatch endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsEndpoint`|String|false|" " (empty string)|The DynamoDB Streams endpoint URL, which can be found [here](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+`awsRegion`|String|false|" " (empty string)|The AWS region. <br/><br/>**Example**<br/> us-west-1, us-west-2
+`awsDynamodbStreamArn`|String|true|" " (empty string)|The DynamoDB stream ARN.
+`awsCredentialPluginName`|String|false|" " (empty string)|The fully-qualified class name of implementation of {@inject: github:`AwsCredentialProviderPlugin`:/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java}.<br><br>`awsCredentialProviderPlugin` has the following built-in plugs:<br><br><li>`org.apache.pulsar.io.kinesis.AwsDefaultProviderChainPlugin`:<br> this plugin uses the default AWS provider chain.<br>For more information, see [using the default c [...]
+`awsCredentialPluginParam`|String |false|" " (empty string)|The JSON parameter to initialize `awsCredentialsProviderPlugin`.
+
+### Example
+
+Before using the DynamoDB source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "awsEndpoint": "https://some.endpoint.aws",
+        "awsRegion": "us-east-1",
+        "awsDynamodbStreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291",
+        "awsCredentialPluginParam": "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}",
+        "applicationName": "My test application",
+        "checkpointInterval": "30000",
+        "backoffTime": "4000",
+        "numRetries": "3",
+        "receiveQueueSize": 2000,
+        "initialPositionInStream": "TRIM_HORIZON",
+        "startAtTime": "2019-03-05T19:28:58.000Z"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        awsEndpoint: "https://some.endpoint.aws"
+        awsRegion: "us-east-1"
+        awsDynamodbStreamArn: "arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291"
+        awsCredentialPluginParam: "{\"accessKey\":\"myKey\",\"secretKey\":\"my-Secret\"}"
+        applicationName: "My test application"
+        checkpointInterval: 30000
+        backoffTime: 4000
+        numRetries: 3
+        receiveQueueSize: 2000
+        initialPositionInStream: "TRIM_HORIZON"
+        startAtTime: "2019-03-05T19:28:58.000Z"
+    ```
+
diff --git a/site2/website/versioned_docs/version-2.7.0/io-elasticsearch-sink.md b/site2/website/versioned_docs/version-2.7.0/io-elasticsearch-sink.md
new file mode 100644
index 0000000..c67e7cb
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-elasticsearch-sink.md
@@ -0,0 +1,140 @@
+---
+id: version-2.7.0-io-elasticsearch-sink
+title: ElasticSearch sink connector
+sidebar_label: ElasticSearch sink connector
+original_id: io-elasticsearch-sink
+---
+
+The ElasticSearch sink connector pulls messages from Pulsar topics and persists the messages to Elasticsearch indexes.
+
+## Configuration
+
+The configuration of the ElasticSearch sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `elasticSearchUrl` | String| true |" " (empty string)| The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String| true |" " (empty string)| The index name to which the connector writes messages. |
+| `typeName` | String | false | "_doc" | The type name to which the connector writes messages. <br><br> The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
+| `indexNumberOfShards` | int| false |1| The number of shards of the index. |
+| `indexNumberOfReplicas` | int| false |1 | The number of replicas of the index. |
+| `username` | String| false |" " (empty string)| The username used by the connector to connect to the Elasticsearch cluster. <br><br>If `username` is set, then `password` should also be provided. |
+| `password` | String| false | " " (empty string)|The password used by the connector to connect to the Elasticsearch cluster. <br><br>If `username` is set, then `password` should also be provided.  |
+
+## Example
+
+Before using the ElasticSearch sink connector, you need to create a configuration file through one of the following methods.
+
+### Configuration
+
+#### For Elasticsearch After 6.2
+
+* JSON 
+
+    ```json
+    {
+        "elasticSearchUrl": "http://localhost:9200",
+        "indexName": "my_index",
+        "username": "scooby",
+        "password": "doobie"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        elasticSearchUrl: "http://localhost:9200"
+        indexName: "my_index"
+        username: "scooby"
+        password: "doobie"
+    ```
+
+#### For Elasticsearch Before 6.2
+
+* JSON 
+
+    ```json
+    {
+        "elasticSearchUrl": "http://localhost:9200",
+        "indexName": "my_index",
+        "typeName": "doc",
+        "username": "scooby",
+        "password": "doobie"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        elasticSearchUrl: "http://localhost:9200"
+        indexName: "my_index"
+        typeName: "doc"
+        username: "scooby"
+        password: "doobie"
+    ```
+
+### Usage
+
+1. Start a single node Elasticsearch cluster.
+
+    ```bash
+    $ docker run -p 9200:9200 -p 9300:9300 \
+        -e "discovery.type=single-node" \
+        docker.elastic.co/elasticsearch/elasticsearch:7.5.1
+    ```
+
+2. Start a Pulsar service locally in standalone mode.
+    ```bash
+    $ bin/pulsar standalone
+    ```
+    Make sure the nar file is available at `connectors/pulsar-io-elastic-search-{{pulsar:version}}.nar`.
+
+3. Start the Pulsar Elasticsearch connector in local run mode using one of the following methods.
+    * Use the **JSON** configuration as shown previously. 
+        ```bash
+        $ bin/pulsar-admin sinks localrun \
+            --archive connectors/pulsar-io-elastic-search-{{pulsar:version}}.nar \
+            --tenant public \
+            --namespace default \
+            --name elasticsearch-test-sink \
+            --sink-type elastic_search \
+            --sink-config '{"elasticSearchUrl":"http://localhost:9200","indexName": "my_index","username": "scooby","password": "doobie"}' \
+            --inputs elasticsearch_test
+        ```
+    * Use the **YAML** configuration file as shown previously.
+    
+        ```bash
+        $ bin/pulsar-admin sinks localrun \
+            --archive connectors/pulsar-io-elastic-search-{{pulsar:version}}.nar \
+            --tenant public \
+            --namespace default \
+            --name elasticsearch-test-sink \
+            --sink-type elastic_search \
+            --sink-config-file elasticsearch-sink.yml \
+            --inputs elasticsearch_test
+        ```
+
+4. Publish records to the topic.
+
+    ```bash
+    $ bin/pulsar-client produce elasticsearch_test --messages "{\"a\":1}"
+    ```
+
+5. Check documents in Elasticsearch.
+    
+    * Refresh the index.
+        ```bash
+        $ curl -s http://localhost:9200/my_index/_refresh
+        ```
+    * Search documents.
+        ```bash
+        $ curl -s http://localhost:9200/my_index/_search
+        ```
+        You can see that the record published earlier has been successfully written into Elasticsearch.
+        ```json
+        {"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"my_index","_type":"_doc","_id":"FSxemm8BLjG_iC0EeTYJ","_score":1.0,"_source":{"a":1}}]}}
+        ```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-file-source.md b/site2/website/versioned_docs/version-2.7.0/io-file-source.md
new file mode 100644
index 0000000..6fe7f04
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-file-source.md
@@ -0,0 +1,138 @@
+---
+id: version-2.7.0-io-file-source
+title: File source connector
+sidebar_label: File source connector
+original_id: io-file-source
+---
+
+The File source connector pulls messages from files in directories and persists the messages to Pulsar topics.
+
+## Configuration
+
+The configuration of the File source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `inputDirectory` | String|true  | No default value|The input directory from which to pull files. |
+| `recurse` | Boolean|false | true | Whether to pull files from subdirectory or not.|
+| `keepFile` |Boolean|false | false | If set to true, the file is not deleted after it is processed, which means the file can be picked up continually. |
+| `fileFilter` | String|false| [^\\.].* | The file whose name matches the given regular expression is picked up. |
+| `pathFilter` | String |false | NULL | If `recurse` is set to true, the subdirectory whose path matches the given regular expression is scanned. |
+| `minimumFileAge` | Integer|false | 0 | The minimum age for a file to be processed. <br><br>Any file younger than `minimumFileAge` (according to the last modification date) is ignored. |
+| `maximumFileAge` | Long|false |Long.MAX_VALUE | The maximum age for a file to be processed. <br><br>Any file older than `maximumFileAge` (according to last modification date) is ignored. |
+| `minimumSize` |Integer| false |1 | The minimum size (in bytes) for a file to be processed. |
+| `maximumSize` | Double|false |Double.MAX_VALUE| The maximum size (in bytes) for a file to be processed. |
+| `ignoreHiddenFiles` |Boolean| false | true| Whether the hidden files should be ignored or not. |
+| `pollingInterval`|Long | false | 10000L | Indicates how long to wait before performing a directory listing. |
+| `numWorkers` | Integer | false | 1 | The number of worker threads that process files.<br><br> This allows you to process a larger number of files concurrently. <br><br>However, setting this to a value greater than 1 makes the data from multiple files mixed in the target topic. |
+
+### Example
+
+Before using the File source connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "inputDirectory": "/Users/david",
+        "recurse": true,
+        "keepFile": true,
+        "fileFilter": "[^\\.].*",
+        "pathFilter": "*",
+        "minimumFileAge": 0,
+        "maximumFileAge": 9999999999,
+        "minimumSize": 1,
+        "maximumSize": 5000000,
+        "ignoreHiddenFiles": true,
+        "pollingInterval": 5000,
+        "numWorkers": 1
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        inputDirectory: "/Users/david"
+        recurse: true
+        keepFile: true
+        fileFilter: "[^\\.].*"
+        pathFilter: "*"
+        minimumFileAge: 0
+        maximumFileAge: 9999999999
+        minimumSize: 1
+        maximumSize: 5000000
+        ignoreHiddenFiles: true
+        pollingInterval: 5000
+        numWorkers: 1
+    ```
+
+## Usage
+
+Here is an example of using the File source connector.
+
+1. Pull a Pulsar image.
+
+    ```bash
+    $ docker pull apachepulsar/pulsar:{version}
+    ```
+
+2. Start Pulsar standalone.
+   
+    ```bash
+    $ docker run -d -it -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-standalone apachepulsar/pulsar:{version} bin/pulsar standalone
+    ```
+
+3. Create a configuration file _file-connector.yaml_.
+
+    ```yaml
+    configs:
+        inputDirectory: "/opt"
+    ```
+
+4. Copy the configuration file _file-connector.yaml_ to the container.
+
+    ```bash
+    $ docker cp connectors/file-connector.yaml pulsar-standalone:/pulsar/
+    ```
+
+5. Download the File source connector.
+
+    ```bash
+    $ curl -O https://mirrors.tuna.tsinghua.edu.cn/apache/pulsar/pulsar-{version}/connectors/pulsar-io-file-{version}.nar
+    ```
+
+6. Start the File source connector.
+
+    ```bash
+    $ docker exec -it pulsar-standalone /bin/bash
+
+    $ ./bin/pulsar-admin sources localrun \
+    --archive /pulsar/pulsar-io-file-{version}.nar \
+    --name file-test \
+    --destination-topic-name  pulsar-file-test \
+    --source-config-file /pulsar/file-connector.yaml
+    ```
+
+7. Start a consumer.
+
+    ```bash
+    ./bin/pulsar-client consume -s file-test -n 0 pulsar-file-test
+    ```
+
+8. Write the message to the file _test.txt_.
+   
+    ```bash
+    echo "hello world!" > /opt/test.txt
+    ```
+
+    The following information appears on the consumer terminal window.
+
+    ```bash
+    ----- got message -----
+    hello world!
+    ```
+
+    
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/io-flume-sink.md b/site2/website/versioned_docs/version-2.7.0/io-flume-sink.md
new file mode 100644
index 0000000..7cd38ec
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-flume-sink.md
@@ -0,0 +1,52 @@
+---
+id: version-2.7.0-io-flume-sink
+title: Flume sink connector
+sidebar_label: Flume sink connector
+original_id: io-flume-sink
+---
+
+The Flume sink connector pulls messages from Pulsar topics and persists them to logs.
+
+## Configuration
+
+The configuration of the Flume sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume sink connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `sink.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/sink.conf).
+
+* JSON 
+
+    ```json
+    {
+        "name": "a1",
+        "confFile": "sink.conf",
+        "noReloadConf": "false",
+        "zkConnString": "",
+        "zkBasePath": ""
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        name: a1
+        confFile: sink.conf
+        noReloadConf: false
+        zkConnString: ""
+        zkBasePath: ""
+    ```
+
diff --git a/site2/website/versioned_docs/version-2.7.0/io-flume-source.md b/site2/website/versioned_docs/version-2.7.0/io-flume-source.md
new file mode 100644
index 0000000..e50cc38
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-flume-source.md
@@ -0,0 +1,52 @@
+---
+id: version-2.7.0-io-flume-source
+title: Flume source connector
+sidebar_label: Flume source connector
+original_id: io-flume-source
+---
+
+The Flume source connector pulls messages from logs and persists them to Pulsar topics.
+
+## Configuration
+
+The configuration of the Flume source connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+`name`|String|true|"" (empty string)|The name of the agent.
+`confFile`|String|true|"" (empty string)|The configuration file.
+`noReloadConf`|Boolean|false|false|Whether to reload configuration file if changed.
+`zkConnString`|String|true|"" (empty string)|The ZooKeeper connection.
+`zkBasePath`|String|true|"" (empty string)|The base path in ZooKeeper for agent configuration.
+
+### Example
+
+Before using the Flume source connector, you need to create a configuration file through one of the following methods.
+
+> For more information about the `source.conf` in the example below, see [here](https://github.com/apache/pulsar/blob/master/pulsar-io/flume/src/main/resources/flume/source.conf).
+
+* JSON 
+
+    ```json
+    {
+        "name": "a1",
+        "confFile": "source.conf",
+        "noReloadConf": "false",
+        "zkConnString": "",
+        "zkBasePath": ""
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        name: a1
+        confFile: source.conf
+        noReloadConf: false
+        zkConnString: ""
+        zkBasePath: ""
+    ```
+
diff --git a/site2/website/versioned_docs/version-2.7.0/io-hbase-sink.md b/site2/website/versioned_docs/version-2.7.0/io-hbase-sink.md
new file mode 100644
index 0000000..ede35147
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-hbase-sink.md
@@ -0,0 +1,64 @@
+---
+id: version-2.7.0-io-hbase-sink
+title: HBase sink connector
+sidebar_label: HBase sink connector
+original_id: io-hbase-sink
+---
+
+The HBase sink connector pulls the messages from Pulsar topics 
+and persists the messages to HBase tables.
+
+## Configuration
+
+The configuration of the HBase sink connector has the following properties.
+
+### Property
+
+| Name | Type|Default | Required | Description |
+|------|---------|----------|-------------|---
+| `hbaseConfigResources` | String|None | false | HBase system configuration `hbase-site.xml` file. |
+| `zookeeperQuorum` | String|None | true | The `hbase.zookeeper.quorum` value of the HBase system configuration. |
+| `zookeeperClientPort` | String|2181 | false | The `hbase.zookeeper.property.clientPort` value of the HBase system configuration. |
+| `zookeeperZnodeParent` | String|/hbase | false | The `zookeeper.znode.parent` value of the HBase system configuration. |
+| `tableName` | String | None | true | HBase table, the value is `namespace:tableName`. |
+| `rowKeyName` | String|None | true | HBase table rowkey name. |
+| `familyName` | String|None | true | HBase table column family name. |
+| `qualifierNames` |String| None | true | HBase table column qualifier names. |
+| `batchTimeMs` | Long|1000l| false | HBase table operation timeout in milliseconds. |
+| `batchSize` | int|200| false | Batch size of updates made to the HBase table. |
+
+### Example
+
+Before using the HBase sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "hbaseConfigResources": "hbase-site.xml",
+        "zookeeperQuorum": "localhost",
+        "zookeeperClientPort": "2181",
+        "zookeeperZnodeParent": "/hbase",
+        "tableName": "pulsar_hbase",
+        "rowKeyName": "rowKey",
+        "familyName": "info",
+        "qualifierNames": [ 'name', 'address', 'age']
+    }
+    ```
+
+
+* YAML
+
+    ```yaml
+    configs:
+        hbaseConfigResources: "hbase-site.xml"
+        zookeeperQuorum: "localhost"
+        zookeeperClientPort: "2181"
+        zookeeperZnodeParent: "/hbase"
+        tableName: "pulsar_hbase"
+        rowKeyName: "rowKey"
+        familyName: "info"
+        qualifierNames: [ 'name', 'address', 'age']
+    ```
+
+    
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/io-hdfs3-sink.md b/site2/website/versioned_docs/version-2.7.0/io-hdfs3-sink.md
new file mode 100644
index 0000000..a65d6c8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-hdfs3-sink.md
@@ -0,0 +1,54 @@
+---
+id: version-2.7.0-io-hdfs3-sink
+title: HDFS3 sink connector
+sidebar_label: HDFS3 sink connector
+original_id: io-hdfs3-sink
+---
+
+The HDFS3 sink connector pulls the messages from Pulsar topics 
+and persists the messages to HDFS files.
+
+## Configuration
+
+The configuration of the HDFS3 sink connector has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `hdfsConfigResources` | String|true| None | A file or a comma-separated list containing the Hadoop file system configuration.<br/><br/>**Example**<br/>'core-site.xml'<br/>'hdfs-site.xml' |
+| `directory` | String | true | None|The HDFS directory from which files are read or to which files are written. |
+| `encoding` | String |false |None |The character encoding for the files.<br/><br/>**Example**<br/>UTF-8<br/>ASCII |
+| `compression` | Compression |false |None |The compression codec used to compress or de-compress the files on HDFS. <br/><br/>Below are the available options:<br/><li>BZIP2<br/><li>DEFLATE<br/><li>GZIP<br/><li>LZ4<br/><li>SNAPPY|
+| `kerberosUserPrincipal` |String| false| None|The principal account of Kerberos user used for authentication. |
+| `keytab` | String|false|None| The full pathname of the Kerberos keytab file used for authentication. |
+| `filenamePrefix` |String| false |None |The prefix of the files created inside the HDFS directory.<br/><br/>**Example**<br/> A value of topicA results in files named topicA-. |
+| `fileExtension` | String| false | None| The extension added to the files written to HDFS.<br/><br/>**Example**<br/>'.txt'<br/> '.seq' |
+| `separator` | char|false |None |The character used to separate records in a text file. <br/><br/>If no value is provided, the contents from all records are concatenated together in one continuous byte array. |
+| `syncInterval` | long| false |0| The interval between calls to flush data to HDFS disk in milliseconds. |
+| `maxPendingRecords` |int| false|Integer.MAX_VALUE |  The maximum number of records held in memory before acking. <br/><br/>Setting this property to 1 causes every record to be sent to disk before the record is acked.<br/><br/>Setting this property to a higher value allows buffering records before flushing them to disk. 
+
+### Example
+
+Before using the HDFS3 sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "hdfsConfigResources": "core-site.xml",
+        "directory": "/foo/bar",
+        "filenamePrefix": "prefix",
+        "compression": "SNAPPY"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        hdfsConfigResources: "core-site.xml"
+        directory: "/foo/bar"
+        filenamePrefix: "prefix"
+        compression: "SNAPPY"
+    ```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-influxdb-sink.md b/site2/website/versioned_docs/version-2.7.0/io-influxdb-sink.md
new file mode 100644
index 0000000..a23ee21
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-influxdb-sink.md
@@ -0,0 +1,108 @@
+---
+id: version-2.7.0-io-influxdb-sink
+title: InfluxDB sink connector
+sidebar_label: InfluxDB sink connector
+original_id: io-influxdb-sink
+---
+
+The InfluxDB sink connector pulls messages from Pulsar topics 
+and persists the messages to InfluxDB.
+
+The InfluxDB sink provides different configurations for InfluxDB v1 and v2.
+
+## Configuration
+
+The configuration of the InfluxDB sink connector has the following properties.
+
+### Property
+#### InfluxDBv2
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
+| `organization` | String| true|" " (empty string)  | The InfluxDB organization to write to. |
+| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |
+| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB. <br><br>Below are the available options:<li>ns<br><li>us<br><li>ms<br><li>s|
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br><br>Below are the available options:<li>NONE<br><li>BASIC<br><li>HEADERS<br><li>FULL|
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+#### InfluxDBv1
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
+| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
+| `password` | String| false|" " (empty string)  | The password used to authenticate to InfluxDB. |
+| `database` |String| true | " " (empty string)| The InfluxDB database to which messages are written. |
+| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB. <br><br>Below are the available options:<li>ALL<br><li> ANY<br><li>ONE<br><li>QUORUM |
+| `logLevel` | String|false| NONE|The log level for InfluxDB request and response. <br><br>Below are the available options:<li>NONE<br><li>BASIC<br><li>HEADERS<br><li>FULL|
+| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
+| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
+| `batchTimeMs` |long|false| 1000L |   The InfluxDB operation time in milliseconds. |
+| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+
+### Example
+Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
+#### InfluxDBv2
+* JSON
+    ```json
+    {
+        "influxdbUrl": "http://localhost:9999",
+        "organization": "example-org",
+        "bucket": "example-bucket",
+        "token": "xxxx",
+        "precision": "ns",
+        "logLevel": "NONE",
+        "gzipEnable": false,
+        "batchTimeMs": 1000,
+        "batchSize": 100
+    }
+    ```
+  
+* YAML
+    ```yaml
+    configs:
+        influxdbUrl: "http://localhost:9999"
+        organization: "example-org"
+        bucket: "example-bucket"
+        token: "xxxx"
+        precision: "ns"
+        logLevel: "NONE"
+        gzipEnable: false
+        batchTimeMs: 1000
+        batchSize: 100
+    ```
+  
+#### InfluxDBv1
+
+* JSON 
+
+    ```json
+    {
+        "influxdbUrl": "http://localhost:8086",
+        "database": "test_db",
+        "consistencyLevel": "ONE",
+        "logLevel": "NONE",
+        "retentionPolicy": "autogen",
+        "gzipEnable": false,
+        "batchTimeMs": 1000,
+        "batchSize": 100
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        influxdbUrl: "http://localhost:8086"
+        database: "test_db"
+        consistencyLevel: "ONE"
+        logLevel: "NONE"
+        retentionPolicy: "autogen"
+        gzipEnable: false
+        batchTimeMs: 1000
+        batchSize: 100
+    ```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-jdbc-sink.md b/site2/website/versioned_docs/version-2.7.0/io-jdbc-sink.md
new file mode 100644
index 0000000..b3d7b07
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-jdbc-sink.md
@@ -0,0 +1,140 @@
+---
+id: version-2.7.0-io-jdbc-sink
+title: JDBC sink connector
+sidebar_label: JDBC sink connector
+original_id: io-jdbc-sink
+---
+
+The JDBC sink connectors allow pulling messages from Pulsar topics 
+and persisting the messages to ClickHouse, MariaDB, PostgreSQL, or SQLite.
+
+> Currently, INSERT, DELETE and UPDATE operations are supported.
+
+## Configuration 
+
+The configuration of all JDBC sink connectors has the following properties.
+
+### Property
+
+| Name | Type|Required | Default | Description 
+|------|----------|----------|---------|-------------|
+| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.<br><br>**Note: `userName` is case-sensitive.**|
+| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`. <br><br>**Note: `password` is case-sensitive.**|
+| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey` | String|false | " " (empty string) | A comma-separated list of the fields used in updating events.  |
+| `key` | String|false | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events. |
+| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+
+### Example for ClickHouse
+
+* JSON 
+
+    ```json
+    {
+        "userName": "clickhouse",
+        "password": "password",
+        "jdbcUrl": "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink",
+        "tableName": "pulsar_clickhouse_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-clickhouse-sink"
+    topicName: "persistent://public/default/jdbc-clickhouse-topic"
+    sinkType: "jdbc-clickhouse"    
+    configs:
+        userName: "clickhouse"
+        password: "password"
+        jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
+        tableName: "pulsar_clickhouse_jdbc_sink"
+    ```
+
+### Example for MariaDB
+
+* JSON 
+
+    ```json
+    {
+        "userName": "mariadb",
+        "password": "password",
+        "jdbcUrl": "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink",
+        "tableName": "pulsar_mariadb_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-mariadb-sink"
+    topicName: "persistent://public/default/jdbc-mariadb-topic"
+    sinkType: "jdbc-mariadb"    
+    configs:
+        userName: "mariadb"
+        password: "password"
+        jdbcUrl: "jdbc:mariadb://localhost:3306/pulsar_mariadb_jdbc_sink"
+        tableName: "pulsar_mariadb_jdbc_sink"
+    ```
+
+### Example for PostgreSQL
+
+Before using the JDBC PostgreSQL sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "userName": "postgres",
+        "password": "password",
+        "jdbcUrl": "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink",
+        "tableName": "pulsar_postgres_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-postgres-sink"
+    topicName: "persistent://public/default/jdbc-postgres-topic"
+    sinkType: "jdbc-postgres"    
+    configs:
+        userName: "postgres"
+        password: "password"
+        jdbcUrl: "jdbc:postgresql://localhost:5432/pulsar_postgres_jdbc_sink"
+        tableName: "pulsar_postgres_jdbc_sink"
+    ```
+
+For more information on **how to use this JDBC sink connector**, see [connect Pulsar to PostgreSQL](io-quickstart.md#connect-pulsar-to-postgresql).
+
+### Example for SQLite
+
+* JSON 
+
+    ```json
+    {
+        "jdbcUrl": "jdbc:sqlite:db.sqlite",
+        "tableName": "pulsar_sqlite_jdbc_sink"
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    tenant: "public"
+    namespace: "default"
+    name: "jdbc-sqlite-sink"
+    topicName: "persistent://public/default/jdbc-sqlite-topic"
+    sinkType: "jdbc-sqlite"    
+    configs:
+        jdbcUrl: "jdbc:sqlite:db.sqlite"
+        tableName: "pulsar_sqlite_jdbc_sink"
+    ```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-kafka-sink.md b/site2/website/versioned_docs/version-2.7.0/io-kafka-sink.md
new file mode 100644
index 0000000..bb7e7b2
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-kafka-sink.md
@@ -0,0 +1,69 @@
+---
+id: version-2.7.0-io-kafka-sink
+title: Kafka sink connector
+sidebar_label: Kafka sink connector
+original_id: io-kafka-sink
+---
+
+The Kafka sink connector pulls messages from Pulsar topics and persists the messages
+to Kafka topics.
+
+This guide explains how to configure and use the Kafka sink connector.
+
+## Configuration
+
+The configuration of the Kafka sink connector has the following parameters.
+
+### Property
+
+| Name | Type| Required | Default | Description 
+|------|----------|---------|-------------|-------------|
+|  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br/>This controls the durability of the sent records.
+|`batchSize`|long|false|16384L|The batch size (in bytes) that a Kafka producer attempts to batch records together before sending them to brokers.
+|`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
+|`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
+| `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
+| `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br/><br/>The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br/><br/>**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+
+
+### Example
+
+Before using the Kafka sink connector, you need to create a configuration file through one of the following methods.
+
+* JSON 
+
+    ```json
+    {
+        "bootstrapServers": "localhost:6667",
+        "topic": "test",
+        "acks": "1",
+        "batchSize": "16384",
+        "maxRequestSize": "1048576",
+        "producerConfigProperties":
+         {
+            "client.id": "test-pulsar-producer",
+            "security.protocol": "SASL_PLAINTEXT",
+            "sasl.mechanism": "GSSAPI",
+            "sasl.kerberos.service.name": "kafka",
+            "acks": "all" 
+         }
+    }
+    ```
+
+* YAML
+  
+    ```yaml
+    configs:
+        bootstrapServers: "localhost:6667"
+        topic: "test"
+        acks: "1"
+        batchSize: "16384"
+        maxRequestSize: "1048576"
+        producerConfigProperties:
+            client.id: "test-pulsar-producer"
+            security.protocol: "SASL_PLAINTEXT"
+            sasl.mechanism: "GSSAPI"
+            sasl.kerberos.service.name: "kafka"
+            acks: "all"   
+    ```
diff --git a/site2/website/versioned_docs/version-2.7.0/io-kafka-source.md b/site2/website/versioned_docs/version-2.7.0/io-kafka-source.md
new file mode 100644
index 0000000..88998f3
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-kafka-source.md
@@ -0,0 +1,171 @@
+---
+id: version-2.7.0-io-kafka-source
+title: Kafka source connector
+sidebar_label: Kafka source connector
+original_id: io-kafka-source
+---
+
+The Kafka source connector pulls messages from Kafka topics and persists the messages
+to Pulsar topics.
+
+This guide explains how to configure and use the Kafka source connector.
+
+## Configuration
+
+The configuration of the Kafka source connector has the following properties.
+
+### Property
+
+| Name | Type | Required | Default | Description |
+|------|------|----------|---------|-------------|
+| `bootstrapServers` | String | true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+| `groupId` | String | true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
+| `fetchMinBytes` | long | false | 1 | The minimum amount of data, in bytes, that the server should return for a fetch request. |
+| `autoCommitEnabled` | boolean | false | true | If set to true, the consumer's offset is periodically committed in the background.<br/><br/>If the process fails, the committed offset is used as the position from which a new consumer resumes. |
+| `autoCommitIntervalMs` | long | false | 5000 | The frequency, in milliseconds, at which consumer offsets are auto-committed to Kafka when `autoCommitEnabled` is set to true. |
+| `heartbeatIntervalMs` | long | false | 3000 | The interval between heartbeats to the consumer coordinator when using Kafka's group management facilities. <br/><br/>**Note: `heartbeatIntervalMs` must be smaller than `sessionTimeoutMs`**. |
+| `sessionTimeoutMs` | long | false | 30000 | The timeout used to detect consumer failures when using Kafka's group management facilities. |
+| `topic` | String | true | " " (empty string) | The Kafka topic from which messages are pulled into Pulsar. |
+| `consumerConfigProperties` | Map | false | " " (empty string) | The consumer configuration properties to be passed to consumers. <br/><br/>**Note: other properties specified in the connector configuration file take precedence over this configuration**. See the sketch after this table. |
+| `keyDeserializationClass` | String | false | org.apache.kafka.common.serialization.StringDeserializer | The deserializer class for Kafka consumers to deserialize keys.<br/> The deserializer is set by a specific implementation of [`KafkaAbstractSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSource.java). |
+| `valueDeserializationClass` | String | false | org.apache.kafka.common.serialization.ByteArrayDeserializer | The deserializer class for Kafka consumers to deserialize values. |
+
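+For example, `consumerConfigProperties` can pass Kafka consumer options straight through to the underlying consumer. A minimal sketch (the keys and values below are illustrative, and a top-level option such as `sessionTimeoutMs` overrides any equivalent key set here):
+
+```yaml
+configs:
+    bootstrapServers: "localhost:9092"
+    groupId: "test-pulsar-io"
+    topic: "my-topic"
+    consumerConfigProperties:
+        # Passed directly to the Kafka consumer; top-level options win on conflict.
+        client.id: "test-pulsar-consumer"
+        max.poll.records: "500"
+```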
+
+### Example
+
+Before using the Kafka source connector, you need to create a configuration file in one of the following formats.
+
+* JSON 
+
+    ```json
+    {
+        "bootstrapServers": "pulsar-kafka:9092",
+        "groupId": "test-pulsar-io",
+        "topic": "my-topic",
+        "sessionTimeoutMs": "10000",
+        "autoCommitEnabled": false
+    }
+    ```
+
+* YAML
+
+    ```yaml
+    configs:
+        bootstrapServers: "pulsar-kafka:9092"
+        groupId: "test-pulsar-io"
+        topic: "my-topic"
+        sessionTimeoutMs: "10000"
+        autoCommitEnabled: false
+    ```
+
+## Usage
+
+Here is an example of using the Kafka source connector with the configuration file shown previously.
+
+1. Download a Kafka client and a Kafka connector.
+
+    ```bash
+    $ wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.2.1/kafka-clients-0.10.2.1.jar
+
+    $ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/connectors/pulsar-io-kafka-2.4.0.nar
+    ```
+
+2. Create a network.
+   
+   ```bash
+   $ docker network create kafka-pulsar
+   ```
+
+3. Pull a ZooKeeper image and start ZooKeeper.
+   
+   ```bash
+   $ docker pull wurstmeister/zookeeper
+
+   $ docker run -d -it -p 2181:2181 --name pulsar-kafka-zookeeper --network kafka-pulsar wurstmeister/zookeeper
+   ```
+
+4. Pull a Kafka image and start Kafka.
+   
+   ```bash
+   $ docker pull wurstmeister/kafka:2.11-1.0.2
+   
+   $ docker run -d -it --network kafka-pulsar -p 6667:6667 -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=pulsar-kafka -e KAFKA_ZOOKEEPER_CONNECT=pulsar-kafka-zookeeper:2181 --name pulsar-kafka wurstmeister/kafka:2.11-1.0.2
+   ```
+
+5. Pull a Pulsar image and start Pulsar standalone.
+   
+   ```bash
+   $ docker pull apachepulsar/pulsar:2.4.0
+   
+   $ docker run -d -it --network kafka-pulsar -p 6650:6650 -p 8080:8080 -v $PWD/data:/pulsar/data --name pulsar-kafka-standalone apachepulsar/pulsar:2.4.0 bin/pulsar standalone
+   ```
+
+6. Create a producer file _kafka-producer.py_.
+   
+   ```python
+   from kafka import KafkaProducer
+   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
+   future = producer.send('my-topic', b'hello world')
+   future.get()
+   ```
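+
+   If you also want to exercise the source's key deserializer, kafka-python lets you attach a key to each record. A small variation on the producer above (the key bytes are arbitrary):
+
+   ```python
+   from kafka import KafkaProducer
+
+   # Same broker as above; the key is handled by keyDeserializationClass on the source side.
+   producer = KafkaProducer(bootstrap_servers='pulsar-kafka:9092')
+   future = producer.send('my-topic', key=b'my-key', value=b'hello world')
+   future.get()
+   ```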
+
+7. Create a consumer file _pulsar-client.py_.
+
+    ```python
+    import pulsar
+
+    client = pulsar.Client('pulsar://localhost:6650')
+    consumer = client.subscribe('my-topic', subscription_name='my-aa')
+
+    while True:
+        msg = consumer.receive()
+        print("Received message: '%s'" % msg.data())
+        consumer.acknowledge(msg)
+
+    client.close()
+    ```
+
+8. Copy the following files into the Pulsar container.
+   
+    ```bash
+    $ docker cp pulsar-io-kafka-2.4.0.nar pulsar-kafka-standalone:/pulsar
+    $ docker cp kafkaSourceConfig.yaml pulsar-kafka-standalone:/pulsar/conf
+    $ docker cp kafka-clients-0.10.2.1.jar pulsar-kafka-standalone:/pulsar/lib
+    $ docker cp pulsar-client.py pulsar-kafka-standalone:/pulsar/
+    $ docker cp kafka-producer.py pulsar-kafka-standalone:/pulsar/
+    ```
+
+9. Open a new terminal window and start the Kafka source connector in local run mode. (A cluster-mode `sources create` variant is sketched after these steps.)
+
+    ```bash
+    $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+    $ ./bin/pulsar-admin source localrun \
+    --archive ./pulsar-io-kafka-2.4.0.nar \
+    --classname org.apache.pulsar.io.kafka.KafkaBytesSource \
+    --tenant public \
+    --namespace default \
+    --name kafka \
+    --destination-topic-name my-topic \
+    --source-config-file ./conf/kafkaSourceConfig.yaml \
+    --parallelism 1
+    ```
+
+10. Open a new terminal window and start the consumer.
+
+    ```bash
+    $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+    $ python3 pulsar-client.py
+    ```
+
+11. Open another terminal window and run the producer to send a test message.
+
+    ```bash
+    $ docker exec -it pulsar-kafka-standalone /bin/bash
+
+    $ pip install kafka-python
+
+    $ python3 kafka-producer.py
+    ```
+
+    The following information appears on the consumer terminal window.
+
+    ```bash
+    Received message: 'hello world'
+    ```
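+
+`localrun` ties the connector to your terminal session. To run it as a managed connector on the Pulsar cluster instead, you can submit it with `sources create` — a sketch reusing the same archive and configuration file as step 9:
+
+```bash
+$ ./bin/pulsar-admin sources create \
+--archive ./pulsar-io-kafka-2.4.0.nar \
+--classname org.apache.pulsar.io.kafka.KafkaBytesSource \
+--tenant public \
+--namespace default \
+--name kafka \
+--destination-topic-name my-topic \
+--source-config-file ./conf/kafkaSourceConfig.yaml \
+--parallelism 1
+```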
+
diff --git a/site2/website/versioned_docs/version-2.7.0/io-kinesis-sink.md b/site2/website/versioned_docs/version-2.7.0/io-kinesis-sink.md
new file mode 100644
index 0000000..c2f68fb
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.7.0/io-kinesis-sink.md
@@ -0,0 +1,73 @@
+---
... 6267 lines suppressed ...