Posted to commits@karaf.apache.org by jb...@apache.org on 2019/10/10 13:04:56 UTC

[karaf] branch karaf-4.2.x updated: Some grammatical and spelling fixes for the manual

This is an automated email from the ASF dual-hosted git repository.

jbonofre pushed a commit to branch karaf-4.2.x
in repository https://gitbox.apache.org/repos/asf/karaf.git


The following commit(s) were added to refs/heads/karaf-4.2.x by this push:
     new 8118bea  Some grammatical and spelling fixes for the manual
8118bea is described below

commit 8118beab8186c2127858eba25d2dec8e5c850f71
Author: Colm O hEigeartaigh <co...@apache.org>
AuthorDate: Fri Jun 14 13:25:25 2019 +0100

    Some grammatical and spelling fixes for the manual
---
 .../asciidoc/developer-guide/creating-bundles.adoc |  4 +-
 .../main/asciidoc/developer-guide/debugging.adoc   |  6 +--
 .../developer-guide/developer-commands.adoc        | 12 +++---
 .../main/asciidoc/developer-guide/extending.adoc   |  2 +-
 .../developer-guide/github-contributions.adoc      |  2 +-
 .../developer-guide/karaf-maven-plugin.adoc        | 12 +++---
 .../main/asciidoc/developer-guide/scripting.adoc   | 12 +++---
 .../developer-guide/security-framework.adoc        |  8 ++--
 manual/src/main/asciidoc/index.adoc                | 30 +++++++--------
 manual/src/main/asciidoc/overview.adoc             |  6 +--
 manual/src/main/asciidoc/quick-start.adoc          |  8 ++--
 manual/src/main/asciidoc/update-notes.adoc         |  6 +--
 .../main/asciidoc/user-guide/configuration.adoc    | 28 +++++++-------
 manual/src/main/asciidoc/user-guide/console.adoc   | 12 +++---
 manual/src/main/asciidoc/user-guide/deployers.adoc |  2 +-
 manual/src/main/asciidoc/user-guide/docker.adoc    |  6 +--
 manual/src/main/asciidoc/user-guide/ejb.adoc       |  4 +-
 manual/src/main/asciidoc/user-guide/failover.adoc  |  8 ++--
 .../src/main/asciidoc/user-guide/installation.adoc | 33 ++++++++--------
 manual/src/main/asciidoc/user-guide/instances.adoc |  2 +-
 manual/src/main/asciidoc/user-guide/jms.adoc       | 18 ++++-----
 manual/src/main/asciidoc/user-guide/jta.adoc       |  2 +-
 manual/src/main/asciidoc/user-guide/kar.adoc       | 21 +++++------
 manual/src/main/asciidoc/user-guide/log.adoc       | 28 +++++++-------
 .../src/main/asciidoc/user-guide/monitoring.adoc   |  8 ++--
 manual/src/main/asciidoc/user-guide/obr.adoc       |  4 +-
 .../main/asciidoc/user-guide/os-integration.adoc   | 26 ++++++-------
 .../src/main/asciidoc/user-guide/provisioning.adoc | 37 +++++++++---------
 manual/src/main/asciidoc/user-guide/remote.adoc    | 44 +++++++++++-----------
 manual/src/main/asciidoc/user-guide/scheduler.adoc |  2 +-
 manual/src/main/asciidoc/user-guide/security.adoc  | 28 ++++++++------
 .../src/main/asciidoc/user-guide/start-stop.adoc   | 16 ++++----
 manual/src/main/asciidoc/user-guide/tuning.adoc    | 10 ++---
 manual/src/main/asciidoc/user-guide/urls.adoc      | 42 ++++++++++-----------
 .../src/main/asciidoc/user-guide/webcontainer.adoc |  6 +--
 35 files changed, 248 insertions(+), 247 deletions(-)

diff --git a/manual/src/main/asciidoc/developer-guide/creating-bundles.adoc b/manual/src/main/asciidoc/developer-guide/creating-bundles.adoc
index a34cfff..33bf543 100644
--- a/manual/src/main/asciidoc/developer-guide/creating-bundles.adoc
+++ b/manual/src/main/asciidoc/developer-guide/creating-bundles.adoc
@@ -23,7 +23,7 @@ The examples provides different kind of bundles and services definition:
 ==== Add extended information to bundles
 
 Karaf supports an OSGI-INF/bundle.info file in a bundle.
-This file is extended description of the bundle.
+This file is an extended description of the bundle.
 It supports ASCII character declarations (for adding color, formatting, etc) and some simple Wiki syntax.
 
 Simply add a `src/main/resources/OSGI-INF/bundle.info` file containing, for instance:
@@ -68,7 +68,7 @@ It allows for directly deploying third party dependencies, like Apache Commons L
 root@karaf> bundles:install wrap:mvn:commons-lang/commons-lang/2.4
 ----
 
-The wrap protocol creates a bundle dynamically using the bnd. Confiugurations can be added in the wrap URL:
+The wrap protocol creates a bundle dynamically using the bnd. Configurations can be added in the wrap URL:
 
 * from the shell
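
For instance, a wrap URL with bnd instructions might look like this (the symbolic name and version shown are illustrative):

----
karaf@root()> bundle:install 'wrap:mvn:commons-lang/commons-lang/2.4$Bundle-SymbolicName=commons-lang&Bundle-Version=2.4'
----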
 
diff --git a/manual/src/main/asciidoc/developer-guide/debugging.adoc b/manual/src/main/asciidoc/developer-guide/debugging.adoc
index bbba176..926e9d4 100644
--- a/manual/src/main/asciidoc/developer-guide/debugging.adoc
+++ b/manual/src/main/asciidoc/developer-guide/debugging.adoc
@@ -36,7 +36,7 @@ Typical usage is:
 
 ===== Worst Case Scenario
 
-If you end up with a Karaf in a really bad state (i.e. you can not boot it anymore) or you just want to revert to a
+If you end up with Karaf in a really bad state (i.e. you can not boot it anymore) or you just want to revert to a
 clean state quickly, you can safely remove the `data` directory just in the installation directory.  This folder
 contains transient data and will be recreated if removed when you relaunch Karaf.
 You may also want to remove the files in the `deploy` folder to avoid them being automatically installed when Karaf
@@ -85,8 +85,8 @@ bin\karaf.bat
 
 Last, inside your IDE, connect to the remote application (the default port to connect to is 5005).
 
-This option works fine when it is needed to debug a project deployed top of Apache Karaf. Nervertheless, you will be blocked
-if you would like to debug the server Karaf. In this case, you can change the following parameter suspend=y in the
+This option works fine when you need to debug a project deployed on top of Apache Karaf. Nevertheless, you will be blocked
+if you would like to debug the Karaf server itself. In this case, you can change the following parameter suspend=y in the
 karaf.bat script file. That will cause the JVM to pause just before running main() until you attach a debugger then it
 will resume the execution.  This way you can set your breakpoints anywhere in the code and you should hit them no matter
 how early in the startup they are.
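
On Unix, a minimal sketch of that setup could rely on the `KARAF_DEBUG` and `JAVA_DEBUG_OPTS` variables honoured by the startup scripts (port 5005 assumed):

----
export KARAF_DEBUG=true
export JAVA_DEBUG_OPTS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005'
bin/karaf
----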
diff --git a/manual/src/main/asciidoc/developer-guide/developer-commands.adoc b/manual/src/main/asciidoc/developer-guide/developer-commands.adoc
index f4a6cec..75863d4 100644
--- a/manual/src/main/asciidoc/developer-guide/developer-commands.adoc
+++ b/manual/src/main/asciidoc/developer-guide/developer-commands.adoc
@@ -16,7 +16,7 @@
 
 As you can see in the users guide, Apache Karaf is an enterprise ready OSGi container.
 
-It's also a container designed to simplify the life for developers and administrators to get details about the
+It's also a container designed to simplify life for developers and administrators to get details about the
 running container.
 
 ==== Dump
@@ -67,7 +67,7 @@ Diagnostic dump created.
 
 ==== Diagnostic
 
-It's not always easy for the developers to understand why a bundle is not active.
+It's not always easy for developers to understand why a bundle is not active.
 
 It could be because the Activator failed, the Blueprint container start failed, etc.
 
@@ -117,7 +117,7 @@ karaf@root()> bundle:dynamic-import 50
 Enabling dynamic imports on bundle org.ops4j.pax.url.wrap [50]
 ----
 
-The purpose of dynamic import is to allow a bundle to be wired up to packages that may not be knwon about in advance.
+The purpose of dynamic import is to allow a bundle to be wired up to packages that may not be known about in advance.
 When a class is requested, if it cannot be solved via the bundle's existing imports, the dynamic import allows other
 bundles to be considered for a wiring import to be added.
 
@@ -141,7 +141,7 @@ Disabling debug for OSGi framework (felix)
 The `shell:stack-traces-print` command prints the full stack trace when the execution of a command
 throws an exception.
 
-You can enable or disable this behaviour by passing true (to enable) or false (to disable) on the command on the fly:
+You can enable or disable this behaviour by passing true (to enable) or false (to disable) to the command on the fly:
 
 ----
 karaf@root()> stack-traces-print
@@ -223,9 +223,9 @@ org.ops4j.pax.url.wrap [40]
 ==== Watch
 
 The `bundle:watch` command enables watching the local Maven repository for updates on bundles.
-If the bundle file changes on the Maven repository, Apache Karaf will automatically update the bundle.
+If the bundle file changes in the Maven repository, Apache Karaf will automatically update the bundle.
 
-The `bundle:watch` allows you to configure a set of URLs to monitore. All bundles bundles whose location matches the
+The `bundle:watch` allows you to configure a set of URLs to monitor. All bundles whose location matches the
 given URL will be automatically updated. It avoids needing to manually update the bundles or even copy the bundle to the
 system folder.
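
For example, to watch all bundles installed from the local Maven repository, or a single bundle (the Maven URL below is illustrative):

----
karaf@root()> bundle:watch *
karaf@root()> bundle:watch mvn:com.example/my-bundle/1.0-SNAPSHOT
----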
 
diff --git a/manual/src/main/asciidoc/developer-guide/extending.adoc b/manual/src/main/asciidoc/developer-guide/extending.adoc
index 9944916..cf37aa6 100644
--- a/manual/src/main/asciidoc/developer-guide/extending.adoc
+++ b/manual/src/main/asciidoc/developer-guide/extending.adoc
@@ -24,4 +24,4 @@ See [examples/karaf-command-example] to add your own shell commands.
 
 You can also extend the Apache Karaf WebConsole by providing and installing a webconsole plugin.
 
-A plugin is an OSGi bundle that register a Servlet as an OSGi service with webconsole properties.
+A plugin is an OSGi bundle that registers a Servlet as an OSGi service with webconsole properties.
diff --git a/manual/src/main/asciidoc/developer-guide/github-contributions.adoc b/manual/src/main/asciidoc/developer-guide/github-contributions.adoc
index bc03e00..d9b4f04 100644
--- a/manual/src/main/asciidoc/developer-guide/github-contributions.adoc
+++ b/manual/src/main/asciidoc/developer-guide/github-contributions.adoc
@@ -78,4 +78,4 @@ git format-patch archon/trunk
 
 ----
 
-root of your Karaf source now contains a file named "0001-delivery.patch.txt" (please attach the .txt ending;this will allow commiters to open your patch directly in the browser and give it a short look there) which you should attach to your karaf jira, and ask to commit to the svn trunk
+The root of your Karaf source now contains a file named "0001-delivery.patch.txt" (please add the .txt extension; this will allow committers to open your patch directly in the browser and give it a quick look there) which you should attach to your Karaf JIRA issue.
diff --git a/manual/src/main/asciidoc/developer-guide/karaf-maven-plugin.adoc b/manual/src/main/asciidoc/developer-guide/karaf-maven-plugin.adoc
index bab060b..eb0aeb9 100644
--- a/manual/src/main/asciidoc/developer-guide/karaf-maven-plugin.adoc
+++ b/manual/src/main/asciidoc/developer-guide/karaf-maven-plugin.adoc
@@ -483,7 +483,7 @@ The `karaf:assembly` goal creates a Karaf instance (assembly) filesystem using t
     </build>
 ----
 
-By default, the generate Karaf instance is a dynamic distribution (it's started with default set of resources and then you can deploy new applications in this instance).
+By default, the generated Karaf instance is a dynamic distribution (it's started with a default set of resources and then you can deploy new applications in this instance).
 
 It's also possible to generate a Karaf instance as a static distribution (kind of immutable):
 
@@ -846,11 +846,11 @@ The `karaf:client` interacts with a running Karaf instance directly from Maven v
 
 |`keyFile`
 |`File`
-|The key file to use to connecto to the running Karaf instance.
+|The key file to use to connect to the running Karaf instance.
 
 |`attempts`
 |`int`
-|The number of tentative to connect to the running Karaf instance. Default value: 0
+|The number of attempts to connect to the running Karaf instance. Default value: 0
 
 |`delay`
 |`int`
@@ -910,11 +910,11 @@ The `karaf:deploy` goal allows you to deploy bundles on a running Karaf instance
 
 |`keyFile`
 |`File`
-|The key file to use to connecto to the running Karaf instance.
+|The key file to use to connect to the running Karaf instance.
 
 |`attempts`
 |`int`
-|The number of tentative to connect to the running Karaf instance. Default value: 0
+|The number of attempts to connect to the running Karaf instance. Default value: 0
 
 |`delay`
 |`int`
@@ -1002,4 +1002,4 @@ This goal requires a local Docker daemon and runs only on Unix. The `docker` com
 |`imageName`
 |`String`
 |The name of the generated Docker image. Default value: karaf
-|===
\ No newline at end of file
+|===
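
As a sketch only, the `attempts`, `delay` and `keyFile` parameters shown for the `karaf:client` and `karaf:deploy` goals could be set in the plugin configuration along these lines (values are placeholders, goal-specific options such as the artifacts to deploy are omitted):

----
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- retry the connection to the running Karaf instance -->
    <attempts>5</attempts>
    <delay>10</delay>
    <!-- key file used instead of a password (placeholder path) -->
    <keyFile>${user.home}/.karaf/karaf.key</keyFile>
  </configuration>
</plugin>
----

You would then invoke `mvn karaf:deploy` or `mvn karaf:client` against the running instance.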
diff --git a/manual/src/main/asciidoc/developer-guide/scripting.adoc b/manual/src/main/asciidoc/developer-guide/scripting.adoc
index 1ae4402..e08c85c 100644
--- a/manual/src/main/asciidoc/developer-guide/scripting.adoc
+++ b/manual/src/main/asciidoc/developer-guide/scripting.adoc
@@ -25,7 +25,7 @@ karaf@root()> echo hello world
 hello world
 ----
 
-You can also assign value to session variables:
+You can also assign a value to session variables:
 
 ----
 karaf@root()> msg = "hello world"
@@ -213,7 +213,7 @@ Functions names are case insensitive.
 
 ==== List, maps, pipes and closures
 
-Using [], you can define array variable:
+Using [], you can define an array variable:
 
 ----
 karaf@root()> list = [1 2 a b]
@@ -245,7 +245,7 @@ karaf@root()> ($.context bundles) | grep -i felix
    51|Active     |   30|org.apache.felix.gogo.runtime (0.10.0)
 ----
 
-You can assign name to script execution. It's what we use for alias:
+You can assign a name to a script execution. It's what we use for aliases:
 
 ----
 karaf@root()> echo2 = { echo xxx $args yyy }
@@ -281,7 +281,7 @@ karaf@root>
 
 ==== Built-in variables and commands
 
-Apache Karaf console provides built-in variable very useful for scripting:
+Apache Karaf console provides built-in variables that are very useful for scripting:
 
 * `$args` retrieves the list of script parameters, given to the closure being executed
 * `$1 .. $999` retrieves the nth argument of the closure
@@ -336,7 +336,7 @@ karaf@root()> system:getproperty karaf.name
 root
 ----
 
-It means that you can create object using the `new` directive, and call methods on the objects:
+It means that you can create an object using the `new` directive, and call methods on the objects:
 
 ----
 karaf@root> map = (new java.util.HashMap)
@@ -349,7 +349,7 @@ karaf@root> $map
 
 The following examples show some scripts defined in `etc/shell.init.script`.
 
-The first example show a script to add a value into a configuration list:
+The first example shows a script to add a value into a configuration list:
 
 ----
 #
diff --git a/manual/src/main/asciidoc/developer-guide/security-framework.adoc b/manual/src/main/asciidoc/developer-guide/security-framework.adoc
index d0ac7da..ba638f7 100644
--- a/manual/src/main/asciidoc/developer-guide/security-framework.adoc
+++ b/manual/src/main/asciidoc/developer-guide/security-framework.adoc
@@ -115,7 +115,7 @@ JMX layer), you need to deploy a JAAS configuration with the name `name="karaf"`
 ==== Architecture
 
 Due to constraints in the JAAS specification, one class has to be available for all bundles.
-This class is called ProxyLoginModule and is a LoginModule that acts as a proxy for an OSGi defines LoginModule.
+This class is called ProxyLoginModule and is a LoginModule that acts as a proxy for an OSGi defined LoginModule.
 If you plan to integrate this feature into another OSGi runtime, this class must be made available from the system
 classloader and the related package be part of the boot delegation classpath (or be deployed as a fragment attached to
 the system bundle).
@@ -128,10 +128,10 @@ the bundle containing the real login module.
 
 Karaf itself provides a set of login modules ready to use, depending on the authentication backend that you need.
 
-In addition of the login modules, Karaf also support backend engine. The backend engine is coupled to a login module and
+In addition to the login modules, Karaf also supports backend engines. The backend engine is coupled to a login module and
 allows you to manipulate users and roles directly from Karaf (adding a new user, delete an existing user, etc).
 The backend engine is constructed by a backend engine factory, registered as an OSGi service.
-Some login modules (for security reason for instance) don't provide backend engine.
+Some login modules (for security reasons, for instance) don't provide a backend engine.
 
 ==== Available realm and login modules
 
@@ -847,7 +847,7 @@ The JAAS specification does not provide means to distinguish between User and Ro
 specification classes. In order to provide means to the application developer to decouple the application from Karaf
 JAAS implementation role policies have been created.
 
-A role policy is a convention that can be adopted by the application in order to identify Roles, without depending from the implementation.
+A role policy is a convention that can be adopted by the application in order to identify Roles, without depending on the implementation.
 Each role policy can be configured by setting a "role.policy" and "role.discriminator" property in the login module configuration.
 Currently, Karaf provides two policies that can be applied to all Karaf Login Modules.
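
A minimal sketch of applying a role policy, assuming the properties login module inside a blueprint `jaas:config` element (realm name, namespace version, file path and discriminator value are illustrative):

----
<!-- illustrative realm re-using the default properties backend -->
<jaas:config name="karaf" xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0">
  <jaas:module className="org.apache.karaf.jaas.modules.properties.PropertiesLoginModule"
               flags="required">
    users = ${karaf.etc}/users.properties
    role.policy = prefix
    role.discriminator = ROLE_
  </jaas:module>
</jaas:config>
----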
 
diff --git a/manual/src/main/asciidoc/index.adoc b/manual/src/main/asciidoc/index.adoc
index cb58b66..889b6db 100644
--- a/manual/src/main/asciidoc/index.adoc
+++ b/manual/src/main/asciidoc/index.adoc
@@ -112,11 +112,11 @@ include::developer-guide/creating-bundles.adoc[]
 
 ==== Blueprint
 
-See https://github.com/apache/example/karaf-blueprint-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-blueprint-example/README.md
 
 ==== SCR
 
-See https://github.com/apache/example/karaf-scr-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-scr-example/README.md
 
 include::developer-guide/archetypes.adoc[]
 
@@ -128,52 +128,52 @@ include::developer-guide/writing-tests.adoc[]
 
 === Dump extender
 
-See https://github.com/apache/karaf/examples/karaf-dump-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-dump-example/README.md
 
 ==== JDBC & JPA
 
-See https://github.com/apache/karaf/examples/karaf-jdbc-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-jdbc-example/README.md
 
-See https://github.com/apache/karaf/examples/karaf-jpa-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-jpa-example/README.md
 
 ==== JMS
 
-See https://github.com/apache/karaf/examples/karaf-jms-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-jms-example/README.md
 
 ==== Custom log appender
 
-See https://github.com/apache/karaf/examples/karaf-log-appender-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-log-appender-example/README.md
 
 ==== Custom JMX MBean
 
-See https://github.com/apache/karaf/examples/karaf-mbean-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-mbean-example/README.md
 
 ==== Working with profiles
 
-See https://github.com/apache/karaf/examples/karaf-profile-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-profile-example/README.md
 
 ==== Servlet
 
-See https://github.com/apache/karaf/examples/karaf-servlet-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-servlet-example/README.md
 
 ==== WAR
 
-See https://github.com/apache/karaf/examples/karaf-war-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-war-example/README.md
 
 ==== REST service
 
-See https://github.com/apache/karaf/examples/karaf-rest-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-rest-example/README.md
 
 ==== SOAP service
 
-See https://github.com/apache/karaf/examples/karaf-soap-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-soap-example/README.md
 
 ==== Scheduling jobs
 
-See https://github.com/apache/karaf/examples/karaf-scheduler-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-scheduler-example/README.md
 
 ==== Custom URL handler
 
-See https://github.com/apache/karaf/examples/karaf-url-namespace-handler-example/README.md
+See https://github.com/apache/karaf/tree/master/examples/karaf-url-namespace-handler-example/README.md
 
 include::developer-guide/github-contributions.adoc[]
diff --git a/manual/src/main/asciidoc/overview.adoc b/manual/src/main/asciidoc/overview.adoc
index ff3bbce..40fb720 100644
--- a/manual/src/main/asciidoc/overview.adoc
+++ b/manual/src/main/asciidoc/overview.adoc
@@ -14,10 +14,10 @@
 
 == Overview
 
-Apache Karaf is a modern and polymorphic container.
+Apache Karaf is a modern polymorphic application container.
 
-Karaf can be used standalone as a container, supporting a wide range of applications and technologies.
-It also supports the "run anywhere" (on any machine with Java, cloud, docker images, ...) using the embedded mode.
+Karaf can be used as a standalone container, supporting a wide range of applications and technologies.
+It also supports the "run anywhere" concept (on any machine with Java, cloud, docker images, ...) using the embedded mode.
 
 It's a lightweight, powerful, and enterprise ready platform.
 
diff --git a/manual/src/main/asciidoc/quick-start.adoc b/manual/src/main/asciidoc/quick-start.adoc
index 5a5cfa0..312ed10 100644
--- a/manual/src/main/asciidoc/quick-start.adoc
+++ b/manual/src/main/asciidoc/quick-start.adoc
@@ -18,7 +18,7 @@ These instructions should help you get Apache Karaf up and running in 5 to 15 mi
 
 === Prerequisites
 
-Karaf requires a Java SE 8 or Java SE 9 environment to run. Refer to http://www.oracle.com/technetwork/java/javase/ for details on how to download and install Java SE 1.8 or greater.
+Karaf requires Java SE 8 or higher to run. Refer to http://www.oracle.com/technetwork/java/javase/ for details on how to download and install Java SE 1.8 or greater.
 
 * Open a Web browser and access the following URL: http://karaf.apache.org/download.html
 * Download the binary distribution that matches your system (zip for windows, tar.gz for unixes)
@@ -126,7 +126,7 @@ While you will learn in the Karaf user's guide how to fully use and leverage Apa
 Copy and paste the following commands in the console:
 
 ----
-feature:repo-add camel 2.20.0
+feature:repo-add camel
 feature:install deployer camel-blueprint aries-blueprint
 cat > deploy/example.xml <<END
 <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
@@ -145,8 +145,8 @@ cat > deploy/example.xml <<END
 END
 ----
 
-The example installed is using Camel to start a timer every 2 seconds and output a message in the log.
-The previous commands download the Camel features descriptor and install the example feature.
+The installed example uses Camel to start a timer every 2 seconds and output a message in the log.
+The previous commands downloaded the Camel features descriptor and installed the example feature.
 
 You can display the log in the shell:
 
diff --git a/manual/src/main/asciidoc/update-notes.adoc b/manual/src/main/asciidoc/update-notes.adoc
index 59673c9..4054184 100644
--- a/manual/src/main/asciidoc/update-notes.adoc
+++ b/manual/src/main/asciidoc/update-notes.adoc
@@ -107,7 +107,7 @@ In term of development, you can still use the blueprint definition as you do in
 
 However, in Karaf 4.x, you can use DS and new annotations and avoid the usage of a blueprint XML.
 
-The new annotations are available: @Service, @Completion, @Parsing, @Reference. It allows you to complete define the command
+New annotations are available: @Service, @Completion, @Parsing, @Reference. They allow you to completely define the command
 in the command class directly.
 
 To simplify the generation of the code and OSGi headers, Karaf 4.x provides the karaf-services-maven-plugin (in org.apache.karaf.tooling Maven groupId).
@@ -161,7 +161,7 @@ Now the provided goals are:
 * `karaf:archive` to create a tar.gz or zip of a Karaf distribution
 * `karaf:assembly` to create a custom Karaf distribution assembly
 * `karaf:kar` to create a kar file
-* `karaf:verify` to verify and validate Karaf features
+* `karaf:verify` to verify and validate Karaf features
 * `karaf:features-add-to-repository` to recursively copy features XML and content into a folder (repository)
 * `karaf:features-export-meta-data` to extract the metadata from a features XML
 * `karaf:features-generate-descriptor` to generate a features XML
@@ -176,4 +176,4 @@ We encourage users to start a fresh Apache Karaf 4.x container.
 
 If you upgrade an existing container, `lib` and `system` folder have to be updated (just an override copy).
 
-For the `etc` folder, a diff is required as some properties changed and new configurations are available.
+For the `etc` folder, a diff is required as some properties have changed and new configurations are available.
diff --git a/manual/src/main/asciidoc/user-guide/configuration.adoc b/manual/src/main/asciidoc/user-guide/configuration.adoc
index 7625bf2..b63f235 100644
--- a/manual/src/main/asciidoc/user-guide/configuration.adoc
+++ b/manual/src/main/asciidoc/user-guide/configuration.adoc
@@ -18,10 +18,10 @@
 
 Apache Karaf stores and loads all configuration in files located in the `etc` folder.
 
-By default, the `etc` folder is located relatively to the `KARAF_BASE` folder. You can define another location
+By default, the `etc` folder is relative to the `KARAF_BASE` folder. You can define another location
 using the `KARAF_ETC` variable.
 
-Each configuration is identified by a ID (the ConfigAdmin PID). The configuration files name follows the `pid.cfg`
+Each configuration is identified by an ID (the ConfigAdmin PID). The configuration file names follow the `pid.cfg`
 name convention.
 
 For instance, `etc/org.apache.karaf.shell.cfg` means that this file is the file used by the configuration with
@@ -48,7 +48,7 @@ Environment variables can be referenced inside configuration files using the syn
 `property=${env:FOO}` will set "property" to the value of the environment variable "FOO"). Default and alternate
 values can be defined for them as well using the same syntax as above.
 
-In Apache Karaf, a configuration is PID with a set of properties attached.
+In Apache Karaf, a configuration is a PID with a set of properties attached.
 
 Apache Karaf automatically loads all `*.cfg` files from the `etc` folder.
 
@@ -82,17 +82,17 @@ Karaf "re-loads" the configuration files every second.
 * `felix.fileinstall.noInitialDelay` is a flag indicating if the configuration file polling starts as soon as Apache
 Karaf starts or wait for a certain time. If `true`, Apache Karaf polls the configuration files as soon as the configuration
 service starts.
-* `felix.fileinstall.log.level` is the log message verbosity level of the configuration polling service. More
-this value is high, more verbose the configuration service is.
+* `felix.fileinstall.log.level` is the log message verbosity level of the configuration polling service. The
+higher this value, the more verbose the configuration service is.
 * `felix.fileinstall.log.default` is the logging framework to use, `jul` meaning Java Util Logging.
 
 You can change the configuration at runtime by directly editing the configuration file.
 
 You can also do the same using the `config:*` commands or the ConfigMBean.
 
-Apache Karaf persists configuration using own persistence manager in case of when available persistence managers do not support that.
+Apache Karaf persists configuration using its own persistence manager when the available persistence managers do not support that.
 Configuration files are placed by default in `KARAF_ETC`, but it could be overridden via variable `storage` in `etc/org.apache.karaf.config.cfg`.
-If you want to disable karaf persistence manager, set storage variable to empty string (`storage=`).
+If you want to disable the Karaf persistence manager, set the storage variable to an empty string (`storage=`).
 
 ==== `config:*` commands
 
@@ -150,7 +150,7 @@ Properties:
 ===== `config:edit`
 
 `config:edit` is the first command to run when you want to change a configuration. The `config:edit` command puts you
-in edition mode for a given configuration.
+in edit mode for a given configuration.
 
 For instance, you can edit the `org.apache.karaf.log` configuration:
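
A minimal sketch of that edit session:

----
karaf@root()> config:edit org.apache.karaf.log
karaf@root()> config:property-list
----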
 
@@ -164,7 +164,7 @@ to use other config commands (like `config:property-append`, `config:property-de
 If you provide a configuration PID that doesn't exist yet, Apache Karaf will create a new configuration (and so a new
 configuration file) automatically.
 
-All changes that you do in configuration edit mode are store in your console session: the changes are not directly
+All changes that you do in configuration edit mode are stored in your console session: the changes are not directly
 applied in the configuration. It allows you to "commit" the changes (see `config:update` command) or "rollback" and
 cancel your changes (see `config:cancel` command).
 
@@ -184,9 +184,9 @@ karaf@root()> config:property-list
 
 ===== `config:property-set`
 
-The `config:property-set` command update the value of a given property in the currently edited configuration.
+The `config:property-set` command updates the value of a given property in the currently edited configuration.
 
-For instance, to change the value of the `size` property of previously edited `org.apache.karaf.log` configuration,
+For instance, to change the value of the `size` property of the previously edited `org.apache.karaf.log` configuration,
 you can do:
 
 ----
@@ -262,7 +262,7 @@ Using the `pid` option, you bypass the configuration commit and rollback mechani
 
 ===== `config:property-delete`
 
-The `config:property-delete` command delete a property in the currently edited configuration.
+The `config:property-delete` command deletes a property in the currently edited configuration.
 
 For instance, you previously added a `test` property in `org.apache.karaf.log` configuration. To delete this `test`
 property, you do:
@@ -317,7 +317,7 @@ Properties:
 ----
 
 On the other hand, if you want to "rollback" your changes, you can use the `config:cancel` command. It will cancel all
-changes that you did, and return of the configuration state just before the `config:edit` command. The `config:cancel`
+changes that you did, and return to the configuration state just before the `config:edit` command. The `config:cancel`
 exits from the edit mode.
 
 For instance, you added the test property in the `org.apache.karaf.log` configuration, but it was a mistake:
@@ -339,7 +339,7 @@ Properties:
 
 ===== `config:delete`
 
-The `config:delete` command completely delete an existing configuration. You don't have to be in edit mode to delete
+The `config:delete` command completely deletes an existing configuration. You don't have to be in edit mode to delete
 a configuration.
 
 For instance, you added `my.config` configuration:
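
A short sketch of creating and then deleting that configuration (property name and value are illustrative):

----
karaf@root()> config:edit my.config
karaf@root()> config:property-set test test
karaf@root()> config:update
karaf@root()> config:delete my.config
----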
diff --git a/manual/src/main/asciidoc/user-guide/console.adoc b/manual/src/main/asciidoc/user-guide/console.adoc
index 72bff8f..08fe392 100644
--- a/manual/src/main/asciidoc/user-guide/console.adoc
+++ b/manual/src/main/asciidoc/user-guide/console.adoc
@@ -276,7 +276,7 @@ You can create your own aliases in the `etc/shell.init.script` file.
 
 ===== Key binding
 
-Like on most Unix environment, Karaf console support some key bindings:
+Like on most Unix environments, the Karaf console supports some key bindings:
 
 * the arrows key to navigate in the commands history
 * CTRL-D to logout/shutdown Karaf
@@ -297,7 +297,7 @@ blueprint-web                 | 4.0.0                            |          | Un
 
 ===== Grep, more, find, ...
 
-Karaf console provides some core commands similar to Unix environment:
+Karaf console provides some core commands similar to a Unix environment:
 
 * `shell:alias` creates an alias to an existing command
 * `shell:cat` displays the content of a file or URL
@@ -429,7 +429,7 @@ if { $foo equals "foo" } {
 
 [NOTE]
 ====
-The spaces are important when writing script.
+The spaces are important when writing scripts.
 For instance, the following script is not correct:
 
 ----
@@ -447,14 +447,14 @@ because a space is missing after the `if` statement.
 ====
 
 As for the aliases, you can create init scripts in the `etc/shell.init.script` file.
-You can also named you script with an alias. Actually, the aliases are just scripts.
+You can also name your script with an alias. Actually, the aliases are just scripts.
 
 See the Scripting section of the developers guide for details.
 
 ==== Security
 
-The Apache Karaf console supports a Role Based Access Control (RBAC) security mechanism. It means that depending of
-the user connected to the console, you can define, depending of the user's groups and roles, the permission to execute
+The Apache Karaf console supports a Role Based Access Control (RBAC) security mechanism. It means that, depending on
+the groups and roles of the user connected to the console, you can define the permission to execute
 some commands, or limit the values allowed for the arguments.
 
 Console security is detailed in the link:security[Security section] of this user guide.
diff --git a/manual/src/main/asciidoc/user-guide/deployers.adoc b/manual/src/main/asciidoc/user-guide/deployers.adoc
index dd44d9b..7f07553 100644
--- a/manual/src/main/asciidoc/user-guide/deployers.adoc
+++ b/manual/src/main/asciidoc/user-guide/deployers.adoc
@@ -68,7 +68,7 @@ By default, Apache Karaf provides a set of deployers:
 
 The Blueprint deployer is able to handle plain Blueprint XML configuration files.
 
-The Blueprint deployer is able to transform "on the fly" any Blueprint XML file into valid OSGi bundle.
+The Blueprint deployer is able to transform "on the fly" any Blueprint XML file into a valid OSGi bundle.
 
 The generated OSGi MANIFEST will contain the following headers:
 
diff --git a/manual/src/main/asciidoc/user-guide/docker.adoc b/manual/src/main/asciidoc/user-guide/docker.adoc
index 5733ee4..c243bf3 100644
--- a/manual/src/main/asciidoc/user-guide/docker.adoc
+++ b/manual/src/main/asciidoc/user-guide/docker.adoc
@@ -18,7 +18,7 @@ Apache Karaf provides Docker resources allowing you to easily create your own im
 
 Official Karaf docker image are also available on Docker Hub.
 
-But, Apache Karaf also provides a docker feature allows you to:
+But, Apache Karaf also provides a docker feature that allows you to:
 
 - manipulate Docker containers directly from Apache Karaf
 - create a Docker container based on the current running Apache Karaf instance (named provisioning)
@@ -112,7 +112,7 @@ machine where Apache Karaf instance is running or a remote Docker machine.
 
 The location of the Docker backend (URL) can be specified as an option to the `docker:*` commands. By default, Karaf Docker
 feature uses `http://localhost:2375`. Please, take a look on the Docker documentation how to enable remote API using HTTP
-for Docker daemon. As short notice, you just have to enable `tcp` transport connector enabled for the docker daemon.
+for Docker daemon. In a nutshell, you just have to enable the `tcp` transport connector for the docker daemon.
 You have to do it using the `-H` option on `dockerd`:
 
 ----
@@ -290,7 +290,7 @@ You can also use the containers attribute on the `DockerMBean` JMX MBean or the
 
 ==== Provision Docker container
 
-Provisioning is a specific way of creating container based on the current running Karaf instance: it creates a Docker container using the current running Apache Karaf instance `karaf.base`.
+Provisioning is a specific way of creating a container based on the current running Karaf instance: it creates a Docker container using the current running Apache Karaf instance `karaf.base`.
 
 You can then reuse this container to create a Docker image and to duplicate the container on another Docker backend via dockerhub.
 
diff --git a/manual/src/main/asciidoc/user-guide/ejb.adoc b/manual/src/main/asciidoc/user-guide/ejb.adoc
index 4fb26e2..40356d5 100644
--- a/manual/src/main/asciidoc/user-guide/ejb.adoc
+++ b/manual/src/main/asciidoc/user-guide/ejb.adoc
@@ -35,7 +35,7 @@ openejb.deployments.classpath.exclude=bundle:*
 openejb.deployments.classpath.filter.descriptors=true
 ----
 
-Due to some OpenEJB version constraint, you also have to update the `etc/jre.properties` by changing the version of
+Due to some OpenEJB version constraints, you also have to update the `etc/jre.properties` by changing the version of
 the `javax.xml.namespace` package, and remove the version of the `javax.annotation` package (provided by Geronimo
 Annotation API spec bundle, used by OpenEJB):
 
@@ -88,4 +88,4 @@ A custom distribution of Apache Karaf embedding OpenEJB is available in the Apac
 
 The name of this custom distribution is KarafEE: https://svn.apache.org/repos/asf/tomee/karafee/
 
-However, this project is now "deprecated", and all resources from KarafEE will move directly in Apache Karaf soon.
\ No newline at end of file
+However, this project is now "deprecated", and all resources from KarafEE will move directly to Apache Karaf soon.
diff --git a/manual/src/main/asciidoc/user-guide/failover.adoc b/manual/src/main/asciidoc/user-guide/failover.adoc
index e9ace15..59f3664 100644
--- a/manual/src/main/asciidoc/user-guide/failover.adoc
+++ b/manual/src/main/asciidoc/user-guide/failover.adoc
@@ -88,7 +88,7 @@ karaf.lock.lostThreshold=0
 
 * `karaf.lock` property enabled the HA/failover mechanism
 * `karaf.lock.class` property contains the class name providing the lock implementation. The `org.apache.karaf.main.lock.DefaultJDBCLock`
- is the most generic database lock system implementation. Apache Karaf supports lock system for specific databases (see later for details).
+ is the most generic database lock system implementation. Apache Karaf supports lock systems for specific databases (see later for details).
 * `karaf.lock.level` property is the container-level locking (see later for details).
 * `karaf.lock.delay` property is the interval period (in milliseconds) to check if the lock has been released or not.
 * `karaf.lock.lostThreshold` property is the count of attempts to re-acquire the lock before shutting down.
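
A hedged example of a complete JDBC lock configuration in `etc/system.properties` (the Derby URL, driver, credentials and timings are placeholders):

----
karaf.lock=true
karaf.lock.class=org.apache.karaf.main.lock.DefaultJDBCLock
karaf.lock.level=50
karaf.lock.delay=10000
karaf.lock.jdbc.url=jdbc:derby://127.0.0.1:1527/sample
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30
----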
@@ -112,7 +112,7 @@ The `sample` database will be created automatically if it does not exist.
 [NOTE]
 ====
 If the connection to the database is lost, the master instance tries to gracefully shutdown, allowing a slave instance to
-become the master when the database is back. The former master instance will required a manual restart.
+become the master when the database is back. The former master instance will require a manual restart.
 ====
 
 *Lock on Oracle*
@@ -246,12 +246,12 @@ As reminder, the bundles start levels are specified in `etc/startup.properties`,
 
 [NOTE]
 ====
-Using 'hot' standby means that the slave instances are running and bind some ports. So, if you use master and slave instances on the same machine, you have
+Using 'hot' standby means that the slave instances are running and bound to some ports. So, if you use master and slave instances on the same machine, you have
 to update the slave configuration to bind the services (ssh, JMX, etc) on different port numbers.
 ====
 
 ===== Cluster (active/active)
 
-Apache Karaf doesn't natively support cluster. By cluster, we mean several active instances, synchronized with each other.
+Apache Karaf doesn't natively support clustering. By cluster, we mean several active instances, synchronized with each other.
 
 However, http://karaf.apache.org/index/subprojects/cellar.html[Apache Karaf Cellar] can be installed to provide cluster support.
diff --git a/manual/src/main/asciidoc/user-guide/installation.adoc b/manual/src/main/asciidoc/user-guide/installation.adoc
index 79c2948..59c8dbb 100644
--- a/manual/src/main/asciidoc/user-guide/installation.adoc
+++ b/manual/src/main/asciidoc/user-guide/installation.adoc
@@ -14,7 +14,7 @@
 
 === Installation
 
-Apache Karaf is a lightweight container, very easy to install and administrate, on both Unix and Windows platforms.
+Apache Karaf is a lightweight container that is very easy to install and administer on both Unix and Windows platforms.
 
 ==== Requirements
 
@@ -29,7 +29,7 @@ Apache Karaf is a lightweight container, very easy to install and administrate,
 
 *Environment:*
 
-* Java SE 1.7.x or greater (http://www.oracle.com/technetwork/java/javase/).
+* Java SE 1.8 or greater (http://www.oracle.com/technetwork/java/javase/).
 * The JAVA_HOME environment variable must be set to the directory where the Java runtime is installed,
 
 ==== Using Apache Karaf binary distributions
@@ -38,7 +38,7 @@ Apache Karaf is available in two distributions, both as a tar.gz and zip archive
 
 The "default" distribution is a "ready to use" distribution, with pre-installed features.
 
-The "minimal" distribution is like the minimal distributions that you can find for most of Unix distributions.
+The "minimal" distribution is like the minimal distributions that you can find for most of the Unix distributions.
 Only the core layer is packaged, most of the features and bundles are downloaded from Internet at bootstrap.
 It means that Apache Karaf minimal distribution requires an Internet connection to start correctly.
 The features provided by the "minimal" distribution are exactly the same as in the "default" distribution, the difference
@@ -51,9 +51,9 @@ is that the minimal distribution will download the features from Internet.
 The JAVA_HOME environment variable has to be correctly defined. To accomplish that, press Windows key and Break key together, switch to "Advanced" tab and click on "Environment Variables".
 ====
 
-. From a browser, navigate to http://karaf.apache.org/index/community/download.html.
+. From a browser, navigate to http://karaf.apache.org/download.html.
 . Download Apache Karaf binary distribution in the zip format: `apache-karaf-4.0.0.zip`.
-. Extract the files from the zip file into a directory of your choice (it's the `KARAF_HOME`.
+. Extract the files from the zip file into a directory of your choice (it's the `KARAF_HOME`).
 
 [NOTE]
 ====
@@ -88,13 +88,12 @@ export JAVA_HOME=....
 ----
 ====
 
-. From a browser, navigate to http://karaf.apache.org/index/community/download.html.
+. From a browser, navigate to http://karaf.apache.org/download.html.
 . Download Apache Karaf binary distribution in the tar.gz format: `apache-karaf-4.0.0.tar.gz`.
 . Extract the files from the tar.gz file into a directory of your choice (it's the `KARAF_HOME`). For example:
 
 ----
-gunzip apache-karaf-4.0.0.tar.gz
-tar xvf apache-karaf-4.0.0.tar
+tar zxvf apache-karaf-4.0.0.tar.gz
 ----
 
 [NOTE]
@@ -136,9 +135,9 @@ When/if the link:++https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=881297++[De
 ==== Post-Installation steps
 
 Though it is not always required, it is strongly advised to set up the `JAVA_HOME` environment variable to point to the JDK you want Apache Karaf to use before starting it.
-This property is used to locate the `java` executable and should be configured to point to the home directory of the Java SE 7 installation.
+This property is used to locate the `java` executable and should be configured to point to the home directory of the Java SE 8 installation.
 
-By default, all Apache Karaf files are "gather" in one directory: the `KARAF_HOME`.
+By default, all Apache Karaf files are "gathered" in one directory: `KARAF_HOME`.
 
 You can define your own directory layout, by using some Karaf environment variables:
 
@@ -156,18 +155,18 @@ If you intend to build Apache Karaf from the sources, the requirements are a bit
 
 *Environment:*
 
-* Java SE Development Kit 1.7.x or greater (http://www.oracle.com/technetwork/java/javase/).
+* Java SE Development Kit 1.8 or greater (http://www.oracle.com/technetwork/java/javase/).
 * Apache Maven 3.5.0 or greater (http://maven.apache.org/download.html).
 
 ===== Building on Windows platform
 
 You can get the Apache Karaf sources from:
 
-* the sources distribution `apache-karaf-4.0.0-src.zip` available at http://karaf.apache.org/index/community/download.html. Extract the files in the directory of your choice.
-* by checkout of the git repository:
+* the sources distribution `apache-karaf-4.0.0-src.zip` available at http://karaf.apache.org/download.html. Extract the files in the directory of your choice.
+* by checking out the git repository:
 
 ----
-git clone https://git-wip-us.apache.org/repos/asf/karaf.git karaf
+git clone https://github.com/apache/karaf karaf
 ----
 
 Use Apache Maven to build Apache Karaf:
@@ -191,11 +190,11 @@ Now, you can find the built binary distribution in `assemblies\apache-karaf\targ
 
 You can get the Apache Karaf sources from:
 
-* the sources distribution `apache-karaf-4.0.0-src.tar.gz` available at http://karaf.apache.org/index/community/download.html. Extract the files in the directory of your choice.
-* by checkout of the git repository:
+* the sources distribution `apache-karaf-4.0.0-src.tar.gz` available at http://karaf.apache.org/download.html. Extract the files in the directory of your choice.
+* by checking out the git repository:
 
 ----
-git clone https://git-wip-us.apache.org/repos/asf/karaf.git karaf
+git clone https://github.com/apache/karaf karaf
 ----
 
 Use Apache Maven to build Apache Karaf:
diff --git a/manual/src/main/asciidoc/user-guide/instances.adoc b/manual/src/main/asciidoc/user-guide/instances.adoc
index 84ef494..4094c1b 100644
--- a/manual/src/main/asciidoc/user-guide/instances.adoc
+++ b/manual/src/main/asciidoc/user-guide/instances.adoc
@@ -36,7 +36,7 @@ As shown in the following example, `instance:create` causes the runtime to creat
 karaf@root()> instance:create test
 ----
 
-The new instance is fresh Apache Karaf instance. It uses default configuration files set, as you install a fresh Karaf distribution.
+The new instance is a fresh Apache Karaf instance. It uses the same default configuration files as when you install a fresh Karaf distribution.
 
 You can enable the verbose mode for the `instance:create` command using the `-v` option:
 
diff --git a/manual/src/main/asciidoc/user-guide/jms.adoc b/manual/src/main/asciidoc/user-guide/jms.adoc
index 9ee93a7..8f1265a 100644
--- a/manual/src/main/asciidoc/user-guide/jms.adoc
+++ b/manual/src/main/asciidoc/user-guide/jms.adoc
@@ -44,7 +44,7 @@ karaf@root()> feature:install activemq-broker
 
 The `activemq-broker` feature installs:
 
-* a Apache ActiveMQ broker directly in Apache Karaf, bind to the `61616` port number by default.
+* an Apache ActiveMQ broker directly in Apache Karaf, bound to port `61616` by default.
 * the Apache ActiveMQ WebConsole bound to `http://0.0.0.0:8181/activemqweb` by default.
 
 The Apache Karaf `jms` feature provides an OSGi service to create/delete JMS connection factories in the container
@@ -100,7 +100,7 @@ OPTIONS
 ----
 
 * the `name` argument is required. It's the name of the JMS connection factory. The name is used to identify the connection factory, and to create the connection factory definition file (`deploy/connectionfactory-[name].xml`).
-* the `-t` (`--type`) option is required. It's the type of the JMS connection factory. Currently on `activemq` and `webspheremq` type are supported. If you want to use another type of JMS connection factory, you can create the `deploy/connectionfactory-[name].xml` file by hand (using one as template).
+* the `-t` (`--type`) option is required. It's the type of the JMS connection factory. Currently only `activemq` and `webspheremq` types are supported. If you want to use another type of JMS connection factory, you can create the `deploy/connectionfactory-[name].xml` file by hand (using one as template).
 * the `--url` option is required. It's the URL used by the JMS connection factory to connect to the broker. If the type is `activemq`, the URL looks like `tcp://localhost:61616`. If the type is `webspheremq`, the URL looks like `host/port/queuemanager/channel`.
 * the `-u` (`--username`) option is optional (karaf by default). If the broker requires authentication, it's the username used.
 * the `-p` (`--password`) option is optional (karaf by default). If the broker requires authentication, it's the password used.
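
For example, a sketch of creating an ActiveMQ connection factory named `test` (the broker URL is illustrative):

----
karaf@root()> jms:create -t activemq --url tcp://localhost:61616 test
karaf@root()> jms:info /jms/test
----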
@@ -174,7 +174,7 @@ version  | 5.9.0
 
 You can see the JMS broker product and version.
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 ====== `jms:queues`
 
@@ -189,7 +189,7 @@ MyQueue
 
 where `/jms/test` is the name of the JMS connection factory.
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 [NOTE]
 ====
@@ -210,7 +210,7 @@ MyTopic
 
 where `/jms/test` is the name of the JMS connection factory.
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 [NOTE]
 ====
@@ -228,7 +228,7 @@ For instance, to send a message containing `Hello World` in the `MyQueue` queue,
 karaf@root()> jms:send /jms/test MyQueue "Hello World"
 ----
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 ====== `jms:consume`
 
@@ -243,7 +243,7 @@ karaf@root()> jms:consume /jms/test MyQueue
 
 If you want to consume only some messages, you can define a selector using the `-s` (`--selector`) option.
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 [NOTE]
 ====
@@ -264,7 +264,7 @@ Messages Count
 8
 ----
 
-If the JMS broker requires an authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
 
 ====== `jms:browse`
 
@@ -328,4 +328,4 @@ The `Connectionfactories` attribute provides the list of all JMS connection fact
 * `TabularData browse(connectionFactory, queue, selector, username, password)` browses a JMS queue and provides a table of JMS messages.
 * `send(connectionFactory, queue, content, replyTo, username, password)` sends a JMS message to a target queue.
 * `int consume(connectionFactory, queue, selector, username, password)` consumes JMS messages from a JMS queue.
-* `int move(connectionFactory, source, destination, selector, username, password)` moves messages from a JMS queue to another.
\ No newline at end of file
+* `int move(connectionFactory, source, destination, selector, username, password)` moves messages from a JMS queue to another.
diff --git a/manual/src/main/asciidoc/user-guide/jta.adoc b/manual/src/main/asciidoc/user-guide/jta.adoc
index eafd7df..89ddb2c 100644
--- a/manual/src/main/asciidoc/user-guide/jta.adoc
+++ b/manual/src/main/asciidoc/user-guide/jta.adoc
@@ -28,7 +28,7 @@ However, the `transaction` feature is installed (as a transitive dependency) whe
 ===== Apache Aries Transaction and ObjectWeb HOWL
 
 The `transaction` feature uses Apache Aries and ObjectWeb HOWL. Apache Aries Transaction "exposes" the transaction
-manager as OSGi service. The actual implementation of the transaction manager is ObjectWeb HOWL.
+manager as an OSGi service. The actual implementation of the transaction manager is ObjectWeb HOWL.
 
 ObjectWeb HOWL is a logger implementation providing features required by the ObjectWeb JOTM project, with a public API
 that is generally usable by any Transaction Manager.
diff --git a/manual/src/main/asciidoc/user-guide/kar.adoc b/manual/src/main/asciidoc/user-guide/kar.adoc
index fca3caf..3d40718 100644
--- a/manual/src/main/asciidoc/user-guide/kar.adoc
+++ b/manual/src/main/asciidoc/user-guide/kar.adoc
@@ -16,17 +16,15 @@
 
 As described in the link:provisioning[Provisioning section], Apache Karaf features describe applications.
 
-A feature defines different resources to resolve using URL (for instance, bundles URLs, or configuration files URLs).
+A feature defines different resources to resolve using URLs (for instance, bundles URLs, or configuration files URLs).
 As described in the [Artifacts repositories and URLs section|urls], Apache Karaf looks for artifacts (bundles,
 configuration files, ...) in the artifact repositories.
-Apache Karaf may require to download artifacts from remote repositories.
+Apache Karaf may have to download artifacts from remote repositories.
 
 Apache Karaf provides a special type of artifact that packages a features XML and all resources described in the features
 of this XML. This artifact is named a KAR (KAraf aRchive).
 
-A KAR file is a zip archive containing the
-
-Basically, the kar format is a jar (so a zip file) which contains a set of feature descriptor and bundle jar files.
+A KAR file is essentially a jar (so a zip file) which contains a set of feature descriptors and bundle jar files.
 
 A KAR file contains a `repository` folder containing:
 
@@ -104,13 +102,12 @@ Apache Karaf provides a Maven plugin: `karaf-maven-plugin`.
 The Apache Karaf Maven plugin provides the `kar` goal.
 
 The `kar` goal does:
+
 . Reads all features specified in the features XML.
 . For each feature described in the features XML, the goal resolves the bundles described in the feature.
 . The goal finally packages the features XML, and the resolved bundles in a zip file.
 
-For instance, the following Maven POM create `my-kar.kar`
-
-For instance, you can use the following POM to create a kar:
+For instance, you can use the following POM to create `my-kar.kar`:
 
 ----
 <?xml version="1.0" encoding="UTF-8"?>
@@ -143,7 +140,7 @@ To create the KAR file, simply type:
 ~$ mvn install
 ----
 
-Uou will have your kar in the `target` directory.
+The kar will be installed in the `target` directory.
 
 ==== Commands
 
@@ -222,7 +219,7 @@ is supported by the `kar:install` command:
 karaf@root()> kar:install file:/tmp/my-kar-1.0-SNAPSHOT.kar
 ----
 
-The KAR file is uncompressed and populated the `KARAF_BASE/system` folder.
+The KAR file is uncompressed and used to populate the `KARAF_BASE/system` folder.
 
 The Apache Karaf KAR service is looking for features XML files in the KAR file, registers the features XML and automatically
 installs all features described in the features repositories present in the KAR file.
@@ -231,7 +228,7 @@ Optionally, you can control if the bundles should be automatically started or no
 
 ===== `kar:uninstall`
 
-The `kar:uninstall` command uninstall a KAR file (identified by a name).
+The `kar:uninstall` command uninstalls a KAR file (identified by a name).
 
 By uninstall, it means that:
 
@@ -290,7 +287,7 @@ noAutoStartBundles=false
 #karStorage=${karaf.data}/kar
 ----
 
-By default, when the KAR deployer install features, by default, it refresh the bundles already installed.
+By default, when the KAR deployer installs features, it refreshes the bundles already installed.
 You can disable the automatic bundles refresh by setting the `noAutoRefreshBundles` property to `false`.
 
 ==== JMX KarMBean
diff --git a/manual/src/main/asciidoc/user-guide/log.adoc b/manual/src/main/asciidoc/user-guide/log.adoc
index 2105a62..21e5da7 100644
--- a/manual/src/main/asciidoc/user-guide/log.adoc
+++ b/manual/src/main/asciidoc/user-guide/log.adoc
@@ -25,7 +25,7 @@ It supports:
 * the SLF4J framework
 * the native Java Util Logging framework
 
-It means that the applications can use any logging framework, Apache Karaf will use the central log system to manage the
+It means that applications can use any logging framework, Apache Karaf will use the central log system to manage the
 loggers, appenders, etc.
 
 ==== Configuration files
@@ -34,7 +34,7 @@ The initial log configuration is loaded from `etc/org.ops4j.pax.logging.cfg`.
 
 This file is a link:http://logging.apache.org/log4j/1.2/manual.html[standard Log4j configuration file].
 
-You find the different Log4j element:
+You find the different Log4j elements:
 
 * loggers
 * appenders
@@ -107,7 +107,7 @@ To enable it, you have to add the `stdout` appender to the `rootLogger`:
 log4j.rootLogger=INFO, out, stdout, osgi:*
 ----
 
-The `out` appender is the default one. It's rolling file appender that maintain and rotate 10 log files of 1MB each.
+The `out` appender is the default one. It's a rolling file appender that maintains and rotates 10 log files of 1MB each.
 The log files are located in `data/log/karaf.log` by default.
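+
+For instance, a sketch (reusing the property names of the default `out` appender configuration in `etc/org.ops4j.pax.logging.cfg`) to keep more and larger log files:
+
+----
+log4j.appender.out.maxFileSize=10MB
+log4j.appender.out.maxBackupIndex=20
+----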
 
 The `sift` appender is not enabled by default. This appender allows you to have one log file per deployed bundle.
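+
+To try it, a sketch similar to the `stdout` example above simply adds `sift` to the `rootLogger`:
+
+----
+log4j.rootLogger=INFO, out, sift, osgi:*
+----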
@@ -163,8 +163,8 @@ A default configuration in `etc/log4j2.xml` could be:
 ==== `karaf.log.console` property
 
 Before Karaf starts _proper_ logging facilities (pax-logging), it may configure `java.util.logging`. Standard
-Java logging is used initially by `Main` class and `org.apache.karaf.main.lock.Lock` implementations.
-In order to configure logging level, please set system property `karaf.log.console` to one of standard JUL
+Java logging is used initially by the `Main` class and `org.apache.karaf.main.lock.Lock` implementations.
+In order to configure the logging level, please set the system property `karaf.log.console` to one of the standard JUL
 levels:
 
 * `SEVERE` (highest value)
@@ -243,7 +243,7 @@ karaf@root()> log:display -n 5
 2015-07-01 06:53:24,501 | INFO  | FelixStartLevel  | RegionsPersistenceImpl           | 78 - org.apache.karaf.region.persist - 4.0.0 | Loading region digraph persistence
 ----
 
-You can also limit the number of entries stored and retain using the `size` property in `etc/org.apache.karaf.log.cfg` file:
+You can also limit the number of entries stored and retained using the `size` property in the `etc/org.apache.karaf.log.cfg` file:
 
 ----
 #
@@ -257,7 +257,7 @@ size = 500
 By default, each log level is displayed with a different color: ERROR/FATAL are in red, DEBUG in purple, INFO in cyan, etc.
 You can disable the coloring using the `--no-color` option.
 
-The log entries format pattern doesn't use the conversion pattern define in `etc/org.ops4j.pax.logging.cfg` file.
+The log entries format pattern doesn't use the conversion pattern defined in the `etc/org.ops4j.pax.logging.cfg` file.
 By default, it uses the `pattern` property defined in `etc/org.apache.karaf.log.cfg`.
 
 ----
@@ -379,7 +379,7 @@ By it also accepts the DEFAULT special keyword.
 
 The purpose of the DEFAULT keyword is to delete the current level of the logger (and only the level, the other properties
 like appender are not deleted)
-in order to use the level of the logger parent (logger are hierarchical).
+in order to use the level of the logger parent (loggers are hierarchical).
 
 For instance, you have defined the following loggers (in `etc/org.ops4j.pax.logging.cfg` file):
 
@@ -419,7 +419,7 @@ my.logger.custom=appender2
 
 It means that, at runtime, the `my.logger.custom` logger uses the level of its parent `my.logger`, so `INFO`.
 
-Now, if we use DEFAULT keyword with the `my.logger` logger:
+Now, if we use the DEFAULT keyword with the `my.logger` logger:
 
 ----
 karaf@root()> log:set DEFAULT my.logger
@@ -435,7 +435,7 @@ my.logger.custom=appender2
 
 So, both `my.logger.custom` and `my.logger` use the log level of the parent `rootLogger`.
 
-It's not possible to use DEFAULT keyword with the `rootLogger` and it doesn't have parent.
+It's not possible to use the DEFAULT keyword with the `rootLogger` as it doesn't have a parent.
 
 ===== `log:tail`
 
@@ -474,7 +474,7 @@ The LogMBean object name is `org.apache.karaf:type=log,name=*`.
 
 ===== Filters
 
-You can use filters on appender. Filters allow log events to be evaluated to determine if or how they should be published.
+You can use filters on an appender. Filters allow log events to be evaluated to determine if or how they should be published.
 
 Log4j provides ready to use filters:
 
@@ -543,11 +543,11 @@ log4j.appender.jms=org.apache.log4j.net.JMSAppender
 
 ===== Error handlers
 
-Sometime, appenders can fail. For instance, a RollingFileAppender tries to write on the filesystem but the filesystem is full, or a JMS appender tries to send a message but the JMS broker is not there.
+Sometimes, appenders can fail. For instance, a RollingFileAppender tries to write to the filesystem but the filesystem is full, or a JMS appender tries to send a message but the JMS broker is not available.
 
-As log can be very critical to you, you have to be inform that the log appender failed.
+As logs can be critical to you, you have to be informed when a log appender fails.
 
-It's the purpose of the error handlers. Appenders may delegate their error handling to error handlers, giving a chance to react to this appender errors.
+This is the purpose of the error handlers. Appenders may delegate their error handling to error handlers, giving a chance to react to the errors of the appender.
 
 You have two error handlers available:
 
diff --git a/manual/src/main/asciidoc/user-guide/monitoring.adoc b/manual/src/main/asciidoc/user-guide/monitoring.adoc
index 34a94ab..fc65480 100644
--- a/manual/src/main/asciidoc/user-guide/monitoring.adoc
+++ b/manual/src/main/asciidoc/user-guide/monitoring.adoc
@@ -18,7 +18,7 @@ Apache Karaf provides a complete JMX layer.
 
 You can remotely connect to a running Apache Karaf instance using any JMX client (like jconsole).
 
-The Apache Karaf features provide a set of MBeans, dedicating for the monitoring and management.
+The Apache Karaf features provide a set of MBeans dedicated to monitoring and management.
 
 ==== Connecting
 
@@ -32,7 +32,7 @@ The JMX URL to use by default is:
 service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root
 ----
 
-If don't need the remote JMX at all, users can remove
+If you don't need remote JMX at all, you can remove
 
 ----
 -Dcom.sun.management.jmxremote
@@ -40,8 +40,8 @@ If don't need the remote JMX at all, users can remove
 
 from bin/karaf|bin/karaf.bat to avoid opening the RMI listening port.
 
-You have to provide an username and password to access to the JMX layer.
-The JMX layer user the security framework, and so, by default, it uses the users defined in `etc/users.properties`.
+You have to provide a username and password to access the JMX layer.
+The JMX layer uses the security framework, and so, by default, it uses the users defined in `etc/users.properties`.
 
 You can change the port numbers of the JMX layer in the `etc/org.apache.karaf.management.cfg` configuration file.
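+
+For instance, the relevant properties look like the following sketch (typical defaults, check your own `etc/org.apache.karaf.management.cfg` for the exact values):
+
+----
+rmiRegistryPort = 1099
+rmiServerPort = 44444
+----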
 
diff --git a/manual/src/main/asciidoc/user-guide/obr.adoc b/manual/src/main/asciidoc/user-guide/obr.adoc
index 94f18fa..e3f12c8 100644
--- a/manual/src/main/asciidoc/user-guide/obr.adoc
+++ b/manual/src/main/asciidoc/user-guide/obr.adoc
@@ -30,7 +30,7 @@ OBR is an optional Apache Karaf feature. You have to install the `obr` feature t
 karaf@root()> feature:install obr
 ----
 
-The OBR feature turns Apache Karaf as an OBR client. It means that Apache Karaf can use a OBR repository to the installation
+The OBR feature turns Apache Karaf into an OBR client. It means that Apache Karaf can use an OBR repository for the installation
 of the bundles, and during the installation of the features.
 
 The installation of the `obr` feature adds to Apache Karaf:
@@ -86,7 +86,7 @@ Index | OBR URL
 
 ===== `obr:url-refresh`
 
-The `obr:url-refresh` command refresh an OBR repository (reloading the URL).
+The `obr:url-refresh` command refreshes an OBR repository (reloading the URL).
 
 The OBR service doesn't pick up changes made to an OBR repository's `repository.xml` on the fly. You have to
 reload the `repository.xml` URL to take the changes into account. That's the purpose of the `obr:url-refresh` command.
diff --git a/manual/src/main/asciidoc/user-guide/os-integration.adoc b/manual/src/main/asciidoc/user-guide/os-integration.adoc
index 9af02e9..b7189de 100644
--- a/manual/src/main/asciidoc/user-guide/os-integration.adoc
+++ b/manual/src/main/asciidoc/user-guide/os-integration.adoc
@@ -28,7 +28,7 @@ The above methods allow you to directly integrate Apache Karaf:
 
 ==== Service Wrapper
 
-The "Service Wrapper" correctly handles "user's log outs" under Windows, service dependencies, and the ability to run services which interact with the desktop.
+The "Service Wrapper" correctly handles "user log outs" under Windows, service dependencies, and the ability to run services which interact with the desktop.
 
 It also includes advanced fault detection software which monitors an application.
 The "Service Wrapper" is able to detect crashes, freezes, out of memory and other exception events, then automatically react by restarting Apache Karaf with a minimum of delay.
@@ -84,7 +84,7 @@ OPTIONS
                 (defaults to )
 ----
 
-The `wrapper:install` command detects the running Operating Service and provide the service/daemon ready to be integrated in your system.
+The `wrapper:install` command detects the running Operating System and provides the service/daemon ready to be integrated in your system.
 
 For instance, on a Ubuntu/Debian Linux system:
 
@@ -138,7 +138,7 @@ ln -s /opt/apache-karaf-4.0.0/bin/karaf-service /etc/init.d/
 update-rc.d karaf-service defaults
 ----
 
-Karaf also supports systemd service, so you can use systemctl instead of SystemV based service:
+Karaf also supports systemd, so you can use systemctl instead of a SystemV based service:
 
 ----
 systemctl enable /opt/apache-karaf-4.0.2/bin/karaf.service
@@ -148,7 +148,7 @@ This will enable Karaf at system boot.
 
 ===== Uninstall
 
-The `wrapper:install` provides the system commands to perform to uninstall the service/daemon).
+The `wrapper:install` command provides the system commands to run to uninstall the service/daemon.
 
 For instance, on Ubuntu/Debian, to uninstall the Apache Karaf service, you have to remove the `karaf-service` script from the runlevel scripts:
 
@@ -170,7 +170,7 @@ karaf@root()> feature:uninstall service-wrapper
 
 ===== Note for MacOS users
 
-On MacOS you can install the service for an user or for the system.
+On MacOS you can install the service for a user or for the system.
 
 If you want to add `bin/org.apache.karaf.KARAF` as a user service, move this file into `~/Library/LaunchAgents/`:
 
@@ -218,7 +218,7 @@ The `bin/setenv` Unix script (`bin\setenv.bat` on Windows) is not used by the Ap
 
 To configure Apache Karaf started by the Service Wrapper, you have to tune the `etc/karaf-wrapper.conf` file. If you provided the `name` option to the `wrapper:install` command, the file is `etc/karaf-yourname.conf`.
 
-In this file, you can configure the different environment variables used by Apache Karaf. The Service Wrapper installer automatically populate these variables for you during the installation (using `wrapper:install` command).
+In this file, you can configure the different environment variables used by Apache Karaf. The Service Wrapper installer automatically populates these variables for you during the installation (using the `wrapper:install` command).
 For instance:
 
 * `set.default.JAVA_HOME` is the `JAVA_HOME` used to start Apache Karaf (populated during Service Wrapper installation).
@@ -237,9 +237,9 @@ For instance:
 * `wrapper.ntservice.name` is Windows service specific and defines the Windows service name. It's set to the `name` option of the `wrapper:install` command, or `karaf` by default.
 * `wrapper.ntservice.displayname` is Windows service specific and defines the Windows service display name. It's set to the `display` option of the `wrapper:install` command, or `karaf` by default.
 * `wrapper.ntservice.description` is Windows service specific and defines the Windows service description. It's set to the `description` option of the `wrapper:install` command, or empty by default.
-* `wrapper.ntservice.starttype` is Windows service specific and defines if the Windows service is started automatically with the service, or just on demand. It's set to `AUTO_START` by default, and could be switch to `DEMAND_START`.
+* `wrapper.ntservice.starttype` is Windows service specific and defines if the Windows service is started automatically at system boot, or just on demand. It's set to `AUTO_START` by default, and could be switched to `DEMAND_START`.
 
-This is a example of generated `etc/karaf-wrapper.conf` file:
+This is an example of the generated `etc/karaf-wrapper.conf` file:
 
 ----
 # ------------------------------------------------------------------------
@@ -385,7 +385,7 @@ By using the "Service Script Templates", you can run Apache Karaf with the help
 
 [NOTE]
 ====
-As opposite of Service Wrapper, the templates targeting Unix system do not rely on a 3th party binaries
+As opposed to the Service Wrapper, the templates targeting Unix systems do not rely on 3rd party binaries.
 ====
 
 You can find these templates under the bin/contrib directory.
@@ -394,7 +394,7 @@ You can find these templates under the bin/contrib directory.
 
 ===== Unix
 
-The karaf-service.sh utility helps you to generate ready to use scripts by automatically identify the operating system, the default init system and the template to use.
+The karaf-service.sh utility helps you to generate ready-to-use scripts by automatically identifying the operating system, the default init system and the template to use.
 
 [NOTE]
 ====
@@ -423,7 +423,7 @@ Command line option, Environment variable, Description
 
 ===== Systemd
 
-When karaf-service.sh detect Systemd, it generates three files:
+When karaf-service.sh detects Systemd, it generates three files:
 
 - a systemd unit file to manage the root Apache Karaf container
 - a systemd environment file with variables used by the root Apache Karaf container
@@ -446,7 +446,7 @@ $ systemctl enable karaf-4.service
 
 ===== SysV
 
-When karaf-service.sh detect a SysV system, it generates two files:
+When karaf-service.sh detects a SysV system, it generates two files:
 
 - an init script to manage the root Apache Karaf container
 - an environment file with variables used by the root Apache Karaf container
@@ -470,7 +470,7 @@ To enable service startup upon boot, please consult your operating system init g
 
 ===== Solaris SMF
 
-When karaf-service.sh detect a Solaris system, it generates a single file:
+When karaf-service.sh detects a Solaris system, it generates a single file:
 
 .Example
 ....
diff --git a/manual/src/main/asciidoc/user-guide/provisioning.adoc b/manual/src/main/asciidoc/user-guide/provisioning.adoc
index 255e755..217bc14 100644
--- a/manual/src/main/asciidoc/user-guide/provisioning.adoc
+++ b/manual/src/main/asciidoc/user-guide/provisioning.adoc
@@ -18,7 +18,7 @@ Apache Karaf supports the provisioning of applications and modules using the con
 
 ==== Application
 
-By provisioning application, it means install all modules, configuration, and transitive applications.
+Provisioning an application means installing all its modules, configuration, and transitive applications.
 
 ==== OSGi
 
@@ -26,14 +26,14 @@ It natively supports the deployment of OSGi applications.
 
 An OSGi application is a set of OSGi bundles. An OSGi bundle is a regular jar file, with additional metadata in the jar MANIFEST.
 
-In OSGi, a bundle can depend to other bundles. So, it means that to deploy an OSGi application, most of the time, you have
+In OSGi, a bundle can depend on other bundles. This means that to deploy an OSGi application, most of the time you have
 to first deploy a lot of other bundles required by the application.
 
 So, you have to find these bundles first and install them. Again, these "dependency" bundles may require other bundles
 to satisfy their own dependencies.
 
 Moreover, typically, an application requires configuration (see the [Configuration section|configuration] of the user guide).
-So, before being able to start your application, in addition of the dependency bundles, you have to create or deploy the
+So, before being able to start your application, in addition to the dependency bundles, you have to create or deploy the
 configuration.
 
 As we can see, the provisioning of an application can be very long and tedious.
@@ -54,9 +54,9 @@ A feature describes an application as:
 * optionally a set of dependency features
 
 When you install a feature, Apache Karaf installs all resources described in the feature. It means that it will
-automatically resolves and installs all bundles, configurations, and dependency features described in the feature.
+automatically resolve and install all bundles, configuration, and dependency features described in the feature.
 
-The feature resolver checks the service requirements, and install the bundles providing the services matching the requirements.
+The feature resolver checks the service requirements, and installs the bundles providing the services matching the requirements.
 The default mode enables this behavior only for "new style" features repositories (basically, the features repositories XML with
 schema equal to or greater than 1.3.0). It doesn't apply to "old style" features repositories (coming from Karaf 2 or 3).
 
@@ -72,7 +72,7 @@ The possible values are:
 * default: service requirements are ignored for "old style" features repositories, and enabled for "new style" features repositories.
 * enforce: service requirements are always verified, for "old style" and "new style" features repositories.
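+
+For instance, a sketch, assuming the `serviceRequirements` property of `etc/org.apache.karaf.features.cfg` controls this mode, to always verify service requirements:
+
+----
+serviceRequirements=enforce
+----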
 
-Additionally, a feature can also define requirements. In that case, Karaf can automatically additional bundles or features
+Additionally, a feature can also define requirements. In that case, Karaf can automatically install additional bundles or features
 providing the capabilities to satisfy the requirements.
 
 A feature has a complete lifecycle: install, start, stop, update, uninstall.
@@ -103,21 +103,21 @@ We can note that the features XML has a schema. Take a look on [Features XML Sch
 for details.
 The `feature1` feature is available in version `1.0.0`, and contains two bundles. The `<bundle/>` element contains a URL
 to the bundle artifact (see [Artifacts repositories and URLs section|urls] for details). If you install the `feature1` feature
-(using `feature:install` or the FeatureMBean as described later), Apache Karaf will automatically installs the two bundles
+(using `feature:install` or the FeatureMBean as described later), Apache Karaf will automatically install the two bundles
 described.
 The `feature2` feature is available in version `1.1.0`, and contains a reference to the `feature1` feature and a bundle.
 The `<feature/>` element contains the name of a feature. A specific feature version can be defined using the `version`
 attribute to the `<feature/>` element (`<feature version="1.0.0">feature1</feature>`). If the `version` attribute is
 not specified, Apache Karaf will install the latest version available. If you install the `feature2` feature (using `feature:install`
-or the FeatureMBean as described later), Apache Karaf will automatically installs `feature1` (if it's not already installed)
+or the FeatureMBean as described later), Apache Karaf will automatically install `feature1` (if it's not already installed)
 and the bundle.
 
 A feature repository is registered using the URL to the features XML file.
 
 The features state is stored in the Apache Karaf cache (in the `KARAF_DATA` folder). You can restart Apache Karaf, the
 previously installed features remain installed and available after restart.
-If you do a clean restart or you delete the Apache Karaf cache (delete the `KARAF_DATA` folder), all previously features
-repositories registered and features installed will be lost: you will have to register the features repositories and install
+If you do a clean restart or you delete the Apache Karaf cache (delete the `KARAF_DATA` folder), all previously registered features
+repositories and installed features will be lost: you will have to register the features repositories and install
 features by hand again.
 To prevent this behaviour, you can specify features as boot features.
 
@@ -171,7 +171,7 @@ would overide pax-logging-service 1.8.3 but not 1.8.6 or 1.7.0.
 By default, the bundles deployed by a feature will have a start-level equal to the value defined in the `etc/config.properties`
 configuration file, in the `karaf.startlevel.bundle` property.
 
-This value can be "overrided" by the `start-level` attribute of the `<bundle/>` element, in the features XML.
+This value can be "overridden" by the `start-level` attribute of the `<bundle/>` element, in the features XML.
 
 ----
   <feature name="my-project" version="1.0.0">
@@ -180,7 +180,7 @@ This value can be "overrided" by the `start-level` attribute of the `<bundle/>`
   </feature>
 ----
 
-The start-level attribute insure that the `myproject-dao` bundle is started before the bundles that use it.
+The start-level attribute ensures that the `myproject-dao` bundle is started before the bundles that use it.
 
 Instead of using start-level, a better solution is to simply let the OSGi framework know what your dependencies are by
 defining the packages or services you need. It is more robust than setting start levels.
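+
+For instance, a minimal sketch (the package name and version range are placeholders) of a bundle MANIFEST entry declaring such a dependency:
+
+----
+Import-Package: org.example.myproject.dao;version="[1.0,2)"
+----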
@@ -209,7 +209,7 @@ This information can be used by resolvers to compute the full list of bundles to
 
 ==== Dependent features
 
-A feature can depend to a set of other features:
+A feature can depend on a set of other features:
 
 ----
   <feature name="my-project" version="1.0.0">
@@ -240,7 +240,7 @@ To specify an exact version, use a closed range such as `[3.1,3.1]`.
 
 ===== Feature prerequisites
 
-Prerequisite feature is special kind of dependency. If you will add `prerequisite` attribute to dependant feature tag then it will force installation and also activation of bundles in dependant feature before installation of actual feature. This may be handy in case if bundles enlisted in given feature are not using pre installed URL such `wrap` or `war`.
+A prerequisite feature is a special kind of dependency. If you add the `prerequisite` attribute to the dependent feature tag, it will force the installation and activation of the bundles in the dependent feature before the installation of the actual feature. This may be handy when bundles enlisted in a given feature rely on URL handlers (such as `wrap` or `war`) that are not pre-installed.
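+
+For instance, a sketch (the feature content is illustrative) declaring the `wrap` feature as a prerequisite:
+
+----
+  <feature name="my-project" version="1.0.0">
+    <feature prerequisite="true">wrap</feature>
+    <bundle>wrap:mvn:commons-lang/commons-lang/2.6</bundle>
+  </feature>
+----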
 
 ==== Feature configurations
 
@@ -297,13 +297,13 @@ For instance, a feature can contain:
 
 The requirement specifies that the feature will work only if the JDK version is not 1.8 (so basically 1.7).
 
-The features resolver is also able to refresh the bundles when an optional dependency is satisfy, rewiring the optional import.
+The features resolver is also able to refresh the bundles when an optional dependency is satisfied, rewiring the optional import.
 
 ==== Commands
 
 ===== `feature:repo-list`
 
-The `feature:repo-list` command lists all registered features repository:
+The `feature:repo-list` command lists all registered feature repositories:
 
 ----
 karaf@root()> feature:repo-list
@@ -588,7 +588,7 @@ Done.
 ----
 
 If a feature contains a bundle which is already installed, by default, Apache Karaf will refresh this bundle.
-Sometime, this refresh can cause issue to other running applications. If you want to disable the auto-refresh of installed
+Sometimes, this refresh can cause an issue with other running applications. If you want to disable the auto-refresh of installed
 bundles, you can use the `-r` option:
 
 ----
@@ -615,7 +615,7 @@ When starting a feature, all bundles are started, and so, the feature also expos
 
 ===== `feature:stop`
 
-You can also stop a feature: it means that all services provided by the feature will be stop and removed from the service registry. However, the packages
+You can also stop a feature: it means that all services provided by the feature will be stopped and removed from the service registry. However, the packages
 are still available for the wiring (the bundles are in resolved state).
 
 ===== `feature:uninstall`
@@ -638,6 +638,7 @@ You can "hot deploy" a features XML by dropping the file directly in the `deploy
 Apache Karaf provides a features deployer.
 
 When you drop a features XML in the deploy folder, the features deployer does:
+
 * register the features XML as a features repository
 * automatically install the features with the `install` attribute set to "auto"
 
diff --git a/manual/src/main/asciidoc/user-guide/remote.adoc b/manual/src/main/asciidoc/user-guide/remote.adoc
index 4deed60..4e8522b 100644
--- a/manual/src/main/asciidoc/user-guide/remote.adoc
+++ b/manual/src/main/asciidoc/user-guide/remote.adoc
@@ -27,7 +27,7 @@ This remote console provides all the features of the "local" console, and gives
 container and services running inside of it. Like the "local" console, the remote console is secured by an RBAC mechanism
 (see the link:security[Security section] of the user guide for details).
 
-In addition of the remote console, Apache Karaf also provides a remote filesystem. This remote filesystem can be accessed
+In addition to the remote console, Apache Karaf also provides a remote filesystem. This remote filesystem can be accessed
 using a SCP/SFTP client.
 
 ===== Configuration
@@ -88,9 +88,9 @@ hostKey = ${karaf.etc}/host.key
 #hostKeyPub = ${karaf.etc}/host.key.pub
 
 #
-# Role name used for SSH access authorization
+# sshRole defines the role required to access the console through ssh
 #
-# sshRole = admin
+# sshRole = ssh
 
 #
 # Defines if the SFTP system is enabled or not in the SSH server
@@ -126,17 +126,17 @@ The `etc/org.apache.karaf.shell.cfg` configuration file contains different prope
 
 * `sshPort` is the port number where the SSHd server is bound (by default, it's 8101).
 * `sshHost` is the address of the network interface where the SSHd server is bound. The default value is 0.0.0.0,
- meaning that the SSHd server is bound on all network interfaces. You can bind on a target interface providing the IP
+ meaning that the SSHd server is bound on all network interfaces. You can bind on a target interface by providing the IP
  address of the network interface.
 * `hostKey` is the location of the `host.key` file. By default, it uses `etc/host.key`. This file stores the 
  private key of the SSHd server.
 * `sshRole` is the default role used for SSH access. See the [Security section|security] of this user guide for details.
-* `sftpEnabled` controls if the SSH server start the SFTP system or not. When enabled, Karaf SSHd supports SFTP, meaning
- that you can remotely access to the Karaf filesystem with any sftp clients.
+* `sftpEnabled` controls if the SSH server starts the SFTP system or not. When enabled, Karaf SSHd supports SFTP, meaning
+ that you can remotely access the Karaf filesystem with any sftp client.
 * `keySize` is the key size used by the SSHd server. The possible values are 1024, 2048, 3072, or 4096. The default
- value is 1024.
-* `algorithm` is the host key algorithm used by the SSHd server. The possible values are DSA or RSA. The default
- value is DSA.
+ value is 2048.
+* `algorithm` is the host key algorithm used by the SSHd server. The possible values are DSA, EC or RSA. The default
+ value is RSA.
 
 The SSHd server configuration can be changed at runtime:
 
@@ -156,9 +156,9 @@ The Apache Karaf SSHd server supports key/agent authentication and password auth
 
 ====== System native clients
 
-The Apache Karaf SSHd server is a pure SSHd server, similar to OpenSSH daemon.
+The Apache Karaf SSHd server is a pure SSHd server, similar to an OpenSSH daemon.
 
-It means that you can use directly a SSH client from your system.
+It means that you can directly use an SSH client from your system.
 
 For instance, on Unix, you can directly use OpenSSH:
 
@@ -187,7 +187,7 @@ karaf@root()>
 
 On Windows, you can use Putty, Kitty, etc.
 
-If you don't have SSH client installed on your machine, you can use Apache Karaf client.
+If you don't have an SSH client installed on your machine, you can use the Apache Karaf client.
 
 ====== `ssh:ssh` command
 
@@ -260,7 +260,7 @@ Connecting to host localhost on port 8101
 Connected
 ----
 
-As the `ssh:ssh` command is a pure SSH client, so it means that you can connect to a Unix OpenSSH daemon:
+As the `ssh:ssh` command is a pure SSH client, it means that you can connect to a Unix OpenSSH daemon:
 
 ----
 karaf@root()> ssh:ssh user@localhost
@@ -275,9 +275,9 @@ user@server:~$
 
 ====== Apache Karaf client
 
-The `ssh:ssh` command requires to be run into a running Apache Karaf console.
+The `ssh:ssh` command can only be run from a running Apache Karaf console.
 
-For commodity, the `ssh:ssh` command is "wrapped" as a standalone client: the `bin/client` Unix script (`bin\client.bat` on Windows).
+For convenience, the `ssh:ssh` command is "wrapped" as a standalone client: the `bin/client` Unix script (`bin\client.bat` on Windows).
 
 ----
 bin/client --help
@@ -295,7 +295,7 @@ Apache Karaf client
 If no commands are specified, the client will be put in an interactive mode
 ----
 
-For instance, to connect to local Apache Karaf instance (on the default SSHd server 8101 port), you can directly use
+For instance, to connect to a local Apache Karaf instance (on the default SSHd server 8101 port), you can directly use
 `bin/client` Unix script (`bin\client.bat` on Windows) without any argument or option:
 
 ----
@@ -348,13 +348,13 @@ Last login: Tue Dec  3 18:18:31 2013 from localhost
 
 When you are connected to a remote Apache Karaf console, you can logout using:
 
-* using CTRL-D key binding. Note that CTRL-D just logout from the remote console in this case, it doesn't shutdown
+* using the CTRL-D key binding. Note that CTRL-D just logs out from the remote console in this case; it doesn't shut down
  the Apache Karaf instance (as CTRL-D does when used on a local console).
 * using `shell:logout` command (or simply `logout`)
 
-===== Filsystem clients
+===== Filesystem clients
 
-Apache Karaf SSHd server also provides complete fileystem access via SSH. For security reason, the available filesystem
+The Apache Karaf SSHd server also provides complete filesystem access via SSH. For security reasons, the available filesystem
 is limited to `KARAF_BASE` directory.
 
 You can use this remote filesystem with any SCP/SFTP compliant client.
@@ -381,7 +381,7 @@ On Windows, you can use WinSCP to access the Apache Karaf filesystem.
 
 It's probably easier to use an SFTP compliant client.
 
-For instance, on Unix system, you can use `lftp` or `ncftp`:
+For instance, on a Unix system, you can use `lftp` or `ncftp`:
 
 ----
 $ lftp
@@ -404,13 +404,13 @@ drwxr-xr-x   1 jbonofre jbonofre     4096 Dec  3 12:51 system
 lftp karaf@localhost:/>
 ----
 
-You can also use graphic client like `filezilla`, `gftp`, `nautilus`, etc.
+You can also use a graphical client like `filezilla`, `gftp`, `nautilus`, etc.
 
 On Windows, you can use `filezilla`, `WinSCP`, etc.
 
 ====== Apache Maven
 
-Apache Karaf `system` folder is the Karaf repository, that use a Maven directory structure. It's where Apache Karaf
+The Apache Karaf `system` folder is the Karaf repository, which uses a Maven directory structure. It's where Apache Karaf
 looks for the artifacts (bundles, features, kars, etc).
 
 Using Apache Maven, you can populate the `system` folder using the `deploy:deploy-file` goal.
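+
+For instance, a sketch (the artifact coordinates and the Karaf location are placeholders) that deploys a jar into a local `system` folder:
+
+----
+mvn deploy:deploy-file -Dfile=my-bundle-1.0.jar \
+  -DgroupId=org.example -DartifactId=my-bundle -Dversion=1.0 -Dpackaging=jar \
+  -Durl=file:///opt/apache-karaf/system
+----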
diff --git a/manual/src/main/asciidoc/user-guide/scheduler.adoc b/manual/src/main/asciidoc/user-guide/scheduler.adoc
index cb23c18..c06ea4e 100644
--- a/manual/src/main/asciidoc/user-guide/scheduler.adoc
+++ b/manual/src/main/asciidoc/user-guide/scheduler.adoc
@@ -297,4 +297,4 @@ org.quartz.jobStore.dataSource=scheduler
 org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
 ----
 
-Then several Karaf instances scheduler will share the same JDBC job store and can work in a "cluster" way.
+Then the schedulers of several Karaf instances will share the same JDBC job store and can work in a "clustered" way.
diff --git a/manual/src/main/asciidoc/user-guide/security.adoc b/manual/src/main/asciidoc/user-guide/security.adoc
index 4533df8..e6005eb 100644
--- a/manual/src/main/asciidoc/user-guide/security.adoc
+++ b/manual/src/main/asciidoc/user-guide/security.adoc
@@ -34,7 +34,7 @@ Apache Karaf is able to manage multiple realms. A realm contains the definition
 authentication and/or authorization on this realm. The login modules define the authentication and authorization for
 the realm.
 
-The `jaas:realm-list` command list the current defined realms:
+The `jaas:realm-list` command lists the currently defined realms:
 
 ----
 karaf@root()> jaas:realm-list
@@ -113,7 +113,7 @@ The default password is `karaf`.
 
 The `karaf` user is a member of one group: the `admingroup`.
 
-A group is always prefixed by `_g_:`. An entry without this prefix is an user.
+A group is always prefixed by `_g_:`. An entry without this prefix is a user.
 
 A group defines a set of roles. By default, the `admingroup` defines `group`, `admin`, `manager`, and `viewer`
 roles.
@@ -128,7 +128,7 @@ The `jaas:*` commands manage the realms, users, groups, roles in the console.
 
 We already used the `jaas:realm-list` previously in this section.
 
-The `jaas:realm-list` command list the realm and the login modules for each realm:
+The `jaas:realm-list` command lists the realms and the login modules for each realm:
 
 ----
 karaf@root()> jaas:realm-list
@@ -201,7 +201,7 @@ On the other hand, if you want to rollback the user addition, you can use the `j
 
 ====== `jaas:user-delete`
 
-The `jaas:user-delete` command deletes an user from the currently edited login module:
+The `jaas:user-delete` command deletes a user from the currently edited login module:
 
 ----
 karaf@root()> jaas:user-delete foo
@@ -222,7 +222,7 @@ karaf     | admingroup | viewer
 
 ====== `jaas:group-add`
 
-The `jaas:group-add` command assigns a group (and eventually creates the group) to an user in the currently edited login module:
+The `jaas:group-add` command assigns a group (creating the group if it doesn't already exist) to a user in the currently edited login module:
 
 ----
 karaf@root()> jaas:group-add karaf mygroup
@@ -230,7 +230,7 @@ karaf@root()> jaas:group-add karaf mygroup
 
 ====== `jaas:group-delete`
 
-The `jaas:group-delete` command removes an user from a group in the currently edited login module:
+The `jaas:group-delete` command removes a user from a group in the currently edited login module:
 
 ----
 karaf@root()> jaas:group-delete karaf mygroup
@@ -333,7 +333,7 @@ encryption.encoding = hexadecimal
 
 If the `encryption.enabled` property is set to true, the password encryption is enabled.
 
-With encryption enabled, the password are encrypted at the first time an user logs in. The encrypted passwords are
+With encryption enabled, the passwords are encrypted the first time a user logs in. The encrypted passwords are
 prefixed and suffixed with `\{CRYPT\`}. To re-encrypt the password, you can reset the password in clear (in `etc/users.properties`
 file), without the `\{CRYPT\`} prefix and suffix. Apache Karaf will detect that this password is in clear (because it's not
 prefixed and suffixed with `\{CRYPT\`}) and encrypt it again.
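+
+For instance, a sketch (the hash is a placeholder) of an `etc/users.properties` entry before and after the first login:
+
+----
+# before the first login (clear password)
+karaf = karaf,_g_:admingroup
+# after the first login (encrypted password)
+karaf = {CRYPT}66bd9d1a...{CRYPT},_g_:admingroup
+----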
@@ -351,8 +351,8 @@ The `etc/org.apache.karaf.jaas.cfg` configuration file allows you to define adva
 
 For the SSH layer, Karaf supports authentication by key, allowing you to log in without providing a password.
 
-The SSH client (so bin/client provided by Karaf itself, or any ssh client like OpenSSH) uses a public/private keys pair that
-will identify himself on Karaf SSHD (server side).
+The SSH client (bin/client provided by Karaf itself, or any ssh client like OpenSSH) uses a public/private keypair to
+identify itself to the Karaf SSHD (server side).
 
 The keys allowed to connect are stored in the `etc/keys.properties` file, following this format:
 
@@ -464,7 +464,7 @@ By default, Apache Karaf defines the following commands ACLs:
  `org.apache.karaf.service.acl.*` configuration PID to the users with `admin` role. For the other configuration PID,
  the users with the `manager` role can execute `config:*` commands.
 * `etc/org.apache.karaf.command.acl.feature.cfg` configuration file defines the ACL for `feature:*` commands.
- Only the users with `admin` role can execute `feature:install` and `feature:uninstall` commands. The other `feature:*`
+ Only the users with `admin` role can execute `feature:install`, `feature:uninstall`, `feature:start`, `feature:stop` and `feature:update` commands. The other `feature:*`
  commands can be executed by any user.
 * `etc/org.apache.karaf.command.acl.jaas.cfg` configuration file defines the ACL for `jaas:*` commands.
  Only the users with `admin` role can execute `jaas:update` command. The other `jaas:*` commands can be executed by any
@@ -475,11 +475,14 @@ By default, Apache Karaf defines the following commands ACLs:
 * `etc/org.apache.karaf.command.acl.shell.cfg` configuration file defines the ACL for `shell:*` and "direct" commands.
  Only the users with `admin` role can execute `shell:edit`, `shell:exec`, `shell:new`, and `shell:java` commands.
  The other `shell:*` commands can be executed by any user.
+* `etc/org.apache.karaf.command.acl.system.cfg` configuration file defines the ACL for `system:*` commands.
+ Only the users with `admin` role can execute `system:property` and `system:shutdown` commands. Users with `manager` role can call `system:start-level` above 100, otherwise `admin` role is required. Also users with `viewer` role can obtain the current start-level.
+ The other `system:*` commands can be executed by any user.
 
 You can change these default ACLs, and add your own ACLs for additional command scopes (for instance `etc/org.apache.karaf.command.acl.cluster.cfg` for
 Apache Karaf Cellar, `etc/org.apache.karaf.command.acl.camel.cfg` from Apache Camel, ...).
 
-You can fine tuned the command RBAC support by editing the `karaf.secured.services` property in `etc/system.properties`:
+You can fine-tune the command RBAC support by editing the `karaf.secured.services` property in `etc/system.properties`:
 
 ----
 #
@@ -534,7 +537,7 @@ karaf@root()> feature:install webconsole
 
 The WebConsole doesn't support fine grained RBAC like console or JMX for now.
 
-All users with the `admin` role can logon the WebConsole and perform any operations.
+All users with the `admin` role can log on to the WebConsole and perform any operation.
 
 ==== SecurityMBean
 
@@ -565,6 +568,7 @@ policy configuration (`$JAVA_HOME/jre/lib/security/java.security`) in order to r
 While this approach works fine, it has a global effect and requires you to configure all your servers accordingly.
 
 Apache Karaf offers a simple way to configure additional security providers:
+
 * put your provider jar in `lib/ext`
 * modify the `etc/config.properties` configuration file to add the following property
 
diff --git a/manual/src/main/asciidoc/user-guide/start-stop.adoc b/manual/src/main/asciidoc/user-guide/start-stop.adoc
index b463b9b..8b49969 100644
--- a/manual/src/main/asciidoc/user-guide/start-stop.adoc
+++ b/manual/src/main/asciidoc/user-guide/start-stop.adoc
@@ -16,11 +16,11 @@
 
 ==== Start
 
-Apache Karaf supports different start mode:
+Apache Karaf supports different start modes:
 
-* the "regular" mode starts Apache Karaf in foreground, including the shell console.
-* the "server" mode starts Apache Karaf in foreground, without the shell console.
-* the "background" mode starts Apache Karaf in background.
+* the "regular" mode starts Apache Karaf in the foreground, including the shell console.
+* the "server" mode starts Apache Karaf in the foreground, without the shell console.
+* the "background" mode starts Apache Karaf in the background.
 
 You can also manage Apache Karaf as a system service (see link:wrapper[System Service] section).
 
@@ -156,7 +156,7 @@ Apache Karaf accepts environment variables:
 * `JAVA_MAX_MEM`: maximum memory for the JVM (default is 512M).
 * `JAVA_PERM_MEM`: minimum perm memory for the JVM (default is JVM default value).
 * `JAVA_MAX_PERM_MEM`: maximum perm memory for the JVM (default is JVM default value).
-* `KARAF_HOME`: the location of your Apache Karaf installation (default is found depending where you launch the startup script).
+* `KARAF_HOME`: the location of your Apache Karaf installation (default is found depending on where you launch the startup script).
 * `KARAF_BASE`: the location of your Apache Karaf base (default is `KARAF_HOME`).
 * `KARAF_DATA`: the location of your Apache Karaf data folder (default is `KARAF_BASE/data`).
 * `KARAF_ETC`: the location of your Apache Karaf etc folder (default is `KARAF_BASE/etc`).
@@ -303,15 +303,15 @@ karaf@root()>
 
 ==== Stop
 
-When you start Apache Karaf in regular mode, the `logout` command or CTRL-D key binding logout from the console and shutdown Apache Karaf.
+When you start Apache Karaf in regular mode, the `logout` command or CTRL-D key binding logs out from the console and shuts Apache Karaf down.
 
 When you start Apache Karaf in background mode (with the `bin/start` Unix script (`bin\start.bat` on Windows)), you can use the `bin/stop` Unix script (`bin\stop.bat` on Windows).
 
-More generally, you can use the `shutdown` command (on the Apache Karaf console) that work in any case.
+More generally, you can use the `shutdown` command (on the Apache Karaf console), which works in all cases.
 
 The `shutdown` command is very similar to the `shutdown` Unix command.
 
-To shutdown Apache Karaf now, you can simple using `shutdown`:
+To shut down Apache Karaf now, you can simply use `shutdown`:
 
 ----
 karaf@root()> shutdown -h
diff --git a/manual/src/main/asciidoc/user-guide/tuning.adoc b/manual/src/main/asciidoc/user-guide/tuning.adoc
index b7fd688..595544d 100644
--- a/manual/src/main/asciidoc/user-guide/tuning.adoc
+++ b/manual/src/main/asciidoc/user-guide/tuning.adoc
@@ -18,7 +18,7 @@
 
 Like any Java application, Apache Karaf uses a JVM. An important feature of the JVM is the Garbage Collector.
 
-Apache Karaf default configuration is sized for small to medium needs and to work on most machine.
+The Apache Karaf default configuration is sized for small to medium needs and to work on most machines.
 
 That's why this default configuration may appear "small".
 
@@ -40,7 +40,7 @@ On IBM JVM and AIX system:
 
 For any container, it's always difficult to predict the usage of the resources and the behaviour of the artifacts deployed.
 
-Generally speaking, a good approach for tuning is to enable `-verbose:gc` and use tools like VisualVM to identify the potentials
+Generally speaking, a good approach for tuning is to enable `-verbose:gc` and use tools like VisualVM to identify the potential
 memory leaks, and see the possible optimisation of the spaces and GC.
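+
+For instance, a sketch for `bin/setenv` (assuming a Java 8 style JVM where `-Xloggc` is available):
+
+----
+export EXTRA_JAVA_OPTS="-verbose:gc -Xloggc:/var/log/karaf-gc.log"
+----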
 
 You can find an introduction to GC here: [http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html].
@@ -75,7 +75,7 @@ You can find a good article about Java 7 tuning here: http://java-is-the-new-c.b
 
 ==== Threads
 
-In high loaded system, the number of threads can be very large.
+In a system under high load, the number of threads can be very large.
 
 ===== WebContainer
 
@@ -150,9 +150,9 @@ The `etc/jre.properties` defines the packages directly provided by the JVM.
 
 Most of the time, the default configuration in Apache Karaf is fine and works for most use cases.
 
-However, some times, you may want to not use the packages provided by the JVM, but the same packages provided by a bundle.
+However, sometimes you may not want to use the packages provided by the JVM, but the same packages provided by a bundle.
 
 For instance, the JAXB version provided by the JVM is "old", and you want to use new JAXB bundles.
 
 In that case, you have to comment out the packages in `etc/jre.properties` so that they are not provided by the JVM, and use the
-ones from the bundles.
\ No newline at end of file
+ones from the bundles.
diff --git a/manual/src/main/asciidoc/user-guide/urls.adoc b/manual/src/main/asciidoc/user-guide/urls.adoc
index f149f75..baa737f 100644
--- a/manual/src/main/asciidoc/user-guide/urls.adoc
+++ b/manual/src/main/asciidoc/user-guide/urls.adoc
@@ -46,8 +46,8 @@ The equivalent of the above bundle would be:
 
 In addition to being less verbose, the Maven url handlers can also resolve snapshots and can use a local copy of the jar if one is available in your Maven local repository.
 
-The `org.ops4j.pax.url.mvn` bundle resolves `mvn` URLs. It can be configured using the file `etc/org.ops4j.pax.url.cfg`.
-Full reference of `org.ops4j.pax.url.mvn` PID configuration can be found https://ops4j1.jira.com/wiki/display/paxurl/Aether+Configuration[on pax-web Wiki page].
+The `org.ops4j.pax.url.mvn` bundle resolves `mvn` URLs. It can be configured using the file `etc/org.ops4j.pax.url.mvn.cfg`.
+A full reference of the `org.ops4j.pax.url.mvn` PID configuration can be found on https://ops4j1.jira.com/wiki/display/paxurl/Aether+Configuration[the pax-web Wiki page].
 
 The most important property is:
 
@@ -55,8 +55,8 @@ The most important property is:
 
 Two other significant properties are:
 
-* `org.ops4j.pax.url.mvn.defaulRepositories` : Comma separated list of locations that are checked before querying remote repositories. These can be treated as read-only repositories, as nothing is written there during artifact resolution.
-* `org.ops4j.pax.url.mvn.localRepository` : by default (implicitly) it's standard `~/.m2/repository` location. This
+* `org.ops4j.pax.url.mvn.defaultRepositories` : Comma separated list of locations that are checked before querying remote repositories. These can be treated as read-only repositories, as nothing is written there during artifact resolution.
+* `org.ops4j.pax.url.mvn.localRepository` : by default (implicitly) it's the standard `~/.m2/repository` location. This
   local repository is used to store artifacts downloaded from one of remote repositories, so at next resolution attempt
   no remote request is issued.
 
@@ -72,11 +72,11 @@ Repositories on the local machine are supported through `file:/` URLs.
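+
+For instance, a sketch (the paths and repository id are placeholders) for `etc/org.ops4j.pax.url.mvn.cfg`:
+
+----
+org.ops4j.pax.url.mvn.localRepository=/opt/maven/local-repo
+org.ops4j.pax.url.mvn.defaultRepositories=file:${karaf.home}/${karaf.default.repository}@id=system.repository
+----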
 
 Full configuration of `org.ops4j.pax.url.mvn` bundle can be done using `org.ops4j.pax.url.mvn` PID (see `etc/org.ops4j.pax.url.mvn.cfg` file). This however may be cumbersome in some scenarios.
 
-In order to make user's life easier and provide more _domain_ oriented approach, Karaf provides several shell commands that makes Maven configuration easier.
+In order to make the user's life easier and to provide a more _domain_ oriented approach, Karaf provides several shell commands that simplify Maven configuration.
 
 ===== maven:summary
 
-This command shows quick summary about current `org.ops4j.pax.url.mvn` PID configuration. For example:
+This command shows a quick summary of the current `org.ops4j.pax.url.mvn` PID configuration. For example:
 
 [source,options="nowrap"]
 ----
@@ -113,7 +113,7 @@ HTTP proxies              │ proxy.everfree.forest:3128
 
 ===== maven:repository-list
 
-This command displays all configured Maven repositories - in much more readable way than plain `config:proplist --pid org.ops4j.pax.url.mvn` command does.
+This command displays all configured Maven repositories - in a much more readable way than the plain `config:proplist --pid org.ops4j.pax.url.mvn` command does.
 
 [source,options="nowrap"]
 ----
@@ -136,14 +136,14 @@ kar.repository          │ file:/data/servers/apache-karaf-4.2.0-SNAPSHOT/data/
 child.system.repository │ file:/data/servers/apache-karaf-4.2.0-SNAPSHOT/system/   │ yes (daily) │ yes (daily)
 ----
 
-* `-v` option shows additional information about policies related to given repository
-* `-x` shows credentials for given repository (if defined)
+* `-v` option shows additional information about policies related to a given repository
+* `-x` shows credentials for a given repository (if defined)
 
 ===== maven:password
 
-`org.ops4j.pax.url.mvn` bundle uses Aether library to handle Maven resolution. It uses `settings.xml` file if
+The `org.ops4j.pax.url.mvn` bundle uses the Aether library to handle Maven resolution. It uses the `settings.xml` file if
 credentials have to be used when accessing remote Maven repositories. This isn't done by `org.ops4j.pax.url.mvn`,
-but by Aether itself (or rather maven-settings library). When dealing with `settings.xml` file, passwords that
+but by Aether itself (or rather the maven-settings library). When dealing with the `settings.xml` file, passwords that
 are stored there may need to be decrypted.
 Outside of Karaf, we can use `mvn -emp` and `mvn -ep` to encrypt passwords and manually configure the `~/.m2/settings-security.xml`
 file.
@@ -154,7 +154,7 @@ In order to use encrypted repository (or http proxy) passwords inside `settings.
 password_ stored inside `settings-security.xml` file. This file isn't usually present inside `~/.m2` directory and if
 there's a need to use it, one has to be created manually.
 
-Here's the way to encrypt Maven _master password_ (which is used to encrypt ordinary passwords for repository or http proxies):
+Here's the way to encrypt the Maven _master password_ (which is used to encrypt ordinary passwords for repositories or http proxies):
 
 [source,options="nowrap"]
 ----
@@ -163,11 +163,11 @@ Master password to encrypt: *****
 Encrypted master password: {y+p9TiYuwVEHMHV14ej0Ni34zBnXXQrIOqjww/3Ro6U=}
 ----
 
-The above usage simply prints encrypted _master password_. We can however make this password persistent. This will
-result in new `settings-security.xml` file to be created and change in `org.ops4j.pax.url.mvn.security` property.
+The above usage simply prints the encrypted _master password_. We can however make this password persistent. This will
+result in the creation of a new `settings-security.xml` file and a change in the `org.ops4j.pax.url.mvn.security` property.
 
 NOTE: Karaf maven commands will never overwrite your current `~/.m2/settings.xml` or `~/.m2/settings-security.xml` files.
-If there's a need to change these files, maven commands will make a copy of existing file and set relevant `org.ops4j.pax.url.mvn` PID options
+If there's a need to change these files, maven commands will make a copy of the existing file and set the relevant `org.ops4j.pax.url.mvn` PID options
 to point to new locations.
 
 [source,options="nowrap"]
@@ -209,9 +209,9 @@ default repositories:: These are read-only local repositories that are simply qu
 remote repositories:: These are well-known Maven remote repositories - usually accessible over http(s) protocol. Popular
  repositories are Sonatype Nexus or JFrog Artifactory.
 
-Both kinds of repositories may be created using `maven:repository-add` command.
+Both kinds of repositories may be created using the `maven:repository-add` command.
 
-Here's how default repository may be created:
+Here's how a default repository may be created:
 
 [source,options="nowrap"]
 ----
@@ -225,7 +225,7 @@ ID                      │ URL
 my.default.repository   │ file:/data/servers/apache-karaf-4.2.0-SNAPSHOT/special-repository/ │ yes (daily) │ yes (daily)
 ----
 
-For remote repository, we can specify more options (like credentials or update policies):
+For a remote repository, we can specify more options (like credentials or update policies):
 
 [source,options="nowrap"]
 ----
@@ -242,7 +242,7 @@ my.remote.repository            │ http://localhost/cloud-repository/
 ...
 ----
 
-In the above example, new `settings.xml` file was created. The reason is that although new repository itself was added
+In the above example, a new `settings.xml` file was created. The reason is that although a new repository itself was added
 to `org.ops4j.pax.url.mvn.repositories` property, the credentials had to be stored in `settings.xml` file:
 
 [source,options="nowrap"]
@@ -289,7 +289,7 @@ know about HTTP proxies to use. HTTP proxies *can't be configured* inside `etc/o
 be done in `settings.xml` and its location has to be set in `org.ops4j.pax.url.mvn.settings` PID property.
 
 `maven:http-proxy` command can be used to add/change/remove HTTP proxy definition. It automatically does a copy
-of existing `settings.xml` file and changes `org.ops4j.pax.url.mvn.settings` PID property.
+of the existing `settings.xml` file and changes `org.ops4j.pax.url.mvn.settings` PID property.
 
 For example:
 
@@ -316,7 +316,7 @@ ID       │ Host                  │ Port │ Non-proxy hosts           │ Us
 my.proxy │ proxy.everfree.forest │ 3128 │ 127.*|192.168.*|localhost │ discord  │ admin
 ----
 
-Here's summary of options for `maven:http-proxy` command:
+Here's a summary of options for the `maven:http-proxy` command:
 
 * `-id` identifier of HTTP proxy
 * `-add` / `--change` / `--remove` is an operation to perform on proxy
diff --git a/manual/src/main/asciidoc/user-guide/webcontainer.adoc b/manual/src/main/asciidoc/user-guide/webcontainer.adoc
index e743756..055db8e 100644
--- a/manual/src/main/asciidoc/user-guide/webcontainer.adoc
+++ b/manual/src/main/asciidoc/user-guide/webcontainer.adoc
@@ -14,7 +14,7 @@
 
 ==== WebContainer (JSP/Servlet)
 
-Apache Karaf can act a complete WebContainer, fully supporting JSP/Servlet specification.
+Apache Karaf can act as a complete WebContainer, fully supporting the JSP/Servlet specifications.
 
 Apache Karaf WebContainer supports both:
 
@@ -49,9 +49,9 @@ By default, Karaf creates an internal Jetty connector that you can configure via
 org.osgi.service.http.port=8181
 ```
 
-Note: if you want to use port numbers < 1024, remember you have to run with root privileges.
+Note: if you want to use port numbers < 1024, remember you have to run with root privileges. However, this is not a good idea from a security point of view.
 
-It's possible to enable HTTPs "internal" connector. The first step is to create a keystore containing a server certificate.
+It's possible to enable the HTTPS "internal" connector. The first step is to create a keystore containing a server certificate.
 For instance, the following command creates a keystore with a self-signed certificate:
 
 ```