Posted to commits@logging.apache.org by vy...@apache.org on 2020/07/11 17:52:52 UTC

[logging-log4j2] branch master updated: Update cloud.md: fixes and links

This is an automated email from the ASF dual-hosted git repository.

vy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/logging-log4j2.git


The following commit(s) were added to refs/heads/master by this push:
     new a900da1  Update cloud.md: fixes and links
a900da1 is described below

commit a900da1651b174304bc3aa6a4f7c20ad63533efd
Author: janmaterne <ja...@users.noreply.github.com>
AuthorDate: Fri Jul 3 14:31:55 2020 +0200

    Update cloud.md: fixes and links
---
 src/site/markdown/manual/cloud.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/src/site/markdown/manual/cloud.md b/src/site/markdown/manual/cloud.md
index 3eef8f8..e4ad5c1 100644
--- a/src/site/markdown/manual/cloud.md
+++ b/src/site/markdown/manual/cloud.md
@@ -48,7 +48,8 @@ per logging call vs 1.5 microseconds when writing to the file.
 1. When performing audit logging using a framework such as log4j-audit, guaranteed delivery of the audit events
 is required. Many of the options for writing the output, including writing to the standard output stream, do
 not guarantee delivery. In these cases the event must be delivered to a "forwarder" that acknowledges receipt
-only when it has placed the event in durable storage, such as what Apache Flume or Apache Kafka will do.
+only when it has placed the event in durable storage, such as what [Apache Flume](https://flume.apache.org/) 
+or [Apache Kafka](https://kafka.apache.org/) will do.
 
 ## Logging Approaches
 
@@ -58,7 +59,7 @@ be used for reporting and alerting. There are many ways to forward and collect e
 log analysis tools. 
 
 Note that any approach that bypasses Docker's logging drivers requires Log4j's 
-[Docker Loookup](lookups.html#DockerLookup) to allow Docker attributes to be injected into the log events.  
+[Docker Lookup](lookups.html#DockerLookup) to allow Docker attributes to be injected into the log events.  
 
 ### Logging to the Standard Output Stream
 
@@ -90,9 +91,9 @@ delivered so this method should not be used if a highly available solution is re
 ### Logging to a File
 
 While this is not the recommended 12-Factor approach, it performs very well. However, it requires that the 
-application declare a volume where the log files will reside and then configure the log forwarder to tail 
+application declares a volume where the log files will reside and then configures the log forwarder to tail 
 those files. Care must also be taken to automatically manage the disk space used for the logs, which Log4j 
-can perform via the Delete action on the [RollingFileAppender](appenders.html#RollingFileAppender).
+can perform via the "Delete" action on the [RollingFileAppender](appenders.html#RollingFileAppender).
 
 ![File](../images/DockerLogFile.png "Logging to a File")
 
@@ -400,7 +401,7 @@ Log4j's Kubernetes support may also be found at [Log4j-Kubernetes](../log4j-kube
 
 ## Appender Performance
 The numbers in the table below represent how much time in seconds was required for the application to 
-call logger.debug 100,000 times. These numbers only include the time taken to deliver to the specifically 
+call `logger.debug(...)` 100,000 times. These numbers only include the time taken to deliver to the specifically 
 noted endpoint and may not include the actual time required before they are available for viewing. All 
 measurements were performed on a MacBook Pro with a 2.9GHz Intel Core i9 processor with 6 physical and 12 
 logical cores, 32GB of 2400 MHz DDR4 RAM, and 1TB of Apple SSD storage. The VM used by Docker was managed 
@@ -472,4 +473,4 @@ be kept to a minimum since it is much slower than sending buffered events.
 1. Logging to files within the container is discouraged. Doing so requires that a volume be declared in 
 the Docker configuration and that the file be tailed by a log forwarder. However, it performs 
 better than logging to the standard output stream. If logging via TCP is not an option and
-proper multiline handling is required then consider this option.
\ No newline at end of file
+proper multiline handling is required then consider this option.
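
The audit-logging note near the top of the diff calls for a "forwarder" such as Apache Flume or Apache Kafka that acknowledges receipt only after the event is in durable storage. A minimal Log4j 2 configuration sketch for sending events to Kafka might look like the following; the topic name and broker address are placeholders, and the Kafka client library is assumed to be on the classpath.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Placeholder topic and broker address; replace with your cluster's values. -->
    <Kafka name="Kafka" topic="audit-events">
      <JsonLayout compact="true" eventEol="true"/>
      <Property name="bootstrap.servers">kafka-broker:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Kafka"/>
    </Root>
  </Loggers>
</Configuration>
```

The Kafka appender's syncSend option (true by default) makes each send block until the broker acknowledges the record, which is the behaviour the guaranteed-delivery requirement depends on.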
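
The note about bypassing Docker's logging drivers points at the [Docker Lookup](lookups.html#DockerLookup) for injecting Docker attributes into log events. As a rough sketch, the lookup can be referenced from a PatternLayout; the attribute keys shown (containerName, containerId) are the ones documented on the lookups page, and the Log4j Docker support module is assumed to be on the classpath.

```xml
<Console name="Stdout" target="SYSTEM_OUT">
  <!-- ${docker:containerName} and ${docker:containerId} are resolved by the Docker Lookup. -->
  <PatternLayout
      pattern="%d{ISO8601} [${docker:containerName}/${docker:containerId}] %-5p %c{1} - %m%n"/>
</Console>
```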
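
The "Logging to a File" section mentions managing disk space with the "Delete" action on the RollingFileAppender. A sketch of that combination is below; the paths, rollover size, and retention age are placeholders to adapt to the volume declared for the container.

```xml
<RollingFile name="File" fileName="/var/log/app/app.log"
             filePattern="/var/log/app/app-%d{yyyy-MM-dd}-%i.log.gz">
  <PatternLayout pattern="%d{ISO8601} %-5p %c{1} - %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
    <SizeBasedTriggeringPolicy size="50 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10">
    <!-- Delete rolled-over archives older than seven days so the volume cannot fill up. -->
    <Delete basePath="/var/log/app" maxDepth="1">
      <IfFileName glob="app-*.log.gz"/>
      <IfLastModified age="7d"/>
    </Delete>
  </DefaultRolloverStrategy>
</RollingFile>
```

The log forwarder would then tail /var/log/app/app.log from the shared volume, as the section describes.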
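
The "Appender Performance" section describes timing 100,000 logger.debug calls per appender. The benchmark harness itself is not shown in the commit; the sketch below is only the general shape of such a measurement, with the class name chosen here for illustration.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustrative harness name; not part of the Log4j codebase.
public final class AppenderTimingSketch {
    private static final Logger logger = LogManager.getLogger(AppenderTimingSketch.class);
    private static final int CALLS = 100_000;

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < CALLS; i++) {
            logger.debug("Test message {}", i); // the appender under test is selected in log4j2.xml
        }
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        // Total seconds corresponds to the table; seconds / CALLS gives the per-call cost.
        System.out.printf("%d debug calls took %.3f seconds%n", CALLS, seconds);
    }
}
```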