Posted to commits@logging.apache.org by gg...@apache.org on 2015/08/31 19:58:13 UTC
[1/2] logging-log4j2 git commit: [LOG4J2-1107] New Appender for
Apache Kafka.
Repository: logging-log4j2
Updated Branches:
refs/heads/master 546f4d043 -> 53320c029
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/src/site/xdoc/manual/appenders.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/manual/appenders.xml b/src/site/xdoc/manual/appenders.xml
index 6c3f5f2..a99ad88 100644
--- a/src/site/xdoc/manual/appenders.xml
+++ b/src/site/xdoc/manual/appenders.xml
@@ -1,3209 +1,3293 @@
-<?xml version="1.0"?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
- <properties>
- <title>Log4j 2 Appenders</title>
- <author email="rgoers@apache.org">Ralph Goers</author>
- <author email="ggregory@apache.org">Gary Gregory</author>
- <author email="nickwilliams@apache.org">Nick Williams</author>
- </properties>
-
- <body>
- <section name="Appenders">
- <p>
- Appenders are responsible for delivering LogEvents to their destination. Every Appender must
- implement the <a href="../log4j-core/apidocs/org/apache/logging/log4j/core/Appender.html">Appender</a>
- interface. Most Appenders will extend
- <a href="../log4j-core/apidocs/org/apache/logging/log4j/core/appender/AbstractAppender.html">AbstractAppender</a>
- which adds <a href="../log4j-core/apidocs/org/apache/logging/log4j/core/LifeCycle.html">Lifecycle</a>
- and <a href="../log4j-core/apidocs/org/apache/logging/log4j/core/filter/Filterable.html">Filterable</a>
- support. Lifecycle allows components to finish initialization after configuration has completed and to
- perform cleanup during shutdown. Filterable allows the component to have Filters attached to it which are
- evaluated during event processing.
- </p>
- <p>
- Appenders usually are only responsible for writing the event data to the target destination. In most cases
- they delegate responsibility for formatting the event to a <a href="layouts.html">layout</a>. Some
- appenders wrap other appenders so that they can modify the LogEvent, handle a failure in an Appender,
- route the event to a subordinate Appender based on advanced Filter criteria or provide similar functionality
- that does not directly format the event for viewing.
- </p>
- <p>
- Appenders always have a name so that they can be referenced from Loggers.
- </p>
- <a name="AsyncAppender"/>
- <subsection name="AsyncAppender">
- <p>The AsyncAppender accepts references to other Appenders and causes LogEvents to be written to them
- on a separate Thread. Note that exceptions while writing to those Appenders will be hidden from
- the application. The AsyncAppender should be configured after the appenders it references to allow it
- to shut down properly.</p>
- <table>
- <caption align="top">AsyncAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>AppenderRef</td>
- <td>String</td>
- <td>The names of the Appenders to invoke asynchronously. Multiple AppenderRef
- elements can be configured.</td>
- </tr>
- <tr>
- <td>blocking</td>
- <td>boolean</td>
- <td>If true, the appender will wait until there are free slots in the queue. If false, the event
- will be written to the error appender if the queue is full. The default is true.</td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>integer</td>
- <td>Specifies the maximum number of events that can be queued. The default is 128.</td>
- </tr>
- <tr>
- <td>errorRef</td>
- <td>String</td>
- <td>The name of the Appender to invoke if none of the appenders can be called, either due to errors
- in the appenders or because the queue is full. If not specified, errors will be ignored.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>includeLocation</td>
- <td>boolean</td>
- <td>Extracting location is an expensive operation (it can make
- logging 5 to 20 times slower). To improve performance, location is
- not included by default when adding a log event to the queue.
- You can change this by setting includeLocation="true".</td>
- </tr>
- </table>
- <p>
- A typical AsyncAppender configuration might look like:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <File name="MyFile" fileName="logs/app.log">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </File>
- <Async name="Async">
- <AppenderRef ref="MyFile"/>
- </Async>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="Async"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
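- <p>
- If blocking is undesirable, the appender can instead be configured to route events to an error
- Appender when the queue is full. The following sketch is illustrative only; the "ErrorLog"
- appender name and the buffer size are assumptions, not recommendations:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <File name="MyFile" fileName="logs/app.log">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </File>
- <File name="ErrorLog" fileName="logs/overflow.log">
- <PatternLayout pattern="%m%n"/>
- </File>
- <Async name="Async" blocking="false" bufferSize="256" errorRef="ErrorLog">
- <AppenderRef ref="MyFile"/>
- </Async>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="Async"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>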
- </subsection>
- <a name="ConsoleAppender"/>
- <subsection name="ConsoleAppender">
- <p>
- As one might expect, the ConsoleAppender writes its output to either System.err or System.out, with
- System.err being the default target. A Layout must be provided to format the LogEvent.
- </p>
- <table>
- <caption align="top">ConsoleAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>The Layout to use to format the LogEvent. If no layout is supplied the default pattern layout
- of "%m%n" will be used.</td>
- </tr>
- <tr>
- <td>follow</td>
- <td>boolean</td>
- <td>Identifies whether the appender honors reassignments of System.out or System.err
- via System.setOut or System.setErr made after configuration. Note that the follow
- attribute cannot be used with Jansi on Windows.</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>target</td>
- <td>String</td>
- <td>Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_ERR".</td>
- </tr>
- </table>
- <p>
- A typical Console configuration might look like:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Console name="STDOUT" target="SYSTEM_OUT">
- <PatternLayout pattern="%m%n"/>
- </Console>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="STDOUT"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
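- <p>
- As an illustration of the follow attribute, the configuration below honors later reassignments of
- System.out via System.setOut (recall that follow cannot be combined with Jansi on Windows):
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Console name="STDOUT" target="SYSTEM_OUT" follow="true">
- <PatternLayout pattern="%m%n"/>
- </Console>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="STDOUT"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>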
- </subsection>
- <a name="FailoverAppender"/>
- <subsection name="FailoverAppender">
- <p>The FailoverAppender wraps a set of appenders. If the primary Appender fails the secondary appenders will be
- tried in order until one succeeds or there are no more secondaries to try.</p>
- <table>
- <caption align="top">FailoverAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>primary</td>
- <td>String</td>
- <td>The name of the primary Appender to use.</td>
- </tr>
- <tr>
- <td>failovers</td>
- <td>String[]</td>
- <td>The names of the secondary Appenders to use.</td>
- </tr>
-
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>retryIntervalSeconds</td>
- <td>integer</td>
- <td>The number of seconds that should pass before retrying the primary Appender. The default is 60.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead.</td>
- </tr>
- </table>
- <p>
- A Failover configuration might look like:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <RollingFile name="RollingFile" fileName="logs/app.log" filePattern="logs/app-%d{MM-dd-yyyy}.log.gz"
- ignoreExceptions="false">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- <TimeBasedTriggeringPolicy />
- </RollingFile>
- <Console name="STDOUT" target="SYSTEM_OUT" ignoreExceptions="false">
- <PatternLayout pattern="%m%n"/>
- </Console>
- <Failover name="Failover" primary="RollingFile">
- <Failovers>
- <AppenderRef ref="STDOUT"/>
- </Failovers>
- </Failover>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="Failover"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- </subsection>
- <a name="FileAppender"/>
- <subsection name="FileAppender">
- <p>The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The
- FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While
- FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is
- accessible. For example, two web applications in a servlet container can have their own configuration and
- safely write to the same file if Log4j is in a ClassLoader that is common to both of them.</p>
- <table>
- <caption align="top">FileAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>append</td>
- <td>boolean</td>
- <td>When true (the default), records will be appended to the end of the file. When set to false,
- the file will be cleared before new records are written.</td>
- </tr>
- <tr>
- <td>bufferedIO</td>
- <td>boolean</td>
- <td>When true (the default), records will be written to a buffer and the data will be written to
- disk when the buffer is full or, if immediateFlush is set, when the record is written.
- File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O
- significantly improves performance, even if immediateFlush is enabled.</td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>int</td>
- <td>When bufferedIO is true, this is the buffer size; the default is 8192 bytes.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>fileName</td>
- <td>String</td>
- <td>The name of the file to write to. If the file or any of its parent directories do not exist,
- they will be created.</td>
- </tr>
- <tr>
- <td>immediateFlush</td>
- <td>boolean</td>
- <td><p>When set to true (the default), each write will be followed by a flush.
- This will guarantee the data is written
- to disk but could impact performance.</p>
- <p>Flushing after every write is only useful when using this
- appender with synchronous loggers. Asynchronous loggers and
- appenders will automatically flush at the end of a batch of events,
- even if immediateFlush is set to false. This also guarantees
- the data is written to disk but is more efficient.</p>
- </td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>The Layout to use to format the LogEvent</td>
- </tr>
- <tr>
- <td>locking</td>
- <td>boolean</td>
- <td>When set to true, I/O operations will occur only while the file lock is held, allowing FileAppenders
- in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This
- will significantly impact performance, so it should be used carefully. Furthermore, on many systems
- the file lock is "advisory", meaning that other applications can perform operations on the file
- without acquiring a lock. The default value is false.</td>
- </tr>
-
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- </table>
- <p>
- Here is a sample File configuration:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <File name="MyFile" fileName="logs/app.log">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </File>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="MyFile"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
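- <p>
- A variant that sets the buffering parameters explicitly might look like the following; the
- buffer size shown is illustrative, not a recommendation:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <File name="MyFile" fileName="logs/app.log" bufferedIO="true" bufferSize="16384" immediateFlush="false">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </File>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="MyFile"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>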
- </subsection>
- <a name="FlumeAppender"/>
- <subsection name="FlumeAppender">
- <p><i>This is an optional component supplied in a separate jar.</i></p>
- <p><a href="http://flume.apache.org/index.html">Apache Flume</a> is a distributed, reliable,
- and available system for efficiently collecting, aggregating, and moving large amounts of log data
- from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends
- them to a Flume agent as serialized Avro events for consumption.</p>
- <p>
- The Flume Appender supports three modes of operation.
- </p>
- <ol>
- <li>It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured
- with an Avro Source.</li>
- <li>It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.</li>
- <li>It can persist events to a local BerkeleyDB data store and then asynchronously send the events to
- Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.</li>
- </ol>
- <p>
- Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then
- control will be immediately returned to the application. All interaction with remote agents will occur
- asynchronously. Setting the "type" attribute to "Embedded" will force the use of the embedded agent. In
- addition, configuring agent properties in the appender configuration will also cause the embedded agent
- to be used.
- </p>
- <table>
- <caption align="top">FlumeAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>agents</td>
- <td>Agent[]</td>
- <td>An array of Agents to which the logging events should be sent. If more than one agent is specified,
- the first Agent will be the primary and subsequent Agents will be used, in the order specified, as
- secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port.
- The specification of agents and properties is mutually exclusive. If both are configured an
- error will result.</td>
- </tr>
- <tr>
- <td>agentRetries</td>
- <td>integer</td>
- <td>The number of times the agent should be retried before failing to a secondary. This parameter is
- ignored when type="persistent" is specified (agents are tried once before failing to the next).</td>
- </tr>
- <tr>
- <td>batchSize</td>
- <td>integer</td>
- <td>Specifies the number of events that should be sent as a batch. The default is 1. <i>This
- parameter only applies to the Flume Appender.</i></td>
- </tr>
- <tr>
- <td>compress</td>
- <td>boolean</td>
- <td>When set to true, the message body will be compressed using gzip.</td>
- </tr>
- <tr>
- <td>connectTimeoutMillis</td>
- <td>integer</td>
- <td>The number of milliseconds Flume will wait before timing out the connection.</td>
- </tr>
- <tr>
- <td>dataDir</td>
- <td>String</td>
- <td>Directory where the Flume write ahead log should be written. Valid only when embedded is set
- to true and Agent elements are used instead of Property elements.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>eventPrefix</td>
- <td>String</td>
- <td>The character string to prepend to each event attribute in order to distinguish it from MDC attributes.
- The default is an empty string.</td>
- </tr>
- <tr>
- <td>flumeEventFactory</td>
- <td>FlumeEventFactory</td>
- <td>Factory that generates the Flume events from Log4j events. The default factory is the
- FlumeAvroAppender itself.</td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>The Layout to use to format the LogEvent. If no layout is specified RFC5424Layout will be used.</td>
- </tr>
- <tr>
- <td>lockTimeoutRetries</td>
- <td>integer</td>
- <td>The number of times to retry if a LockConflictException occurs while writing to Berkeley DB. The
- default is 5.</td>
- </tr>
- <tr>
- <td>maxDelayMillis</td>
- <td>integer</td>
- <td>The maximum number of milliseconds to wait for batchSize events before publishing the batch.</td>
- </tr>
- <tr>
- <td>mdcExcludes</td>
- <td>String</td>
- <td>A comma separated list of MDC keys that should be excluded from the FlumeEvent. This is mutually
- exclusive with the mdcIncludes attribute.</td>
- </tr>
- <tr>
- <td>mdcIncludes</td>
- <td>String</td>
- <td>A comma separated list of MDC keys that should be included in the FlumeEvent. Any keys in the MDC
- not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes
- attribute.</td>
- </tr>
- <tr>
- <td>mdcRequired</td>
- <td>String</td>
- <td>A comma separated list of MDC keys that must be present in the MDC. If a key is not present, a
- LoggingException will be thrown.</td>
- </tr>
- <tr>
- <td>mdcPrefix</td>
- <td>String</td>
- <td>A string that should be prepended to each MDC key in order to distinguish it from event attributes.
- The default string is "mdc:".</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>properties</td>
- <td>Property[]</td>
- <td><p>One or more Property elements that are used to configure the Flume Agent. The properties must be
- configured without the agent name (the appender name is used for this) and no sources can be
- configured. Interceptors can be specified for the source using "sources.log4j-source.interceptors".
- All other Flume configuration properties are allowed. Specifying both Agent and Property
- elements will result in an error.</p>
- <p>When used in Persistent mode, the valid properties are:</p>
- <ol>
- <li>"keyProvider" to specify the name of the plugin to provide the secret key for encryption.</li>
- </ol>
- </td>
- </tr>
- <tr>
- <td>requestTimeoutMillis</td>
- <td>integer</td>
- <td>The number of milliseconds Flume will wait before timing out the request.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>type</td>
- <td>enumeration</td>
- <td>One of "Avro", "Embedded", or "Persistent" to indicate which variation of the Appender is desired.</td>
- </tr>
- </table>
- <p>
- A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
- compresses the body, and formats the body using the RFC5424Layout:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Flume name="eventLogger" compress="true">
- <Agent host="192.168.10.101" port="8800"/>
- <Agent host="192.168.10.102" port="8800"/>
- <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
- </Flume>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="eventLogger"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <p>
- A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
- compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Flume name="eventLogger" compress="true" type="persistent" dataDir="./logData">
- <Agent host="192.168.10.101" port="8800"/>
- <Agent host="192.168.10.102" port="8800"/>
- <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
- <Property name="keyProvider">MySecretProvider</Property>
- </Flume>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="eventLogger"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <p>
- A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
- compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume
- Agent.
- </p>
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Flume name="eventLogger" compress="true" type="Embedded">
- <Agent host="192.168.10.101" port="8800"/>
- <Agent host="192.168.10.102" port="8800"/>
- <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
- </Flume>
- <Console name="STDOUT">
- <PatternLayout pattern="%d [%p] %c %m%n"/>
- </Console>
- </Appenders>
- <Loggers>
- <Logger name="EventLogger" level="info">
- <AppenderRef ref="eventLogger"/>
- </Logger>
- <Root level="warn">
- <AppenderRef ref="STDOUT"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <p>
- A sample FlumeAppender configuration that is configured with a primary and a secondary agent using
- Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the
- events to an embedded Flume Agent.
- </p>
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error" name="MyApp" packages="">
- <Appenders>
- <Flume name="eventLogger" compress="true" type="Embedded">
- <Property name="channels">file</Property>
- <Property name="channels.file.type">file</Property>
- <Property name="channels.file.checkpointDir">target/file-channel/checkpoint</Property>
- <Property name="channels.file.dataDirs">target/file-channel/data</Property>
- <Property name="sinks">agent1 agent2</Property>
- <Property name="sinks.agent1.channel">file</Property>
- <Property name="sinks.agent1.type">avro</Property>
- <Property name="sinks.agent1.hostname">192.168.10.101</Property>
- <Property name="sinks.agent1.port">8800</Property>
- <Property name="sinks.agent1.batch-size">100</Property>
- <Property name="sinks.agent2.channel">file</Property>
- <Property name="sinks.agent2.type">avro</Property>
- <Property name="sinks.agent2.hostname">192.168.10.102</Property>
- <Property name="sinks.agent2.port">8800</Property>
- <Property name="sinks.agent2.batch-size">100</Property>
- <Property name="sinkgroups">group1</Property>
- <Property name="sinkgroups.group1.sinks">agent1 agent2</Property>
- <Property name="sinkgroups.group1.processor.type">failover</Property>
- <Property name="sinkgroups.group1.processor.priority.agent1">10</Property>
- <Property name="sinkgroups.group1.processor.priority.agent2">5</Property>
- <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
- </Flume>
- <Console name="STDOUT">
- <PatternLayout pattern="%d [%p] %c %m%n"/>
- </Console>
- </Appenders>
- <Loggers>
- <Logger name="EventLogger" level="info">
- <AppenderRef ref="eventLogger"/>
- </Logger>
- <Root level="warn">
- <AppenderRef ref="STDOUT"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- </subsection>
- <a name="JDBCAppender"/>
- <subsection name="JDBCAppender">
- <p>The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured
- to obtain JDBC connections using a JNDI <code>DataSource</code> or a custom factory method. Whichever
- approach you take, it <strong><em>must</em></strong> be backed by a connection pool. Otherwise, logging
- performance will suffer greatly.</p>
- <table>
- <caption align="top">JDBCAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td><em>Required.</em> The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
- specified by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>int</td>
- <td>If set to an integer greater than 0, the appender will buffer log events and flush whenever the
- buffer reaches this size.</td>
- </tr>
- <tr>
- <td>connectionSource</td>
- <td>ConnectionSource</td>
- <td><em>Required.</em> The connection source from which database connections should be retrieved.</td>
- </tr>
- <tr>
- <td>tableName</td>
- <td>String</td>
- <td><em>Required.</em> The name of the database table to insert log events into.</td>
- </tr>
- <tr>
- <td>columnConfigs</td>
- <td>ColumnConfig[]</td>
- <td><em>Required.</em> Information about the columns that log event data should be inserted into and how
- to insert that data. This is represented with multiple <code><Column></code> elements.</td>
- </tr>
- </table>
- <p>When configuring the JDBCAppender, you must specify a <code>ConnectionSource</code> implementation from
- which the Appender gets JDBC connections. You must use exactly one of the <code><DataSource></code>
- or <code><ConnectionFactory></code> nested elements.</p>
- <table>
- <caption align="top">DataSource Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>jndiName</td>
- <td>String</td>
- <td><em>Required.</em> The full, prefixed JNDI name that the <code>javax.sql.DataSource</code> is bound
- to, such as <code>java:/comp/env/jdbc/LoggingDatabase</code>. The <code>DataSource</code> must be backed
- by a connection pool; otherwise, logging will be very slow.</td>
- </tr>
- </table>
- <table>
- <caption align="top">ConnectionFactory Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>class</td>
- <td>Class</td>
- <td><em>Required.</em> The fully qualified name of a class containing a static factory method for
- obtaining JDBC connections.</td>
- </tr>
- <tr>
- <td>method</td>
- <td>Method</td>
- <td><em>Required.</em> The name of a static factory method for obtaining JDBC connections. This method
- must have no parameters and its return type must be either <code>java.sql.Connection</code> or
- <code>DataSource</code>. If the method returns <code>Connection</code>s, it must obtain them from a
- connection pool (and they will be returned to the pool when Log4j is done with them); otherwise, logging
- will be very slow. If the method returns a <code>DataSource</code>, the <code>DataSource</code> will
- only be retrieved once, and it must be backed by a connection pool for the same reasons.</td>
- </tr>
- </table>
- <p>When configuring the JDBCAppender, use the nested <code><Column></code> elements to specify which
- columns in the table should be written to and how to write to them. The JDBCAppender uses this information
- to formulate a <code>PreparedStatement</code> to insert records without SQL injection vulnerability.</p>
- <table>
- <caption align="top">Column Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td><em>Required.</em> The name of the database column.</td>
- </tr>
- <tr>
- <td>pattern</td>
- <td>String</td>
- <td>Use this attribute to insert a value or values from the log event in this column using a
- <code>PatternLayout</code> pattern. Simply specify any legal pattern in this attribute. Either this
- attribute, <code>literal</code>, or <code>isEventTimestamp="true"</code> must be specified, but not more
- than one of these.</td>
- </tr>
- <tr>
- <td>literal</td>
- <td>String</td>
- <td>Use this attribute to insert a literal value in this column. The value will be included directly in
- the insert SQL, without any quoting (which means that if you want this to be a string, your value should
- contain single quotes around it like this: <code>literal="'Literal String'"</code>). This is especially
- useful for databases that don't support identity columns. For example, if you are using Oracle you could
- specify <code>literal="NAME_OF_YOUR_SEQUENCE.NEXTVAL"</code> to insert a unique ID in an ID column.
- Either this attribute, <code>pattern</code>, or <code>isEventTimestamp="true"</code> must be specified,
- but not more than one of these.</td>
- </tr>
- <tr>
- <td>isEventTimestamp</td>
- <td>boolean</td>
- <td>Use this attribute to insert the event timestamp in this column, which should be a SQL datetime. The
- value will be inserted as a <code>java.sql.Types.TIMESTAMP</code>. Either this attribute (equal to
- <code>true</code>), <code>pattern</code>, or <code>literal</code> must be specified, but not
- more than one of these.</td>
- </tr>
- <tr>
- <td>isUnicode</td>
- <td>boolean</td>
- <td>This attribute is ignored unless <code>pattern</code> is specified. If <code>true</code> or omitted
- (default), the value will be inserted as unicode (<code>setNString</code> or <code>setNClob</code>).
- Otherwise, the value will be inserted non-unicode (<code>setString</code> or <code>setClob</code>).</td>
- </tr>
- <tr>
- <td>isClob</td>
- <td>boolean</td>
- <td>This attribute is ignored unless <code>pattern</code> is specified. Use this attribute to indicate
- that the column stores Character Large Objects (CLOBs). If <code>true</code>, the value will be inserted
- as a CLOB (<code>setClob</code> or <code>setNClob</code>). If <code>false</code> or omitted (default),
- the value will be inserted as a VARCHAR or NVARCHAR (<code>setString</code> or <code>setNString</code>).
- </td>
- </tr>
- </table>
- <p>
- Here are a couple of sample configurations for the JDBCAppender, as well as a sample factory implementation
- that uses Commons Pool and Commons DBCP to pool database connections:
- </p>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <JDBC name="databaseAppender" tableName="dbo.application_log">
- <DataSource jndiName="java:/comp/env/jdbc/LoggingDataSource" />
- <Column name="eventDate" isEventTimestamp="true" />
- <Column name="level" pattern="%level" />
- <Column name="logger" pattern="%logger" />
- <Column name="message" pattern="%message" />
- <Column name="exception" pattern="%ex{full}" />
- </JDBC>
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
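For illustration, given the configuration above the appender would formulate a parameterized insert roughly like the following. This is a sketch of the statement's shape only; the actual SQL text is generated internally and may differ:

```sql
-- Hypothetical shape of the generated statement: one ? placeholder per
-- pattern/isEventTimestamp column; literal values (none in this example)
-- would be embedded directly in the SQL instead.
INSERT INTO dbo.application_log (eventDate, level, logger, message, exception)
    VALUES (?, ?, ?, ?, ?)
```

Because every event value is bound as a parameter rather than concatenated into the SQL, the statement is safe from SQL injection.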
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <JDBC name="databaseAppender" tableName="LOGGING.APPLICATION_LOG">
- <ConnectionFactory class="net.example.db.ConnectionFactory" method="getDatabaseConnection" />
- <Column name="EVENT_ID" literal="LOGGING.APPLICATION_LOG_SEQUENCE.NEXTVAL" />
- <Column name="EVENT_DATE" isEventTimestamp="true" />
- <Column name="LEVEL" pattern="%level" />
- <Column name="LOGGER" pattern="%logger" />
- <Column name="MESSAGE" pattern="%message" />
- <Column name="THROWABLE" pattern="%ex{full}" />
- </JDBC>
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <pre class="prettyprint linenums lang-java"><![CDATA[package net.example.db;
-
-import java.sql.Connection;
-import java.sql.SQLException;
-import java.util.Properties;
-
-import javax.sql.DataSource;
-
-import org.apache.commons.dbcp.DriverManagerConnectionFactory;
-import org.apache.commons.dbcp.PoolableConnection;
-import org.apache.commons.dbcp.PoolableConnectionFactory;
-import org.apache.commons.dbcp.PoolingDataSource;
-import org.apache.commons.pool.impl.GenericObjectPool;
-
-public class ConnectionFactory {
- private static interface Singleton {
- final ConnectionFactory INSTANCE = new ConnectionFactory();
- }
-
- private final DataSource dataSource;
-
- private ConnectionFactory() {
- Properties properties = new Properties();
- properties.setProperty("user", "logging");
- properties.setProperty("password", "abc123"); // or get properties from some configuration file
-
- GenericObjectPool<PoolableConnection> pool = new GenericObjectPool<PoolableConnection>();
- DriverManagerConnectionFactory connectionFactory = new DriverManagerConnectionFactory(
- "jdbc:mysql://example.org:3306/exampleDb", properties
- );
- new PoolableConnectionFactory(
- connectionFactory, pool, null, "SELECT 1", 3, false, false, Connection.TRANSACTION_READ_COMMITTED
- );
-
- this.dataSource = new PoolingDataSource(pool);
- }
-
- public static Connection getDatabaseConnection() throws SQLException {
- return Singleton.INSTANCE.dataSource.getConnection();
- }
-}]]></pre>
- </subsection>
- <a name="JMSAppender"/>
- <!-- cool URLs don't change, so here are some old anchors -->
- <a name="JMSQueueAppender"/>
- <a name="JMSTopicAppender"/>
- <subsection name="JMSAppender">
- <p>The JMSAppender sends the formatted log event to a JMS Destination.</p>
- <p>
- Note that in Log4j 2.0, this appender was split into a JMSQueueAppender and a JMSTopicAppender. Starting
- in Log4j 2.1, these appenders were combined into the JMSAppender which makes no distinction between queues
- and topics. However, configurations written for 2.0 which use the <code><JMSQueue/></code> or
- <code><JMSTopic/></code> elements will continue to work with the new <code><JMS/></code>
- configuration element.
- </p>
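For example, a 2.0-era configuration fragment like the following sketch continues to work; per the parameter table below, <code>queueBindingName</code> is an alias for <code>destinationBindingName</code>:

```xml
<!-- Hypothetical 2.0-style fragment: the <JMSQueue> element is still
     accepted, and queueBindingName maps to destinationBindingName. -->
<Appenders>
  <JMSQueue name="jmsQueue" queueBindingName="MyQueue"
            factoryBindingName="MyQueueConnectionFactory"/>
</Appenders>
```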
- <table>
- <caption align="top">JMSAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>factoryBindingName</td>
- <td>String</td>
- <td>The name to locate in the Context that provides the
- <a class="javadoc" href="http://download.oracle.com/javaee/5/api/javax/jms/ConnectionFactory.html">ConnectionFactory</a>.
- This can be any subinterface of <code>ConnectionFactory</code> as well. This attribute is required.
- </td>
- </tr>
- <tr>
- <td>factoryName</td>
- <td>String</td>
- <td>The fully qualified class name that should be used to define the Initial Context Factory as defined in
- <a class="javadoc" href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#INITIAL_CONTEXT_FACTORY">INITIAL_CONTEXT_FACTORY</a>.
- If no value is provided the
- default InitialContextFactory will be used. If a factoryName is specified without a providerURL,
- a warning message will be logged, as this is likely to cause problems.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be used by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>
- The Layout to use to format the LogEvent. If you do not specify a layout,
- this appender will use a <a href="layouts.html#SerializedLayout">SerializedLayout</a>.
- </td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender. Required.</td>
- </tr>
- <tr>
- <td>password</td>
- <td>String</td>
- <td>The password to use to create the JMS connection.</td>
- </tr>
- <tr>
- <td>providerURL</td>
- <td>String</td>
- <td>The URL of the provider to use as defined by
- <a class="javadoc" href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#PROVIDER_URL">PROVIDER_URL</a>.
- If this value is null the default system provider will be used.</td>
- </tr>
- <tr>
- <td>destinationBindingName</td>
- <td>String</td>
- <td>
- The name to use to locate the
- <a class="javadoc" href="http://download.oracle.com/javaee/5/api/javax/jms/Destination.html">Destination</a>.
- This can be a <code>Queue</code> or <code>Topic</code>, and as such, the attribute names
- <code>queueBindingName</code> and <code>topicBindingName</code> are aliases to maintain compatibility
- with the Log4j 2.0 JMS appenders.
- </td>
- </tr>
- <tr>
- <td>securityPrincipalName</td>
- <td>String</td>
- <td>The name of the identity of the Principal as specified by
- <a class="javadoc" href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_PRINCIPAL">SECURITY_PRINCIPAL</a>.
- If a securityPrincipalName is specified without securityCredentials, a warning message will be
- logged, as this is likely to cause problems.</td>
- </tr>
- <tr>
- <td>securityCredentials</td>
- <td>String</td>
- <td>The security credentials for the principal as specified by
- <a class="javadoc" href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_CREDENTIALS">SECURITY_CREDENTIALS</a>.
- </td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>urlPkgPrefixes</td>
- <td>String</td>
- <td>A colon-separated list of package prefixes for the class name of the factory class that will create
- a URL context factory as defined by
- <a class="javadoc" href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#URL_PKG_PREFIXES">URL_PKG_PREFIXES</a>.
- </td>
- </tr>
- <tr>
- <td>userName</td>
- <td>String</td>
- <td>The user id used to create the JMS connection.</td>
- </tr>
- </table>
- <p>
- Here is a sample JMSAppender configuration:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp">
- <Appenders>
- <JMS name="jmsQueue" destinationBindingName="MyQueue"
- factoryBindingName="MyQueueConnectionFactory"/>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="jmsQueue"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- </subsection>
- <a name="JPAAppender"/>
- <subsection name="JPAAppender">
- <p>The JPAAppender writes log events to a relational database table using the Java Persistence API 2.1.
- It requires that the API and a provider implementation be on the classpath. It also requires an annotated
- entity configured to persist to the desired table. The entity should either extend
- <code>org.apache.logging.log4j.core.appender.db.jpa.BasicLogEventEntity</code> (if you mostly want to
- use the default mappings) and provide at least an <code>@Id</code> property, or
- <code>org.apache.logging.log4j.core.appender.db.jpa.AbstractLogEventWrapperEntity</code> (if you want
- to significantly customize the mappings). See the Javadoc for these two classes for more information. You
- can also consult the source code of these two classes as an example of how to implement the entity.</p>
- <table>
- <caption align="top">JPAAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td><em>Required.</em> The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
- used by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>int</td>
- <td>If set to an integer greater than 0, the appender buffers log events and flushes whenever the
- buffer reaches this size.</td>
- </tr>
- <tr>
- <td>entityClassName</td>
- <td>String</td>
- <td><em>Required.</em> The fully qualified name of the concrete LogEventWrapperEntity implementation that
- has JPA annotations mapping it to a database table.</td>
- </tr>
- <tr>
- <td>persistenceUnitName</td>
- <td>String</td>
- <td><em>Required.</em> The name of the JPA persistence unit that should be used for persisting log
- events.</td>
- </tr>
- </table>
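The <code>bufferSize</code> semantics above amount to a simple size-triggered flush. The following is a minimal sketch of that behavior, not Log4j's implementation; the class and field names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of size-triggered buffering (not Log4j's implementation):
// events accumulate until the buffer reaches bufferSize, then a flush runs.
public class EventBuffer {
    private final int bufferSize;
    private final List<String> buffer = new ArrayList<>();
    int flushes = 0; // exposed only so the illustration is observable

    public EventBuffer(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    public void append(String event) {
        if (bufferSize <= 0) { // 0 or negative: write through, no buffering
            flushes++;
            return;
        }
        buffer.add(event);
        if (buffer.size() >= bufferSize) {
            flushes++;
            buffer.clear();
        }
    }
}
```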
- <p>
- Here is a sample configuration for the JPAAppender. The first XML sample is the Log4j configuration file,
- the second is the <code>persistence.xml</code> file. EclipseLink is assumed here, but any JPA 2.1 or higher
- provider will do. You should <em>always</em> create a <em>separate</em> persistence unit for logging, for
- two reasons. First, <code><shared-cache-mode></code> <em>must</em> be set to "NONE", which is usually
- not desired in normal JPA usage. Second, for performance reasons the logging entity should be isolated in its
- own persistence unit, away from all other entities, and you should use a non-JTA data source. Note that your
- persistence unit <em>must</em> also contain <code><class></code> elements for all of the
- <code>org.apache.logging.log4j.core.appender.db.jpa.converter</code> converter classes.
- </p>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <JPA name="databaseAppender" persistenceUnitName="loggingPersistenceUnit"
- entityClassName="com.example.logging.JpaLogEntity" />
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
- http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
- version="2.1">
-
- <persistence-unit name="loggingPersistenceUnit" transaction-type="RESOURCE_LOCAL">
- <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapJsonAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackJsonAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MarkerAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MessageAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.StackTraceElementAttributeConverter</class>
- <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ThrowableAttributeConverter</class>
- <class>com.example.logging.JpaLogEntity</class>
- <non-jta-data-source>jdbc/LoggingDataSource</non-jta-data-source>
- <shared-cache-mode>NONE</shared-cache-mode>
- </persistence-unit>
-
-</persistence>]]></pre>
-
- <pre class="prettyprint linenums lang-java"><![CDATA[package com.example.logging;
-...
-@Entity
-@Table(name="application_log", schema="dbo")
-public class JpaLogEntity extends BasicLogEventEntity {
- private static final long serialVersionUID = 1L;
- private long id = 0L;
-
- public JpaLogEntity() {
- super(null);
- }
- public JpaLogEntity(LogEvent wrappedEvent) {
- super(wrappedEvent);
- }
-
- @Id
- @GeneratedValue(strategy = GenerationType.IDENTITY)
- @Column(name = "id")
- public long getId() {
- return this.id;
- }
-
- public void setId(long id) {
- this.id = id;
- }
-
- // If you want to override the mapping of any properties mapped in BasicLogEventEntity,
- // just override the getters and re-specify the annotations.
-}]]></pre>
-
- <pre class="prettyprint linenums lang-java"><![CDATA[package com.example.logging;
-...
-@Entity
-@Table(name="application_log", schema="dbo")
-public class JpaLogEntity extends AbstractLogEventWrapperEntity {
- private static final long serialVersionUID = 1L;
- private long id = 0L;
-
- public JpaLogEntity() {
- super(null);
- }
- public JpaLogEntity(LogEvent wrappedEvent) {
- super(wrappedEvent);
- }
-
- @Id
- @GeneratedValue(strategy = GenerationType.IDENTITY)
- @Column(name = "logEventId")
- public long getId() {
- return this.id;
- }
-
- public void setId(long id) {
- this.id = id;
- }
-
- @Override
- @Enumerated(EnumType.STRING)
- @Column(name = "level")
- public Level getLevel() {
- return this.getWrappedEvent().getLevel();
- }
-
- @Override
- @Column(name = "logger")
- public String getLoggerName() {
- return this.getWrappedEvent().getLoggerName();
- }
-
- @Override
- @Column(name = "message")
- @Convert(converter = MyMessageConverter.class)
- public Message getMessage() {
- return this.getWrappedEvent().getMessage();
- }
- ...
-}]]></pre>
- </subsection>
- <a name="MemoryMappedFileAppender" />
- <subsection name="MemoryMappedFileAppender">
- <p><i>New since 2.1. Be aware that this is a new addition, and although it has been
- tested on several platforms, it does not have as much track record as the other file appenders.</i></p>
- <p>
- The MemoryMappedFileAppender maps a part of the specified file into memory
- and writes log events to this memory, relying on the operating system's
- virtual memory manager to synchronize the changes to the storage device.
- The main benefit of using memory mapped files is I/O performance. Instead of making system
- calls to write to disk, this appender can simply change the program's local memory,
- which is orders of magnitude faster. Also, in most operating systems the memory
- region mapped actually is the kernel's <a href="http://en.wikipedia.org/wiki/Page_cache">page
- cache</a> (file cache), meaning that no copies need to be created in user space.
- (TODO: performance tests that compare performance of this appender to
- RandomAccessFileAppender and FileAppender.)
- </p>
- <p>
- There is some overhead with mapping a file region into memory,
- especially very large regions (half a gigabyte or more).
- The default region size is 32 MB, which should strike a reasonable balance
- between the frequency and the duration of remap operations.
- (TODO: performance test remapping various sizes.)
- </p>
- <p>
- Similar to the FileAppender and the RandomAccessFileAppender,
- MemoryMappedFileAppender uses a MemoryMappedFileManager to actually perform the
- file I/O. While MemoryMappedFileAppender from different Configurations
- cannot be shared, the MemoryMappedFileManagers can be if the Manager is
- accessible. For example, two web applications in a servlet container can have
- their own configuration and safely write to the same file if Log4j
- is in a ClassLoader that is common to both of them.
- </p>
- <table>
- <caption align="top">MemoryMappedFileAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>append</td>
- <td>boolean</td>
- <td>When true (the default), records will be appended to the end
- of the file. When set to false, the file will be cleared before
- new records are written.
- </td>
- </tr>
- <tr>
- <td>fileName</td>
- <td>String</td>
- <td>The name of the file to write to. If the file or any of its
- parent directories do not exist, they will be created.
- </td>
- </tr>
- <tr>
- <td>filters</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this
- Appender. More than one Filter may be used by using a CompositeFilter.
- </td>
- </tr>
- <tr>
- <td>immediateFlush</td>
- <td>boolean</td>
- <td>
- <p>When set to true, each write will be followed by a
- call to <a href="http://docs.oracle.com/javase/7/docs/api/java/nio/MappedByteBuffer.html#force()">MappedByteBuffer.force()</a>.
- This will guarantee the data is written to the storage device.
- </p>
- <p>The default for this parameter is <code>false</code>.
- With memory-mapped files, the data is still written to the storage device even
- if the Java process crashes, but data may be lost if the
- operating system crashes.</p>
- <p>Note that manually forcing a sync on every log event loses most
- of the performance benefits of using a memory mapped file.</p>
- <p>Flushing after every write is only useful when using this
- appender with synchronous loggers. Asynchronous loggers and
- appenders will automatically flush at the end of a batch of events,
- even if immediateFlush is set to false. This also guarantees
- the data is written to disk but is more efficient.
- </p>
- </td>
- </tr>
- <tr>
- <td>regionLength</td>
- <td>int</td>
- <td>The length of the mapped region, defaults to 32 MB
- (32 * 1024 * 1024 bytes). This parameter must be a value
- between 256 and 1,073,741,824 (1 GB or 2^30);
- values outside this range will be adjusted to the closest valid
- value.
- Log4j will round the specified value up to the nearest power of two.</td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>The Layout to use to format the LogEvent</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- </table>
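The power-of-two rounding described for <code>regionLength</code> can be sketched as follows. This is an illustration of the documented behavior only, not Log4j's actual code, and it assumes the value has already been clamped to the valid range:

```java
// Illustrative sketch (not Log4j's implementation) of rounding a requested
// regionLength up to the nearest power of two. Assumes the value is already
// within the documented valid range [256, 2^30].
public class RegionLength {
    static int roundUpToPowerOfTwo(int requested) {
        int highest = Integer.highestOneBit(requested);
        // Already a power of two: keep it; otherwise take the next one up.
        return (highest == requested) ? requested : highest << 1;
    }
}
```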
- <p>
- Here is a sample MemoryMappedFile configuration:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <MemoryMappedFile name="MyFile" fileName="logs/app.log">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </MemoryMappedFile>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="MyFile"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- </subsection>
- <a name="NoSQLAppender"/>
- <subsection name="NoSQLAppender">
- <p>The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface.
- Provider implementations currently exist for MongoDB and Apache CouchDB, and writing a custom provider is
- quite simple.</p>
- <table>
- <caption align="top">NoSQLAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td><em>Required.</em> The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
- used by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>int</td>
- <td>If set to an integer greater than 0, the appender buffers log events and flushes whenever the
- buffer reaches this size.</td>
- </tr>
- <tr>
- <td>NoSqlProvider</td>
- <td>NoSQLProvider<C extends NoSQLConnection<W, T extends NoSQLObject<W>>></td>
- <td><em>Required.</em> The NoSQL provider that provides connections to the chosen NoSQL database.</td>
- </tr>
- </table>
- <p>You specify which NoSQL provider to use by specifying the appropriate configuration element within the
- <code><NoSql></code> element. The types currently supported are <code><MongoDb></code> and
- <code><CouchDb></code>. To create your own custom provider, read the JavaDoc for the
- <code>NoSQLProvider</code>, <code>NoSQLConnection</code>, and <code>NoSQLObject</code> classes and the
- documentation about creating Log4j plugins. We recommend you review the source code for the MongoDB and
- CouchDB providers as a guide for creating your own provider.</p>
- <table>
- <caption align="top">MongoDB Provider Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>collectionName</td>
- <td>String</td>
- <td><em>Required.</em> The name of the MongoDB collection to insert the events into.</td>
- </tr>
- <tr>
- <td>writeConcernConstant</td>
- <td>Field</td>
- <td>By default, the MongoDB provider inserts records with the instructions
- <code>com.mongodb.WriteConcern.ACKNOWLEDGED</code>. Use this optional attribute to specify the name of
- a constant other than <code>ACKNOWLEDGED</code>.</td>
- </tr>
- <tr>
- <td>writeConcernConstantClass</td>
- <td>Class</td>
- <td>If you specify <code>writeConcernConstant</code>, you can use this attribute to specify a class other
- than <code>com.mongodb.WriteConcern</code> to find the constant on (to create your own custom
- instructions).</td>
- </tr>
- <tr>
- <td>factoryClassName</td>
- <td>Class</td>
- <td>To provide a connection to the MongoDB database, you can use this attribute and
- <code>factoryMethodName</code> to specify a class and static method to get the connection from. The
- method must return a <code>com.mongodb.DB</code> or a <code>com.mongodb.MongoClient</code>. If the
- <code>DB</code> is not authenticated, you must also specify a <code>username</code> and
- <code>password</code>. If you use the factory method for providing a connection, you must not specify
- the <code>databaseName</code>, <code>server</code>, or <code>port</code> attributes.</td>
- </tr>
- <tr>
- <td>factoryMethodName</td>
- <td>Method</td>
- <td>See the documentation for attribute <code>factoryClassName</code>.</td>
- </tr>
- <tr>
- <td>databaseName</td>
- <td>String</td>
- <td>If you do not specify a <code>factoryClassName</code> and <code>factoryMethodName</code> for providing
- a MongoDB connection, you must specify a MongoDB database name using this attribute. You must also
- specify a <code>username</code> and <code>password</code>. You can optionally also specify a
- <code>server</code> (defaults to localhost), and a <code>port</code> (defaults to the default MongoDB
- port).</td>
- </tr>
- <tr>
- <td>server</td>
- <td>String</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>port</td>
- <td>int</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>username</td>
- <td>String</td>
- <td>See the documentation for attributes <code>databaseName</code> and <code>factoryClassName</code>.</td>
- </tr>
- <tr>
- <td>password</td>
- <td>String</td>
- <td>See the documentation for attributes <code>databaseName</code> and <code>factoryClassName</code>.</td>
- </tr>
- </table>
- <table>
- <caption align="top">CouchDB Provider Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>factoryClassName</td>
- <td>Class</td>
- <td>To provide a connection to the CouchDB database, you can use this attribute and
- <code>factoryMethodName</code> to specify a class and static method to get the connection from. The
- method must return a <code>org.lightcouch.CouchDbClient</code> or a
- <code>org.lightcouch.CouchDbProperties</code>. If you use the factory method for providing a connection,
- you must not specify the <code>databaseName</code>, <code>protocol</code>, <code>server</code>,
- <code>port</code>, <code>username</code>, or <code>password</code> attributes.</td>
- </tr>
- <tr>
- <td>factoryMethodName</td>
- <td>Method</td>
- <td>See the documentation for attribute <code>factoryClassName</code>.</td>
- </tr>
- <tr>
- <td>databaseName</td>
- <td>String</td>
- <td>If you do not specify a <code>factoryClassName</code> and <code>factoryMethodName</code> for providing
- a CouchDB connection, you must specify a CouchDB database name using this attribute. You must also
- specify a <code>username</code> and <code>password</code>. You can optionally also specify a
- <code>protocol</code> (defaults to http), <code>server</code> (defaults to localhost), and a
- <code>port</code> (defaults to 80 for http and 443 for https).</td>
- </tr>
- <tr>
- <td>protocol</td>
- <td>String</td>
- <td>Must be either "http" or "https". See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>server</td>
- <td>String</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>port</td>
- <td>int</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>username</td>
- <td>String</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- <tr>
- <td>password</td>
- <td>String</td>
- <td>See the documentation for attribute <code>databaseName</code>.</td>
- </tr>
- </table>
- <p>
- Here are a few sample configurations for the NoSQLAppender:
- </p>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <NoSql name="databaseAppender">
- <MongoDb databaseName="applicationDb" collectionName="applicationLog" server="mongo.example.org"
- username="loggingUser" password="abc123" />
- </NoSql>
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <NoSql name="databaseAppender">
- <MongoDb collectionName="applicationLog" factoryClassName="org.example.db.ConnectionFactory"
- factoryMethodName="getNewMongoClient" />
- </NoSql>
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
-
- <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="error">
- <Appenders>
- <NoSql name="databaseAppender">
- <CouchDb databaseName="applicationDb" protocol="https" server="couch.example.org"
- username="loggingUser" password="abc123" />
- </NoSql>
- </Appenders>
- <Loggers>
- <Root level="warn">
- <AppenderRef ref="databaseAppender"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <p>
- The following example shows how a log event is represented when persisted to a NoSQL database, in JSON
- format:
- </p>
- <pre class="prettyprint lang-javascript"><![CDATA[{
- "level": "WARN",
- "loggerName": "com.example.application.MyClass",
- "message": "Something happened that you might want to know about.",
- "source": {
- "className": "com.example.application.MyClass",
- "methodName": "exampleMethod",
- "fileName": "MyClass.java",
- "lineNumber": 81
- },
- "marker": {
- "name": "SomeMarker",
- "parent" {
- "name": "SomeParentMarker"
- }
- },
- "threadName": "Thread-1",
- "millis": 1368844166761,
- "date": "2013-05-18T02:29:26.761Z",
- "thrown": {
- "type": "java.sql.SQLException",
- "message": "Could not insert record. Connection lost.",
- "stackTrace": [
- { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1049 },
- { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
- { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
- { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
- ],
- "cause": {
- "type": "java.io.IOException",
- "message": "Connection lost.",
- "stackTrace": [
- { "className": "java.nio.channels.SocketChannel", "methodName": "write", "fileName": null, "lineNumber": -1 },
- { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1032 },
- { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
- { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
- { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
- ]
- }
- },
- "contextMap": {
- "ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b",
- "username": "JohnDoe"
- },
- "contextStack": [
- "topItem",
- "anotherItem",
- "bottomItem"
- ]
-}]]></pre>
- </subsection>
- <a name="OutputStreamAppender"/>
- <subsection name="OutputStreamAppender">
- <p>
- The OutputStreamAppender provides the base for many of the other Appenders, such as the File and Socket
- appenders, that write the event to an OutputStream. It cannot be configured directly. The
- OutputStreamAppender provides support for immediateFlush and buffering, and uses an
- OutputStreamManager to handle the actual I/O, allowing the stream to be shared by Appenders in multiple
- configurations.
- </p>
- </subsection>
- <a name="RandomAccessFileAppender" />
- <subsection name="RandomAccessFileAppender">
- <p><i>As of beta-9, the name of this appender has been changed from FastFile to
- RandomAccessFile. Configurations using the <code>FastFile</code> element
- no longer work and should be modified to use the <code>RandomAccessFile</code> element.</i></p>
- <p>
- The RandomAccessFileAppender is similar to the standard
- <a href="#FileAppender">FileAppender</a>
- except it is always buffered (this cannot be switched off)
- and internally it uses a
- <tt>ByteBuffer + RandomAccessFile</tt>
- instead of a
- <tt>BufferedOutputStream</tt>.
- We saw a 20-200% performance improvement compared to
- FileAppender with "bufferedIO=true" in our
- <a href="async.html#RandomAccessFileAppenderPerformance">measurements</a>.
- Similar to the FileAppender, the
- RandomAccessFileAppender uses a RandomAccessFileManager to actually perform the
- file I/O. While RandomAccessFileAppenders from different Configurations
- cannot be shared, the RandomAccessFileManagers can be if the Manager is
- accessible. For example, two web applications in a servlet container can have
- their own configuration and safely write to the same file if Log4j
- is in a ClassLoader that is common to both of them.
- </p>
- <table>
- <caption align="top">RandomAccessFileAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>append</td>
- <td>boolean</td>
- <td>When true - the default, records will be appended to the end
- of the file. When set to false,
- the file will be cleared before
- new records are written.
- </td>
- </tr>
- <tr>
- <td>fileName</td>
- <td>String</td>
- <td>The name of the file to write to. If the file, or any of its
- parent directories, do not exist,
- they will be created.
- </td>
- </tr>
- <tr>
- <td>filters</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this
- Appender. More than one Filter
- may be used by using a CompositeFilter.
- </td>
- </tr>
- <tr>
- <td>immediateFlush</td>
- <td>boolean</td>
- <td>
- <p>
- When set to true - the default, each write will be followed by a flush.
- This will guarantee the data is written
- to disk but could impact performance.
- </p>
- <p>
- Flushing after every write is only useful when using this
- appender with synchronous loggers. Asynchronous loggers and
- appenders will automatically flush at the end of a batch of events,
- even if immediateFlush is set to false. This also guarantees
- the data is written to disk but is more efficient.
- </p>
- </td>
- </tr>
- <tr>
- <td>bufferSize</td>
- <td>int</td>
- <td>The buffer size, defaults to 262,144 bytes (256 * 1024).</td>
- </tr>
- <tr>
- <td>layout</td>
- <td>Layout</td>
- <td>The Layout to use to format the LogEvent</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- </table>
- <p>
- Here is a sample RandomAccessFile configuration:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <RandomAccessFile name="MyFile" fileName="logs/app.log">
- <PatternLayout>
- <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
- </PatternLayout>
- </RandomAccessFile>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="MyFile"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- </subsection>
- <a name="RewriteAppender"/>
- <subsection name="RewriteAppender">
- <p>
- The RewriteAppender allows the LogEvent to be manipulated before it is processed by another Appender. This
- can be used to mask sensitive information such as passwords or to inject information into each event.
- The RewriteAppender must be configured with a <a href="#RewritePolicy">RewritePolicy</a>. The
- RewriteAppender should be configured after any Appenders it references to allow it to shut down properly.
- </p>
- <table>
- <caption align="top">RewriteAppender Parameters</caption>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>AppenderRef</td>
- <td>String</td>
- <td>The name of the Appenders to call after the LogEvent has been manipulated. Multiple AppenderRef
- elements can be configured.</td>
- </tr>
- <tr>
- <td>filter</td>
- <td>Filter</td>
- <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
- may be used by using a CompositeFilter.</td>
- </tr>
- <tr>
- <td>name</td>
- <td>String</td>
- <td>The name of the Appender.</td>
- </tr>
- <tr>
- <td>rewritePolicy</td>
- <td>RewritePolicy</td>
- <td>The RewritePolicy that will manipulate the LogEvent.</td>
- </tr>
- <tr>
- <td>ignoreExceptions</td>
- <td>boolean</td>
- <td>The default is <code>true</code>, causing exceptions encountered while appending events to be
- internally logged and then ignored. When set to <code>false</code> exceptions will be propagated to the
- caller, instead. You must set this to <code>false</code> when wrapping this Appender in a
- <a href="#FailoverAppender">FailoverAppender</a>.</td>
- </tr>
- </table>
- <h4>RewritePolicy</h4>
- <p>
- RewritePolicy is an interface that allows implementations to inspect and possibly modify LogEvents
- before they are passed to the Appender. RewritePolicy declares a single method named rewrite that must
- be implemented. The method is passed the LogEvent and can return the same event or create a new one.
- </p>
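[Editor's note] The password-masking use case mentioned above can be sketched in isolation. The helper below shows the kind of transformation a custom rewrite(LogEvent) implementation might apply to a message before returning a rewritten event; the `password=` field convention and the `****` mask format are assumptions for illustration, not part of Log4j.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical masking helper: the kind of transformation a custom
 * RewritePolicy's rewrite(LogEvent) method could apply to a message
 * before building a new LogEvent. The "password=value" convention
 * is assumed for illustration only.
 */
final class SensitiveDataMasker {

    // Matches "password=" followed by any run of non-whitespace characters.
    private static final Pattern PASSWORD = Pattern.compile("password=\\S+");

    static String mask(final String message) {
        final Matcher m = PASSWORD.matcher(message);
        return m.replaceAll("password=****");
    }

    public static void main(final String[] args) {
        // prints: login attempt user=jdoe password=**** from 10.0.0.1
        System.out.println(mask("login attempt user=jdoe password=hunter2 from 10.0.0.1"));
    }
}
```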
- <h5>MapRewritePolicy</h5>
- <p>
- MapRewritePolicy will evaluate LogEvents that contain a MapMessage and will add or update
- elements of the Map.
- </p>
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>mode</td>
- <td>String</td>
- <td>"Add" or "Update"</td>
- </tr>
- <tr>
- <td>keyValuePair</td>
- <td>KeyValuePair[]</td>
- <td>An array of keys and their values.</td>
- </tr>
- </table>
- <p>
- The following configuration shows a RewriteAppender configured to add a product key and its value
- to the MapMessage:
- </p>
-
- <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<Configuration status="warn" name="MyApp" packages="">
- <Appenders>
- <Console name="STDOUT" target="SYSTEM_OUT">
- <PatternLayout pattern="%m%n"/>
- </Console>
- <Rewrite name="rewrite">
- <AppenderRef ref="STDOUT"/>
- <MapRewritePolicy mode="Add">
- <KeyValuePair key="product" value="TestProduct"/>
- </MapRewritePolicy>
- </Rewrite>
- </Appenders>
- <Loggers>
- <Root level="error">
- <AppenderRef ref="Rewrite"/>
- </Root>
- </Loggers>
-</Configuration>]]></pre>
- <h5>PropertiesRewritePolicy</h5>
- <p>
- PropertiesRewritePolicy will add properties configured on the policy to the ThreadContext Map
- being logged. The properties will not be added to the actual ThreadContext Map. The property
- values may contain variables that will be evaluated when the configuration is processed as
- well as when the event is logged.
- </p>
- <table>
- <tr>
- <th>Parameter Name</th>
- <th>Type</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>properties</td>
- <td>Prop
<TRUNCATED>
[2/2] logging-log4j2 git commit: [LOG4J2-1107] New Appender for
Apache Kafka.
Posted by gg...@apache.org.
[LOG4J2-1107] New Appender for Apache Kafka.
Project: http://git-wip-us.apache.org/repos/asf/logging-log4j2/repo
Commit: http://git-wip-us.apache.org/repos/asf/logging-log4j2/commit/53320c02
Tree: http://git-wip-us.apache.org/repos/asf/logging-log4j2/tree/53320c02
Diff: http://git-wip-us.apache.org/repos/asf/logging-log4j2/diff/53320c02
Branch: refs/heads/master
Commit: 53320c02916fdb72e4dfc093de855faabba6ea18
Parents: 546f4d0
Author: ggregory <gg...@apache.org>
Authored: Mon Aug 31 10:58:08 2015 -0700
Committer: ggregory <gg...@apache.org>
Committed: Mon Aug 31 10:58:08 2015 -0700
----------------------------------------------------------------------
log4j-core/pom.xml | 6 +
.../mom/kafka/DefaultKafkaProducerFactory.java | 32 +
.../core/appender/mom/kafka/KafkaAppender.java | 96 +
.../core/appender/mom/kafka/KafkaManager.java | 88 +
.../mom/kafka/KafkaProducerFactory.java | 28 +
log4j-core/src/site/xdoc/index.xml | 1 +
.../appender/mom/kafka/KafkaAppenderTest.java | 101 +
.../src/test/resources/KafkaAppenderTest.xml | 34 +
pom.xml | 7 +-
src/changes/changes.xml | 3 +
src/site/xdoc/manual/appenders.xml | 6502 +++++++++---------
11 files changed, 3688 insertions(+), 3210 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/pom.xml
----------------------------------------------------------------------
diff --git a/log4j-core/pom.xml b/log4j-core/pom.xml
index 79ab281..92a1023 100644
--- a/log4j-core/pom.xml
+++ b/log4j-core/pom.xml
@@ -108,6 +108,12 @@
<scope>provided</scope>
<optional>true</optional>
</dependency>
+ <!-- Used for Kafka appender -->
+ <dependency>
+ <groupId>org.apache.kafka</groupId>
+ <artifactId>kafka-clients</artifactId>
+ <optional>true</optional>
+ </dependency>
<!-- Used for compressing to formats other than zip and gz -->
<dependency>
<groupId>org.apache.commons</groupId>
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/DefaultKafkaProducerFactory.java
----------------------------------------------------------------------
diff --git a/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/DefaultKafkaProducerFactory.java b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/DefaultKafkaProducerFactory.java
new file mode 100644
index 0000000..88c74f4
--- /dev/null
+++ b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/DefaultKafkaProducerFactory.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.logging.log4j.core.appender.mom.kafka;
+
+import java.util.Properties;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+
+public class DefaultKafkaProducerFactory implements KafkaProducerFactory {
+
+ @Override
+ public Producer<byte[], byte[]> newKafkaProducer(final Properties config) {
+ return new KafkaProducer<>(config);
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppender.java
----------------------------------------------------------------------
diff --git a/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppender.java b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppender.java
new file mode 100644
index 0000000..2e6c338
--- /dev/null
+++ b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppender.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.logging.log4j.core.appender.mom.kafka;
+
+import java.io.Serializable;
+import java.nio.charset.StandardCharsets;
+
+import org.apache.logging.log4j.core.Filter;
+import org.apache.logging.log4j.core.Layout;
+import org.apache.logging.log4j.core.LogEvent;
+import org.apache.logging.log4j.core.appender.AbstractAppender;
+import org.apache.logging.log4j.core.appender.AppenderLoggingException;
+import org.apache.logging.log4j.core.config.Property;
+import org.apache.logging.log4j.core.config.plugins.Plugin;
+import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
+import org.apache.logging.log4j.core.config.plugins.PluginElement;
+import org.apache.logging.log4j.core.config.plugins.PluginFactory;
+import org.apache.logging.log4j.core.config.plugins.validation.constraints.Required;
+import org.apache.logging.log4j.core.util.Booleans;
+
+/**
+ * Appender to send log events to an Apache Kafka topic.
+ */
+@Plugin(name = "Kafka", category = "Core", elementType = "appender", printObject = true)
+public final class KafkaAppender extends AbstractAppender {
+
+ /**
+ *
+ */
+ private static final long serialVersionUID = 1L;
+ @PluginFactory
+ public static KafkaAppender createAppender(
+ @PluginElement("Layout") final Layout<? extends Serializable> layout,
+ @PluginElement("Filter") final Filter filter,
+ @Required(message = "No name provided for KafkaAppender") @PluginAttribute("name") final String name,
+ @PluginAttribute(value = "ignoreExceptions", defaultBoolean = true) final String ignore,
+ @Required(message = "No topic provided for KafkaAppender") @PluginAttribute("topic") final String topic,
+ @PluginElement("Properties") final Property[] properties) {
+ final boolean ignoreExceptions = Booleans.parseBoolean(ignore, true);
+ final KafkaManager kafkaManager = new KafkaManager(name, topic, properties);
+ return new KafkaAppender(name, layout, filter, ignoreExceptions, kafkaManager);
+ }
+
+ private final KafkaManager manager;
+
+ private KafkaAppender(final String name, final Layout<? extends Serializable> layout, final Filter filter, final boolean ignoreExceptions, final KafkaManager manager) {
+ super(name, filter, layout, ignoreExceptions);
+ this.manager = manager;
+ }
+
+ @Override
+ public void append(final LogEvent event) {
+ if (event.getLoggerName().startsWith("org.apache.kafka")) {
+ LOGGER.warn("Recursive logging from [{}] for appender [{}].", event.getLoggerName(), getName());
+ } else {
+ try {
+ if (getLayout() != null) {
+ manager.send(getLayout().toByteArray(event));
+ } else {
+ manager.send(event.getMessage().getFormattedMessage().getBytes(StandardCharsets.UTF_8));
+ }
+ } catch (final Exception e) {
+ LOGGER.error("Unable to write to Kafka [{}] for appender [{}].", manager.getName(), getName(), e);
+ throw new AppenderLoggingException("Unable to write to Kafka in appender: " + e.getMessage(), e);
+ }
+ }
+ }
+
+ @Override
+ public void start() {
+ super.start();
+ manager.startup();
+ }
+
+ @Override
+ public void stop() {
+ super.stop();
+ manager.release();
+ }
+
+}
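[Editor's note] A minimal configuration sketch for the new appender follows; the broker address `localhost:9092` is an assumption, and the nested Property elements are forwarded verbatim to the Kafka producer, as the KafkaManager constructor below shows. The test resource later in this commit exercises the same shape.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration name="KafkaExample" status="warn">
  <Appenders>
    <!-- Property elements are passed straight through to the Kafka producer config -->
    <Kafka name="Kafka" topic="app-log">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Kafka"/>
    </Root>
  </Loggers>
</Configuration>
```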
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaManager.java
----------------------------------------------------------------------
diff --git a/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaManager.java b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaManager.java
new file mode 100644
index 0000000..64797c8
--- /dev/null
+++ b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaManager.java
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.logging.log4j.core.appender.mom.kafka;
+
+import java.util.Properties;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.logging.log4j.core.appender.AbstractManager;
+import org.apache.logging.log4j.core.config.Property;
+
+public class KafkaManager extends AbstractManager {
+
+ public static final String DEFAULT_TIMEOUT_MILLIS = "30000";
+
+ /**
+ * package-private access for testing.
+ */
+ static KafkaProducerFactory producerFactory = new DefaultKafkaProducerFactory();
+
+ private final Properties config = new Properties();
+ private Producer<byte[], byte[]> producer = null;
+ private final int timeoutMillis;
+
+ private final String topic;
+
+ public KafkaManager(final String name, final String topic, final Property[] properties) {
+ super(name);
+ this.topic = topic;
+ config.setProperty("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
+ config.setProperty("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
+ config.setProperty("batch.size", "0");
+ for (final Property property : properties) {
+ config.setProperty(property.getName(), property.getValue());
+ }
+ this.timeoutMillis = Integer.parseInt(config.getProperty("timeout.ms", DEFAULT_TIMEOUT_MILLIS));
+ }
+
+ @Override
+ public void releaseSub() {
+ if (producer != null) {
+ // This thread is a workaround for this Kafka issue: https://issues.apache.org/jira/browse/KAFKA-1660
+ final Thread closeThread = new Thread(new Runnable() {
+ @Override
+ public void run() {
+ producer.close();
+ }
+ });
+ closeThread.setName("KafkaManager-CloseThread");
+ closeThread.setDaemon(true); // avoid blocking JVM shutdown
+ closeThread.start();
+ try {
+ closeThread.join(timeoutMillis);
+ } catch (final InterruptedException ignore) {
+ // ignore
+ }
+ }
+ }
+
+ public void send(final byte[] msg) throws ExecutionException, InterruptedException, TimeoutException {
+ if (producer != null) {
+ producer.send(new ProducerRecord<byte[], byte[]>(topic, msg)).get(timeoutMillis, TimeUnit.MILLISECONDS);
+ }
+ }
+
+ public void startup() {
+ producer = producerFactory.newKafkaProducer(config);
+ }
+
+}
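[Editor's note] The releaseSub() workaround above guards against KAFKA-1660, where Producer.close() can block indefinitely. Reduced to a self-contained sketch, the pattern is: run the blocking call on a daemon thread and bound the wait with a timed join. The hanging Runnable in main below merely simulates such a stuck close() call.

```java
import java.util.concurrent.CountDownLatch;

/**
 * Sketch of the bounded-shutdown pattern used by KafkaManager.releaseSub():
 * a potentially blocking close() runs on a daemon thread, and the caller
 * waits at most timeoutMillis via Thread.join(long).
 */
final class BoundedClose {

    static boolean closeWithTimeout(final Runnable close, final long timeoutMillis) {
        final Thread closeThread = new Thread(close, "CloseThread");
        closeThread.setDaemon(true); // must not keep the JVM alive on shutdown
        closeThread.start();
        try {
            closeThread.join(timeoutMillis); // give up waiting after the timeout
        } catch (final InterruptedException ignore) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        return !closeThread.isAlive(); // true only if close() actually finished
    }

    public static void main(final String[] args) {
        // A close() that returns immediately: finishes within the timeout.
        System.out.println(closeWithTimeout(new Runnable() {
            @Override
            public void run() {
            }
        }, 1000)); // prints: true

        // A close() that hangs forever (simulating KAFKA-1660): give up after 50 ms.
        System.out.println(closeWithTimeout(new Runnable() {
            @Override
            public void run() {
                try {
                    new CountDownLatch(1).await(); // blocks indefinitely
                } catch (final InterruptedException ignore) {
                }
            }
        }, 50)); // prints: false
    }
}
```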
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaProducerFactory.java
----------------------------------------------------------------------
diff --git a/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaProducerFactory.java b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaProducerFactory.java
new file mode 100644
index 0000000..d9c56be
--- /dev/null
+++ b/log4j-core/src/main/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaProducerFactory.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.logging.log4j.core.appender.mom.kafka;
+
+import java.util.Properties;
+
+import org.apache.kafka.clients.producer.Producer;
+
+public interface KafkaProducerFactory {
+
+ Producer<byte[], byte[]> newKafkaProducer(Properties config);
+
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/site/xdoc/index.xml
----------------------------------------------------------------------
diff --git a/log4j-core/src/site/xdoc/index.xml b/log4j-core/src/site/xdoc/index.xml
index e01d9fc..2884cdd 100644
--- a/log4j-core/src/site/xdoc/index.xml
+++ b/log4j-core/src/site/xdoc/index.xml
@@ -48,6 +48,7 @@
<li>SMTPAppender requires Javax Mail.</li>
<li>JMSQueueAppender and JMSTopicAppender require a JMS implementation like
<a href="http://activemq.apache.org/">Apache ActiveMQ</a>.</li>
+ <li>The Kafka appender requires the <a href="http://search.maven.org/#artifactdetails|org.apache.kafka|kafka-clients|0.8.2.1|jar">Kafka client library</a>.</li>
<li>Windows color support requires <a href="http://jansi.fusesource.org/">Jansi</a>.</li>
<li>The JDBC Appender requires a JDBC driver for the database you choose to write events to.</li>
<li>The JPA Appender requires the Java Persistence API classes, a JPA provider implementation,
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/test/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppenderTest.java
----------------------------------------------------------------------
diff --git a/log4j-core/src/test/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppenderTest.java b/log4j-core/src/test/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppenderTest.java
new file mode 100644
index 0000000..d9544e3
--- /dev/null
+++ b/log4j-core/src/test/java/org/apache/logging/log4j/core/appender/mom/kafka/KafkaAppenderTest.java
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache license, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the license for the specific language governing permissions and
+ * limitations under the license.
+ */
+
+package org.apache.logging.log4j.core.appender.mom.kafka;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.kafka.clients.producer.MockProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.logging.log4j.Level;
+import org.apache.logging.log4j.core.Appender;
+import org.apache.logging.log4j.core.impl.Log4jLogEvent;
+import org.apache.logging.log4j.junit.LoggerContextRule;
+import org.apache.logging.log4j.message.SimpleMessage;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+
+public class KafkaAppenderTest {
+
+ private static final MockProducer kafka = new MockProducer();
+
+ private static final String LOG_MESSAGE = "Hello, world!";
+ private static final String TOPIC_NAME = "kafka-topic";
+
+ private static Log4jLogEvent createLogEvent() {
+ return Log4jLogEvent.newBuilder()
+ .setLoggerName(KafkaAppenderTest.class.getName())
+ .setLoggerFqcn(KafkaAppenderTest.class.getName())
+ .setLevel(Level.INFO)
+ .setMessage(new SimpleMessage(LOG_MESSAGE))
+ .build();
+ }
+
+ @BeforeClass
+ public static void setUpClass() throws Exception {
+ KafkaManager.producerFactory = new KafkaProducerFactory() {
+ @Override
+ public Producer<byte[], byte[]> newKafkaProducer(final Properties config) {
+ return kafka;
+ }
+ };
+ }
+
+ @Rule
+ public LoggerContextRule ctx = new LoggerContextRule("KafkaAppenderTest.xml");
+
+ @Before
+ public void setUp() throws Exception {
+ kafka.clear();
+ }
+
+ @Test
+ public void testAppend() throws Exception {
+ final Appender appender = ctx.getRequiredAppender("KafkaAppender");
+ appender.append(createLogEvent());
+ final List<ProducerRecord<byte[], byte[]>> history = kafka.history();
+ assertEquals(1, history.size());
+ final ProducerRecord<byte[], byte[]> item = history.get(0);
+ assertNotNull(item);
+ assertEquals(TOPIC_NAME, item.topic());
+ assertNull(item.key());
+ assertEquals(LOG_MESSAGE, new String(item.value(), StandardCharsets.UTF_8));
+ }
+
+ @Test
+ public void testAppendWithLayout() throws Exception {
+ final Appender appender = ctx.getRequiredAppender("KafkaAppenderWithLayout");
+ appender.append(createLogEvent());
+ final List<ProducerRecord<byte[], byte[]>> history = kafka.history();
+ assertEquals(1, history.size());
+ final ProducerRecord<byte[], byte[]> item = history.get(0);
+ assertNotNull(item);
+ assertEquals(TOPIC_NAME, item.topic());
+ assertNull(item.key());
+ assertEquals("[" + LOG_MESSAGE + "]", new String(item.value(), StandardCharsets.UTF_8));
+ }
+
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/log4j-core/src/test/resources/KafkaAppenderTest.xml
----------------------------------------------------------------------
diff --git a/log4j-core/src/test/resources/KafkaAppenderTest.xml b/log4j-core/src/test/resources/KafkaAppenderTest.xml
new file mode 100644
index 0000000..758c426
--- /dev/null
+++ b/log4j-core/src/test/resources/KafkaAppenderTest.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to the Apache Software Foundation (ASF) under one or more
+ ~ contributor license agreements. See the NOTICE file distributed with
+ ~ this work for additional information regarding copyright ownership.
+ ~ The ASF licenses this file to You under the Apache license, Version 2.0
+ ~ (the "License"); you may not use this file except in compliance with
+ ~ the License. You may obtain a copy of the License at
+ ~
+ ~ http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS,
+ ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ~ See the license for the specific language governing permissions and
+ ~ limitations under the license.
+ -->
+<Configuration name="KafkaAppenderTest" status="OFF">
+ <Appenders>
+ <Kafka name="KafkaAppender" topic="kafka-topic">
+ <Property name="bootstrap.servers">localhost:9092</Property>
+ </Kafka>
+ <Kafka name="KafkaAppenderWithLayout" topic="kafka-topic">
+ <PatternLayout pattern="[%m]"/>
+ <Property name="bootstrap.servers">localhost:9092</Property>
+ </Kafka>
+ </Appenders>
+ <Loggers>
+ <Root level="info">
+ <AppenderRef ref="KafkaAppender"/>
+ <AppenderRef ref="KafkaAppenderWithLayout"/>
+ </Root>
+ </Loggers>
+</Configuration>
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 03f3362..9a74e24 100644
--- a/pom.xml
+++ b/pom.xml
@@ -559,7 +559,12 @@
</exclusion>
</exclusions>
</dependency>
- <dependency>
+ <dependency>
+ <groupId>org.apache.kafka</groupId>
+ <artifactId>kafka-clients</artifactId>
+ <version>0.8.2.1</version>
+ </dependency>
+ <dependency>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
<version>2.5</version>
http://git-wip-us.apache.org/repos/asf/logging-log4j2/blob/53320c02/src/changes/changes.xml
----------------------------------------------------------------------
diff --git a/src/changes/changes.xml b/src/changes/changes.xml
index 76d4af9..7bf9bd9 100644
--- a/src/changes/changes.xml
+++ b/src/changes/changes.xml
@@ -31,6 +31,9 @@
Added support for Java 8 lambda expressions to lazily construct a log message only if
the requested log level is enabled.
</action>
+ <action issue="LOG4J2-1107" dev="ggregory" type="add" due-to="Mikael Ståldal">
+ New Appender for Apache Kafka.
+ </action>
<action issue="LOG4J2-812" dev="rgoers" type="update">
PatternLayout timestamp formatting performance improvement: replaced synchronized SimpleDateFormat with
Apache Commons FastDateFormat. This and better caching resulted in a ~3-30X faster timestamp formatting.