Posted to commits@trafficserver.apache.org by ig...@apache.org on 2011/05/17 01:01:38 UTC

svn commit: r1103947 [2/2] - in /trafficserver/site/branches/ats-cms/content/docs/trunk/admin: configuration-files/ event-logging-formats/ security-options/ traffic-line-commands/ traffic-server-error-messages/ working-log-files/

Modified: trafficserver/site/branches/ats-cms/content/docs/trunk/admin/working-log-files/index.en.mdtext
URL: http://svn.apache.org/viewvc/trafficserver/site/branches/ats-cms/content/docs/trunk/admin/working-log-files/index.en.mdtext?rev=1103947&r1=1103946&r2=1103947&view=diff
==============================================================================
--- trafficserver/site/branches/ats-cms/content/docs/trunk/admin/working-log-files/index.en.mdtext (original)
+++ trafficserver/site/branches/ats-cms/content/docs/trunk/admin/working-log-files/index.en.mdtext Mon May 16 23:01:38 2011
@@ -22,7 +22,7 @@ it receives and every error it detects.
 
 This chapter discusses the following topics: 
 
-* [Understanding Traffic Server Log Files](#UnderstandingTrafficEdgeLogFiles)
+* [Understanding Traffic Server Log Files](#UnderstandingTSLogFiles)
 * [Understanding Event Log Files](#UnderstandingEventLogFiles)
 * [Managing Event Log Files](#ManagingEventLogFiles)
 * [Choosing Event Log File Formats](#ChoosingEventLogFileFormats)
@@ -32,7 +32,6 @@ This chapter discusses the following top
 * [Viewing Logging Statistics](#ViewingLoggingStatistics)
 * [Viewing Log Files](#ViewingLogFiles)
 * [Example Event Log File Entries](#ExampleEventLogFileEntries)
-* [Support for Traditional Custom Logging](#SupportTraditionalCustomLogging)
 
 ## Understanding Traffic Server Log Files ## {#UnderstandingTSLogFiles}
 
@@ -41,12 +40,28 @@ processes and every error it detects in 
 types of log files: 
 
 * **Error log files** record information about why a particular transaction was in error. 
-* **Event log files** (also called **access log files**) record information about the state of each transaction Traffic Server processes. 
-* **System log files** record system information, including messages about the state of Traffic Server and errors/warnings it produces. This kind of information might include a note that event log files were rolled, a warning that cluster communication timed out, or an error indicating that Traffic Server was restarted.   
- All system information messages are logged with the system-wide logging facility **`syslog`** under the daemon facility. The `syslog.conf` configuration file (stored in the `/etc` directory) specifies where these messages are logged. A typical location is `/var/log/messages` (Linux).   
- The `syslog` process works on a system-wide basis, so it serves as the single repository for messages from all Traffic Server processes (including `traffic_server`, `traffic_manager`, and `traffic_cop`).   
- System information logs observe a static format. Each log entry in the log contains information about the date and time the error was logged, the hostname of the Traffic Server that reported the error, and a description of the error or warning.   
- Refer to [Traffic Server Error Messages](errors.htm) for a list of the messages logged by Traffic Server. 
+
+* **Event log files** (also called **access log files**) record information about the state
+  of each transaction Traffic Server processes. 
+
+* **System log files** record system information, including messages about the state of
+  Traffic Server and errors/warnings it produces. This kind of information might include a
+  note that event log files were rolled, a warning that cluster communication timed out, or
+  an error indicating that Traffic Server was restarted.   
+
+      All system information messages are logged with the system-wide logging facility **`syslog`**
+  under the daemon facility. The `syslog.conf` configuration file (stored in the `/etc` directory)
+  specifies where these messages are logged. A typical location is `/var/log/messages` (Linux).   
+
+      The `syslog` process works on a system-wide basis, so it serves as the single repository for
+  messages from all Traffic Server processes (including `traffic_server`, `traffic_manager`, and `traffic_cop`).   
+  
+      System information logs observe a static format. Each log entry in the log contains
+  information about the date and time the error was logged, the hostname of the Traffic
+  Server that reported the error, and a description of the error or warning.   
+
+      Refer to [Traffic Server Error Messages](../traffic-server-error-messages) for a list of the messages
+      logged by Traffic Server. 
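+
+For example, on many Linux systems the stock `syslog.conf` already contains a rule similar to
+the one sketched below, which routes daemon-facility messages (including those from the Traffic
+Server processes) to `/var/log/messages`; the exact rule varies by distribution:
+
+    :::text
+    daemon.*                                        /var/log/messages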
 
 By default, Traffic Server creates both error and event log files and records 
 system information in system log files. You can disable event logging and/or 
@@ -74,21 +89,46 @@ or when they reach a certain size.
 The following sections describe the Traffic Server logging system features 
 and discuss how to:
 
-* **Manage your event log files**  
- You can choose a central location for storing log files, set how much disk space to use for log files, and set how and when to roll log files. Refer to [Managing Event Log Files](#ManagingEventLogFiles). 
-* **Choose different event log file formats **  
- You can choose which standard log file formats you want to use for traffic analysis, such as Squid or Netscape. Alternatively, you can use the Traffic Server custom format, which is XML-based and enables you to institute more control over the type of information recorded in log files. Refer to [Choosing Event Log File Formats](#ChoosingEventLogFileFormats).   
-
-* **Roll event log files automatically**  
- Configure Traffic Server to roll event log files at specific intervals during the day or when they reach a certain size; this enables you to identify and manipulate log files that are no longer active. Refer to [Rolling Event Log Files](#RollingEventLogFiles). 
-* **Separate log files according to protocols and hosts**  
- Configure Traffic Server to create separate log files for different protocols. You can also configure Traffic Server to generate separate log files for requests served by different hosts. Refer to [Splitting Event Log Files](#SplittingEventLogFiles). 
-* **Collate log files from different Traffic Server nodes**  
- Designate one or more nodes on the network to serve as log collation servers. These servers, which might be standalone or part of Traffic Server, enable you to keep all logged information in well-defined locations. Refer to [Collating Event Log Files](#CollatingEventLogFiles). 
-* **View statistics about the logging system**  
- Traffic Server provides statistics about the logging system; you can access these statistics via Traffic Line. Refer to [Viewing Logging Statistics](#ViewingLoggingStatistics). 
-* **Interpret log file entries for the log file formats**  
- Refer to [Example Event Log File Entries](#ExampleEventLogFileEntries). 
+* **Manage your event log files**
+
+      You can choose a central location for storing log files, set how much disk space
+  to use for log files, and set how and when to roll log files.
+  Refer to [Managing Event Log Files](#ManagingEventLogFiles). 
+
+* **Choose different event log file formats**
+
+      You can choose which standard log file formats you want to use for traffic analysis,
+  such as Squid or Netscape. Alternatively, you can use the Traffic Server custom format,
+  which is XML-based and gives you more control over the type of information
+  recorded in log files. Refer to [Choosing Event Log File Formats](#ChoosingEventLogFileFormats).   
+
+* **Roll event log files automatically**
+
+      Configure Traffic Server to roll event log files at specific intervals during the day or when
+  they reach a certain size; this enables you to identify and manipulate log files that are no
+  longer active. Refer to [Rolling Event Log Files](#RollingEventLogFiles). 
+
+* **Separate log files according to protocols and hosts**
+
+      Configure Traffic Server to create separate log files for different protocols. You can also
+  configure Traffic Server to generate separate log files for requests served by different hosts.
+  Refer to [Splitting Event Log Files](#SplittingEventLogFiles). 
+
+* **Collate log files from different Traffic Server nodes**
+
+      Designate one or more nodes on the network to serve as log collation servers. These servers,
+  which might be standalone or part of Traffic Server, enable you to keep all logged information
+  in well-defined locations. Refer to [Collating Event Log Files](#CollatingEventLogFiles). 
+
+* **View statistics about the logging system**
+
+      Traffic Server provides statistics about the logging system; you can access these statistics
+  via Traffic Line. Refer to [Viewing Logging Statistics](#ViewingLoggingStatistics). 
+
+* **Interpret log file entries for the log file formats**
+
+      Refer to [Example Event Log File Entries](#ExampleEventLogFileEntries). 
+
 
 ## Managing Event Log Files ## {#ManagingEventLogFiles}
 
@@ -113,8 +153,17 @@ When the free space dwindles to the head
 Options](#SettingLogFileManagementOptions)), it enters a low space state and 
 takes the following actions: 
 
-* If the autodelete option (discussed in [Rolling Event Log Files](#RollingEventLogFiles)) is _enabled_, then Traffic Server identifies previously-rolled log files (i.e., log files with the `.old` extension). It starts deleting files one by one, beginning with the oldest file, until it emerges from the low state. Traffic Server logs a record of all deleted files in the system error log. 
-* If the autodelete option is _disabled_ or there are not enough old log files to delete for the system to emerge from its low space state, then Traffic Server issues a warning and continues logging until space is exhausted. When avilable space is consumed, event logging stops. Traffic Server resumes event logging when enough space becomes available for it to exit the low space state. To make space available, either explicitly increase the logging space limit or remove files from the logging directory manually. 
+* If the autodelete option (discussed in [Rolling Event Log Files](#RollingEventLogFiles))
+  is _enabled_, then Traffic Server identifies previously-rolled log files (i.e., log files
+  with the `.old` extension). It starts deleting files one by one, beginning with the oldest
+  file, until it emerges from the low state. Traffic Server logs a record of all deleted files in the system error log. 
+
+* If the autodelete option is _disabled_ or there are not enough old log files to delete
+  for the system to emerge from its low space state, then Traffic Server issues a warning
+  and continues logging until space is exhausted. When avilable space is consumed, event
+  logging stops. Traffic Server resumes event logging when enough space becomes available
+  for it to exit the low space state. To make space available, either explicitly increase
+  the logging space limit or remove files from the logging directory manually. 
 
 You can run a `cron` script in conjunction with Traffic Server to automatically 
 remove old log files from the logging directory before Traffic Server enters 
@@ -126,20 +175,10 @@ the logs and move to an archive location
 
 To set log management options, follow the steps below:
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. **Variable** **Description** 
-_`proxy.config.log.logfile_dir`_
-:   Specify the path to the directory in which you want to store event log files. This can be an absolute path or a path relative to the directory in which Traffic Server is installed. The default is `logs` located in the Traffic Server installation directory.  
-		**Note:** The directory you specify must already exist.
-_`proxy.config.log.max_space_mb_for_logs`_
-:   Enter the maximum amount of space you want to allocate to the logging directory. The default value is 2000 MB.  
-		**Note:** All files in the logging directory contribute to the space used, even if they are not log files.
-_`proxy.config.log.max_space_mb_headroom`_
-:   Enter the tolerance for the log space limit. The default value is 10 MB.
-
-4. Save and close the `records.config` file. 
-5. Navigate to the Traffic Server `bin` directory. 
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variables:
+    * [_`proxy.config.log.logfile_dir`_](../configuration-files/records.config#proxy.config.log.logfile_dir)
+    * [_`proxy.config.log.max_space_mb_for_logs`_](../configuration-files/records.config#proxy.config.log.max_space_mb_for_logs)
+    * [_`proxy.config.log.max_space_mb_headroom`_](../configuration-files/records.config#proxy.config.log.max_space_mb_headroom)
 6. Run the command `traffic_line -x` to apply the configuration changes.
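+
+For example (an illustrative sketch only; adjust the path and sizes to your site, and remember
+that the directory must already exist), the corresponding `records.config` entries might look
+like this:
+
+    :::text
+    CONFIG proxy.config.log.logfile_dir STRING /var/log/trafficserver
+    CONFIG proxy.config.log.max_space_mb_for_logs INT 2000
+    CONFIG proxy.config.log.max_space_mb_headroom INT 10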
 
 ## Choosing Event Log File Formats ## {#ChoosingEventLogFileFormats}
@@ -164,65 +203,27 @@ do not provide. Refer to [Using the Cust
 
 Set standard log file format options by following the steps below:
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. To use the Squid format, edit the following variables:
-3. **Variable** **Description** 
-`_proxy.config.log.squid_log_enabled_`
-:   Set this variable to 1 to enable the Squid log file format.
-`_proxy.config.log.squid_log_is_ascii_`
-:   Set this variable to 1 to enable ASCII mode.  
-		Set this variable to 0 to enable binary mode.
-`_proxy.config.log.squid_log_name_`
-:   Enter the name you want to use for Squid event log files. The default is `squid`. 
-		
-`_proxy.config.log.squid_log_header_`
-:   Enter the header text you want to display at the top of the Squid log files. 
-		Enter `NULL` if you do not want to use a header. 
-
+1. To use the Squid format, edit the following variables in the [`records.config`](../configuration-files/records.config) file:
+    * [_`proxy.config.log.squid_log_enabled`_](../configuration-files/records.config#proxy.config.log.squid_log_enabled)
+    * [_`proxy.config.log.squid_log_is_ascii`_](../configuration-files/records.config#proxy.config.log.squid_log_is_ascii)
+    * [_`proxy.config.log.squid_log_name`_](../configuration-files/records.config#proxy.config.log.squid_log_name)
+    * [_`proxy.config.log.squid_log_header`_](../configuration-files/records.config#proxy.config.log.squid_log_header)
 4. To use the Netscape Common format, edit the following variables: 
-5. **Variable** **Description** 
-`_proxy.config.log.common_log_enabled_`
-:   Set this variable to 1 to enable the Netscape Common log file format.
-`proxy.config.log.common_log_is_ascii`
-:   Set this variable to 1 to enable ASCII mode.  
-		Set this variable to 0 to enable binary mode.
-`_proxy.config.log.common_log_name_`
-:   Enter the name you want to use for Netscape Common event log files. The default 
-		is `common`.
-`_proxy.config.log.common_log_header_`
-:   Enter the header text you want to display at the top of the Netscape Common 
-		log files. Enter `NULL` if you do not want to use a header.
-
+    * [_`proxy.config.log.common_log_enabled`_](../configuration-files/records.config#proxy.config.log.common_log_enabled)
+    * [_`proxy.config.log.common_log_is_ascii`_](../configuration-files/records.config#proxy.config.log.common_log_is_ascii)
+    * [_`proxy.config.log.common_log_name`_](../configuration-files/records.config#proxy.config.log.common_log_name)
+    * [_`proxy.config.log.common_log_header`_](../configuration-files/records.config#proxy.config.log.common_log_header)
 6. To use the Netscape Extended format, edit the following variables: 
-7. **Variable** **Description** 
-`_proxy.config.log.extended_log_enabled_`
-:   Set this variable to 1 to enable the Netscape Extended log file format.
-`_proxy.config.log.extended_log_is_ascii_`
-:   Set this variable to 1 to enable ASCII mode.  
-		Set this variable to 0 to enable binary mode.
-`_proxy.config.log.extended_log_name_`
-:   Enter the name you want to use for Netscape Extended event log files. The default 
-		is `extended`.
-`_proxy.config.log.extended_log_header_`
-:   Enter the header text you want to display at the top of the Netscape Extended 
-		log files. Enter `NULL` if you do not want to use a header.
-
+    * [_`proxy.config.log.extended_log_enabled`_](../configuration-files/records.config#proxy.config.log.extended_log_enabled)
+    * [_`proxy.config.log.extended_log_is_ascii`_](../configuration-files/records.config#proxy.config.log.extended_log_is_ascii)
+    * [_`proxy.config.log.extended_log_name`_](../configuration-files/records.config#proxy.config.log.extended_log_name)
+    * [_`proxy.config.log.extended_log_header`_](../configuration-files/records.config#proxy.config.log.extended_log_header)
 8. To use the Netscape Extended-2 format, edit the following variables:
-9. **Variable** **Description** 
-`_proxy.config.log.extended2_log_enabled_`
-:   Set this variable to 1 to enable the Netscape Extended-2 log file format.
-`_proxy.config.log.extended2_log_is_ascii_`
-:   Set this variable to 1 to enable ASCII mode.  
-		Set this variable to 0 to enable binary mode.
-`_proxy.config.log.extended2_log_name_`
-:   Enter the name you want to use for Netscape Extended-2 event log files. The 
-		default is `extended2`.
-`_proxy.config.log.extended2_log_header_`
-:   Enter the header text you want to display at the top of the Netscape Extended-2 
-		log files. Enter `NULL` if you do not want to use a header.
-
-10. Save and close the `records.config` file. 
-11. Navigate to the Traffic Server `bin` directory.
+    * [_`proxy.config.log.extended2_log_enabled`_](../configuration-files/records.config#proxy.config.log.extended2_log_enabled)
+    * [_`proxy.config.log.extended2_log_is_ascii`_](../configuration-files/records.config#proxy.config.log.extended2_log_is_ascii)
+    * [_`proxy.config.log.extended2_log_name`_](../configuration-files/records.config#proxy.config.log.extended2_log_name)
+    * [_`proxy.config.log.extended2_log_header`_](../configuration-files/records.config#proxy.config.log.extended2_log_header)
 12. Run the command `traffic_line -x` to apply the configuration changes. 
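+
+For example, to enable the Squid format in ASCII mode (a sketch; the log name and header shown
+are the documented defaults), the `records.config` entries might look like this:
+
+    :::text
+    CONFIG proxy.config.log.squid_log_enabled INT 1
+    CONFIG proxy.config.log.squid_log_is_ascii INT 1
+    CONFIG proxy.config.log.squid_log_name STRING squid
+    CONFIG proxy.config.log.squid_log_header STRING NULL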
 
 ### Using the Custom Format ### {#UsingCustomFormat}
@@ -241,41 +242,39 @@ of objects to create custom log files, a
 log format, you must specify at least one `LogObject` definition (one log file 
 is produced for each `LogObject` definition). 
 
-* The `**LogFormat**` object defines the content of the log file using printf-style format strings. 
-* The `**LogFilter**` object defines a filter so that you include or exclude certain information from the log file. 
-* The `**LogObject**` object specifies all the information needed to produce a log file. Items marked with an asterisk (\*) are required.
-* \*The name of the log file. 
-  \*The format to be used. This can be a standard format (Squid or Netscape) or 
-a previously-defined custom format (i.e., a previously-defined `LogFormat` 
-object). 
-  The file mode: `ASCII`, `Binary`, or `ASCII_PIPE`. The default is `ASCII`.   
- The `ASCII_PIPE` mode writes log entries to a UNIX-named pipe (a buffer in memory); other processes can then read the data using standard I/O functions. The advantage of this option is that Traffic Server does not have to write to disk, which frees disk space and bandwidth for other tasks. When the buffer is full, Traffic Server drops log entries and issues an error message indicating how many entries were dropped. Because Traffic Server only writes complete log entries to the pipe, only full records are dropped. 
-  Any filters you want to use (i.e., previously-defined `LogFilter` objects). 
- 
-  The collation servers that are to receive the log files. 
-  The protocols you want to log. If the protocols tag is used, then Traffic Server 
-will only log transactions from the protocols listed; otherwise, all transactions 
-for all protocols are logged. 
-  The origin servers you want to log. If the `servers` tag is used, then Traffic 
-Server will only log transactions for the origin servers listed; otherwise, 
-transactions for all origin servers are logged. 
-  The header text you want the log files to contain. The header text appears 
-at the beginning of the log file, just before the first record. 
-  The log file rolling options.
+* The **`LogFormat`** object defines the content of the log file using printf-style format strings. 
+* The **`LogFilter`** object defines a filter so that you include or exclude certain information from the log file. 
+* The **`LogObject`** object specifies all the information needed to produce a log file.
+    * The name of the log file (required).
+    * The format to be used (required). This can be a standard format (Squid or Netscape) or
+      a previously-defined custom format (i.e., a previously-defined `LogFormat` object). 
+    * The file mode: `ASCII`, `Binary`, or `ASCII_PIPE`. The default is `ASCII`.   
+      The `ASCII_PIPE` mode writes log entries to a UNIX-named pipe (a buffer in memory);
+      other processes can then read the data using standard I/O functions. The advantage of
+      this option is that Traffic Server does not have to write to disk, which frees disk
+      space and bandwidth for other tasks. When the buffer is full, Traffic Server drops
+      log entries and issues an error message indicating how many entries were dropped.
+      Because Traffic Server only writes complete log entries to the pipe, only full records are dropped. 
+    * Any filters you want to use (i.e., previously-defined `LogFilter` objects). 
+    * The collation servers that are to receive the log files. 
+    * The protocols you want to log. If the `protocols` tag is used, then Traffic Server 
+      will only log transactions from the protocols listed; otherwise, all transactions 
+      for all protocols are logged. 
+    * The origin servers you want to log. If the `servers` tag is used, then Traffic 
+      Server will only log transactions for the origin servers listed; otherwise, 
+      transactions for all origin servers are logged. 
+    * The header text you want the log files to contain. The header text appears 
+      at the beginning of the log file, just before the first record. 
+    * The log file rolling options.
  
 
 ##### To generate a custom log format:  ##### {#generateacustomlogformat}
 
-1. In a text editor, open the `records.config` file located in the Traffic Server `config` directory. 
-2. Edit the following variables: 
-3. **Variable** **Description** 
-`_proxy.config.log.custom_logs_enabled_`
-:   Set this variable to 1 to enable custom logging.
-4. Save and close the `records.config` file. 
-5. Open the `logs_xml.config` file located in the Traffic Server `config` directory. 
-6. Add `LogFormat`, `LogFilter`, and `LogObject` specifications to the configuration file. For detailed information about this file, see [logs_xml.config](files.htm#logs_xml.config).
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variable:
+    * [_`proxy.config.log.custom_logs_enabled`_](../configuration-files/records.config#proxy.config.log.custom_logs_enabled)
+2. In the [`logs_xml.config`](../configuration-files/logs_xml.config) file, add
+   [`LogFormat`](../configuration-files/logs_xml.config#LogFormat), [`LogFilter`](../configuration-files/logs_xml.config#LogFilters), and [`LogObject`](../configuration-files/logs_xml.config#LogObject) specifications to the configuration file (a minimal sketch follows these steps).
 7. Save and close the `logs_xml.config` file. 
-8. Navigate to the Traffic Server `bin` directory. 
 9. Run the command `traffic_line -x` to apply your configuration changes. 
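+
+The following is a minimal sketch of such a specification (the format string and filename are
+illustrative; `cqts`, `cqu`, and `psql` are the logging fields also used in the summary-log
+examples below):
+
+    :::xml
+    <LogFormat>
+      <Name = "minimal"/>
+      <Format = "%<cqts> : %<cqu> : %<psql>"/>
+    </LogFormat>
+    <LogObject>
+      <Format = "minimal"/>
+      <Filename = "minimal"/>
+    </LogObject>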
 
 #### Creating Summary Log Files  #### {#CreatingSummaryLogFiles}
@@ -286,64 +285,94 @@ you can configure Traffic Server to crea
 a set of log entries over a specified period of time. This can significantly 
 reduce the size of the log files generated. 
 
-To generate a summary log file, create a `LogFormat` object in the XML-based 
-logging configuration file (`logs_xml.config`) using the SQL-like aggregate 
-operators below. You can apply each of these operators to specific fields, 
+To generate a summary log file, create a [`LogFormat`](../configuration-files/logs_xml.config#LogFormat) object in the
+XML-based  logging configuration file ([`logs_xml.config`](../configuration-files/logs_xml.config))
+using the SQL-like aggregate operators below. You can apply each of these operators to specific fields, 
 over a specified interval.
 
-* `COUNT `
-* `SUM `
-* `AVERAGE `
-* `FIRST `
-* `LAST `
+* `COUNT`
+* `SUM`
+* `AVERAGE`
+* `FIRST`
+* `LAST`
 
 ##### To create a summary log file format:  ##### {#createasummarylogfileformat}
 
-1. Access the `logs_xml.config` file located in the Traffic Server `config` directory. 
-2. Define the format of the log file as follows:  
-`<LogFormat>  
- <Name = "summary"/>  
- <Format = "%<_operator_(_field_)> : %<_operator_(_field_)>"/>  
- <Interval = "_n_"/>  
- </Format>  
-` where _`operator`_ is one of the five aggregate operators (`COUNT`, `SUM`, `AVERAGE`, `FIRST`, `LAST`), _`field`_ is the logging field you want to aggregate, and `   _n_` is the interval (in seconds) between summary log entries. You can specify more than one `_operator_` in the format line. For more information, refer to [logs_xml.config](files.htm#logs_xml.config).  
-  
- The following example format generates one entry every 10 seconds. Each entry contains the timestamp of the last entry of the interval, a count of the number of entries seen within that 10-second interval, and the sum of all bytes sent to the client:   
-`<LogFormat>  
- <Name = "summary"/>  
- <Format = "%<LAST(cqts)> : %<COUNT(*)> : %<SUM(psql)>"/>  
- <Interval = "10"/>  
- </Format>`  
-  
-**IMPORTANT: **You cannot create a format specification that contains both aggregate operators and regular fields. For example, the following specification would be invalid:   
-`<Format = "%<LAST(cqts)> : %<COUNT(*)> : %<SUM(psql)> : %<cqu>"/>`  
+1. In the [`logs_xml.config`](../configuration-files/logs_xml.config) file define the format of the log file as follows:
+
+        :::xml
+        <LogFormat>  
+          <Name = "summary"/>  
+          <Format = "%<operator(field)> : %<operator(field)>"/>  
+          <Interval = "n"/>  
+        </LogFormat>  
+  where _`operator`_ is one of the five aggregate operators (`COUNT`, `SUM`, `AVERAGE`, `FIRST`, `LAST`),
+  _`field`_ is the logging field you want to aggregate, and _`n`_ is the interval (in seconds) between
+  summary log entries. You can specify more than one _`operator`_ in the format line.
+  For more information, refer to [`logs_xml.config`](../configuration-files/logs_xml.config).  
+
+3. Run the command `traffic_line -x` to apply configuration changes. 
+
+The following example format generates one entry every 10 seconds. Each entry contains the timestamp of
+the last entry of the interval, a count of the number of entries seen within that 10-second interval,
+and the sum of all bytes sent to the client:
+
+    :::xml
+    <LogFormat>
+      <Name = "summary"/>
+      <Format = "%<LAST(cqts)> : %<COUNT(*)> : %<SUM(psql)>"/>
+      <Interval = "10"/>
+    </LogFormat>
+
+**IMPORTANT:** You cannot create a format specification that contains both aggregate operators and
+regular fields. For example, the following specification would be **invalid**:
+
+    :::xml
+    <Format = "%<LAST(cqts)> : %<COUNT(*)> : %<SUM(psql)> : %<cqu>"/>
 
-3. Define a `LogObject` that uses this format. 
-4. Save your changes and close the `logs_xml.config` file. Run the command `traffic_line -x` from the Traffic Server `bin` directory to apply configuration changes . 
 
 ### Choosing Binary or ASCII  ### {#ChoosingBinaryASCII}
 
-You can configure the Traffic Server to create event log files in either of 
-the following: 
+You can configure Traffic Server to create event log files in either of the following: 
+
+* **ASCII**
 
-* **ASCII**  
- These files are human-readable and can be processed using standard, off-the-shelf log analysis tools. However, Traffic Server must perform additional processing to create the files in ASCII, which mildly impacts system overhead. ASCII files also tend to be larger than the equivalent binary files. By default, ASCII log files have a `.log` filename extension. 
-* **Binary**  
- These files generate lower system overhead and generally occupy less space on the disk than ASCII files (depending on the type of information being logged). However, you must use a converter application before you can read or analyze binary files via standard tools. By default, binary log files use a `.blog` filename extension. 
+    These files are human-readable and can be processed using standard, off-the-shelf
+    log analysis tools. However, Traffic Server must perform additional processing to
+    create the files in ASCII, which mildly impacts system overhead. ASCII files also
+    tend to be larger than the equivalent binary files.
+    By default, ASCII log files have a `.log` filename extension. 
 
-While binary log files typically require less disk space, there are exceptions.   
- For example: the value `0` (zero) requires only one byte to store in ASCII, but requires four bytes when stored as a binary integer. Conversely: if you define a custom format that logs IP addresses, then a binary log file would only require four bytes of storage per 32-bit address. However, the same IP address stored in dot notation would require around 15 characters (bytes) in an ASCII log file. Therefore, it's wise to consider the type of data that will be logged before you select ASCII or binary for your log files. For example, you might try logging for one day using ASCII and then another day using binary. If the number of requests is roughly the same for both days, then you can calculate a rough metric that compares the two formats. 
+* **Binary**
+    
+    These files generate lower system overhead and generally occupy less space on the
+    disk than ASCII files (depending on the type of information being logged).
+    However, you must use a converter application before you can read or analyze binary
+    files via standard tools.
+    By default, binary log files use a `.blog` filename extension. 
+
+While binary log files typically require less disk space, there are exceptions.
+
+For example: the value `0` (zero) requires only one byte to store in ASCII, but requires
+four bytes when stored as a binary integer. Conversely: if you define a custom format that
+logs IP addresses, then a binary log file would only require four bytes of storage per
+32-bit address. However, the same IP address stored in dot notation would require around
+15 characters (bytes) in an ASCII log file. Therefore, it's wise to consider the type of
+data that will be logged before you select ASCII or binary for your log files. For example,
+you might try logging for one day using ASCII and then another day using binary.
+If the number of requests is roughly the same for both days, then you can calculate a rough
+metric that compares the two formats. 
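+
+As a rough, illustrative calculation (the entry count and per-entry layout are assumptions, not
+measurements):
+
+    :::text
+    1,000,000 entries/day, each logging one IPv4 address and one small integer
+
+    binary:  1,000,000 x (4 + 4) bytes           =  ~8 MB/day
+    ASCII:   1,000,000 x (~15 + 1 + separators)  = ~18 MB/day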
 
 For standard log formats, select Binary or ASCII (refer to [Setting Standard 
 Log File Format Options](#SettingStandardLogFileFormatOptions)). For the custom 
-log format, specify ASCII or Binary mode in the `LogObject` (refer to [Using 
-the Custom Format](#UsingCustomFormat)). In addition to the ASCII and binary 
+log format, specify ASCII or Binary mode in the [`LogObject`](../configuration-files/logs_xml.config#LogObject)
+(refer to [Using the Custom Format](#UsingCustomFormat)). In addition to the ASCII and binary 
 options, you can also write custom log entries to a UNIX-named pipe (i.e., 
 a buffer in memory). Other processes can then read the data using standard 
 I/O functions. The advantage of using this option is that Traffic Server does 
 not have to write to disk, which frees disk space and bandwidth for other tasks. 
 In addition, writing to a pipe does not stop when logging space is exhausted 
-because the pipe does not use disk space. Refer to [logs_xml.config](files.htm#logs_xml.config) 
+because the pipe does not use disk space. Refer to [`logs_xml.config`](../configuration-files/logs_xml.config) 
 for more information about the `ASCII_PIPE` option.
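+
+A sketch of how this could be selected in a custom log specification (the `ascii_pipe` value is
+an assumption of the expected spelling; check [`logs_xml.config`](../configuration-files/logs_xml.config)
+for the authoritative syntax):
+
+    :::xml
+    <LogObject>
+      <Format = "squid"/>
+      <Filename = "squid-pipe"/>
+      <Mode = "ascii_pipe"/>
+    </LogObject>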
 
 ### Using logcat to Convert Binary Logs to ASCII  ### {#UsinglogcatConvertBinaryLogsASCII}
@@ -351,43 +380,56 @@ for more information about the `ASCII_PI
 You must convert a binary log file to ASCII before you can analyze it using 
 standard tools. 
 
-##### To convert a binary log file to ASCII:  ##### {#convertabinarylogfileASCII}
+##### Converting a binary log file to ASCII:  ##### {#convertabinarylogfileASCII}
 
-1. Navigate to the directory that contains the binary log file. 
-2. Make sure that the `logcat` utility is in your path. 
-3. Enter the following command: `logcat _options input_filename_...` The following table describes the command-line options.   
+To convert a binary log file to ASCII, use the `traffic_logcat` command.
+Its command-line options are described below.
   
-**Option** **Description** 
-`-o _output_file_`
+    Usage: traffic_logcat [-o output-file | -a] [-CEhSVw2] [input-file ...]
+
+`-o output_file`
 :   Specifies where the command output is directed.
+
 `-a`
-:   Automatically generates the output filename based on the input filename. If the input is from stdin, then this option is ignored. For example:  
-		`logcat -a squid-1.blog squid-2.blog squid-3.blog`  
-		generates  
-		`squid-1.log, squid-2.log, squid-3.log`
+:   Automatically generates the output filename based on the input filename.
+    If the input is from stdin, then this option is ignored. For example:
+
+        traffic_logcat -a squid-1.blog squid-2.blog squid-3.blog
+
+    generates
+
+        squid-1.log squid-2.log squid-3.log
+
 `-S`
 :   Attempts to transform the input to Squid format, if possible.
+
 `-C`
 :   Attempts to transform the input to Netscape Common format, if possible.
+
 `-E`
 :   Attempts to transform the input to Netscape Extended format, if possible.
+
 `-2`
+:   Attempts to transform the input to Netscape Extended-2 format, if possible. 
-		
+
   
-**Note: **Use only one of the following options at any given time: `-S`, `-C`, `-E`, or` -2`.   
- If no input files are specified, then `logcat` reads from the standard input (`stdin`). If you do not specify an output file, then `logcat` writes to the standard output (`stdout`). 
- For example, to convert a binary log file to an ASCII file, you can use the 
-`logcat` command with either of the following options below: 
-`logcat binary_file > ascii_file  
-logcat -o ascii_file binary_file`
+**Note:** Use only one of the following options at any given time: `-S`, `-C`, `-E`, or `-2`.   
+
+If no input files are specified, then `traffic_logcat` reads from the standard input (`stdin`).
+If you do not specify an output file, then `traffic_logcat` writes to the standard output (`stdout`). 
+
+For example, to convert a binary log file to an ASCII file, you can use the
+`traffic_logcat` command with either of the following options below: 
+
+    traffic_logcat binary_file > ascii_file
+    traffic_logcat -o ascii_file binary_file
+
 The binary log file is not modified by this command. 
 
 ## Rolling Event Log Files ## {#RollingEventLogFiles}
 
 Traffic Server provides automatic log file rolling. This means that at specific 
 intervals during the day or when log files reach a certain size, Traffic Server 
-closes its current set of log files and opens new log files. You should roll 
+closes its current set of log files and opens new log files. Depending on how much
+traffic your servers handle, you should roll 
 log files several times a day. Rolling every six hours is a good guideline 
 to start with. 
 
@@ -406,11 +448,18 @@ renames the old file to include the foll
 
 * The format of the file (such as `squid.log`). 
 * The hostname of the Traffic Server that generated the log file. 
-* Two timestamps separated by a hyphen (-). The first timestamp is a **lower bound** for the timestamp of the first record in the log file. The lower bound is the time when the new buffer for log records is created. Under low load, the first timestamp in the filename can be different from the timestamp of the first entry. Under normal load, the first timestamp in the filename and the timestamp of the first entry are similar. The second timestamp is an **upper bound **for the timestamp of the last record in the log file (this is normally the rolling time). 
+* Two timestamps separated by a hyphen (`-`). The first timestamp is a **lower bound**
+  for the timestamp of the first record in the log file. The lower bound is the time
+  when the new buffer for log records is created. Under low load, the first timestamp
+  in the filename can be different from the timestamp of the first entry. Under normal
+  load, the first timestamp in the filename and the timestamp of the first entry are
+  similar. The second timestamp is an **upper bound** for the timestamp of the last
+  record in the log file (this is normally the rolling time). 
 * The suffix `.old`, which makes it easy for automated scripts to find rolled log files. 
 
-Timestamps have the following format:   
-`%Y%M%D.%Hh%Mm%Ss-%Y%M%D.%Hh%Mm%Ss`
+Timestamps have the following format:
+
+    %Y%M%D.%Hh%Mm%Ss-%Y%M%D.%Hh%Mm%Ss
 
 The following table describes the format: 
 
@@ -434,16 +483,24 @@ The following table describes the format
 
   
 
-The following is an example of a rolled log filename:   
-`squid.log.mymachine.20000912.12h00m00s-20000913.12h00m00s.old`  
+The following is an example of a rolled log filename:
+
+    squid.log.mymachine.20110912.12h00m00s-20110913.12h00m00s.old
   
- The logging system buffers log records before writing them to disk. When a log file is rolled, the log buffer might be partially full. If it is, then the first entry in the new log file will have a timestamp earlier than the time of rolling. When the new log file is rolled, its first timestamp will be a lower bound for the timestamp of the first entry. 
+The logging system buffers log records before writing them to disk.
+When a log file is rolled, the log buffer might be partially full.
+If it is, then the first entry in the new log file will have a timestamp
+earlier than the time of rolling. When the new log file is rolled, its first
+timestamp will be a lower bound for the timestamp of the first entry. 
+
+For example, suppose logs are rolled every three hours, and the first rolled log file is:
 
-For example, suppose logs are rolled every three hours, and the first rolled log file is:  
-`squid.log.mymachine.19980912.12h00m00s-19980912.03h00m00s.old`  
+    squid.log.mymachine.20110912.00h00m00s-20110912.03h00m00s.old
   
- If the lower bound for the first entry in the log buffer at 3:00:00 is 2:59:47, then the next log file will have the following timestamp when rolled:   
-`squid.log.mymachine.19980912.02h59m47s-19980912.06h00m00s.old`
+If the lower bound for the first entry in the log buffer at 3:00:00 is 2:59:47, then the
+next log file will have the following timestamp when rolled:
+
+    squid.log.mymachine.20110912.02h59m47s-20110912.06h00m00s.old
 
 The contents of a log file are always between the two timestamps. Log files 
 do not contain overlapping entries, even if successive timestamps appear to 
@@ -469,37 +526,20 @@ at 03:00 and 15:00 each day. 
 To set log file rolling options and/or configure Traffic Server to roll log 
 files when they reach a certain size, follow the steps below:
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. **Variable** **Description** 
-`_proxy.config.log.rolling_enabled_`
-:   Set this variable to one of the following values:  
-		`**1**` to enable log file rolling at specific intervals during the day.  
-		`**2**` to enable log file rolling when log files reach a specific size.  
-		`**3**` to enable log file rolling at specific intervals during the day or when log files reach a specific size (whichever occurs first).  
-		`**4**` to enable log file rolling at specific intervals during the day when log files reach a specific size (at a specified time if the file is of the specified size).
-`_proxy.config.log.rolling_size_mb_`
-:   Specifies the size that log files must reach before rolling takes place.
-`_proxy.config.log.rolling_offset_hr_`
-:   Set this variable to the specific time each day you want log file rolling to 
-		take place. Traffic Server forces the log file to be rolled at the offset hour 
-		each day.
-`_proxy.config.log.rolling_interval_sec_`
-:   Set this variable to the rolling interval in seconds. The minimum value is 
-		300 seconds (5 minutes). The maximum value is 86400 seconds (one day).** Note:** 
-		If you start Traffic Server within a few minutes of the next rolling time, 
-		then rolling might not occur until the next rolling time.
-`_proxy.config.log.auto_delete_rolled_file_`
-:   Set this variable to 1 to enable autodeletion of rolled files.
-
-4. Save and close the `records.config` file. 
-5. Navigate to the Traffic Server `bin` directory. 
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variables:
+    * [_`proxy.config.log.rolling_enabled`_](../configuration-files/records.config#proxy.config.log.rolling_enabled)
+    * [_`proxy.config.log.rolling_size_mb`_](../configuration-files/records.config#proxy.config.log.rolling_size_mb)
+    * [_`proxy.config.log.rolling_offset_hr`_](../configuration-files/records.config#proxy.config.log.rolling_offset_hr)
+    * [_`proxy.config.log.rolling_interval_sec`_](../configuration-files/records.config#proxy.config.log.rolling_interval_sec)
+    * [_`proxy.config.log.auto_delete_rolled_file`_](../configuration-files/records.config#proxy.config.log.auto_delete_rolled_file)
 6. Run the command `traffic_line -x` to apply the configuration changes.
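+
+For example (an illustrative sketch; `21600` seconds matches the six-hour guideline above, and
+the other values are assumptions you should adapt):
+
+    :::text
+    CONFIG proxy.config.log.rolling_enabled INT 1
+    CONFIG proxy.config.log.rolling_interval_sec INT 21600
+    CONFIG proxy.config.log.rolling_offset_hr INT 0
+    CONFIG proxy.config.log.auto_delete_rolled_file INT 1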
 
-You can fine-tune log file rolling settings for a custom log file in the `LogObject` 
-specification in the `logs_xml.config` file. The custom log file uses the rolling 
-settings in its `LogObject`, which override the default settings you specify 
-in Traffic Manager or the `records.config` file described above. 
+You can fine-tune log file rolling settings for a custom log file in the
+[`LogObject`](../configuration-files/logs_xml.config#LogObject) 
+specification in the [`logs_xml.config`](../configuration-files/logs_xml.config) file.
+The custom log file uses the rolling settings in its
+[`LogObject`](../configuration-files/logs_xml.config#LogObject),
+which override the default settings you specify in Traffic Manager or the
+[`records.config`](../configuration-files/records.config) file described above. 
 
 ## Splitting Event Log Files ## {#SplittingEventLogFiles}
 
@@ -521,7 +561,7 @@ all ICP transactions in the same log fil
 HTTP host log splitting enables you to record HTTP transactions for different 
 origin servers in separate log files. When HTTP host log splitting is enabled, 
 Traffic Server creates a separate log file for each origin server that's listed 
-in the` `[log_hosts.config](#EditingLogHostsConfigFile) file. When both ICP 
+in the [`log_hosts.config`](#Editingloghosts.configFile) file. When both ICP 
 and HTTP host log splitting are enabled, Traffic Server generates separate 
 log files for HTTP transactions (based on the origin server) and places all 
 ICP transactions in their own respective log files. For example, if the `log_hosts.config` 
@@ -529,8 +569,6 @@ file contains the two origin servers `un
 format is enabled, then Traffic Server generates the following log files: 
 
   
-**Log Filename** **Description** 
-
 `squid-uni.edu.log`
 :   All HTTP transactions for `uni.edu`
 
@@ -550,8 +588,6 @@ log file as HTTP transactions. Using the
 example, Traffic Server generates the log files below:
 
   
-**Log Filename** **Description** 
-
 `squid-uni.edu.log`
 :   All entries for `uni.edu`
 
@@ -570,24 +606,12 @@ that offer even greater control over log
 
 To set log splitting options, follow the steps below:
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. 
-4. **Variable** **Description** 
-`_proxy.config.log.separate_icp_logs_`
-:   Set this variable to 1 to record all ICP transactions in a separate log file.   
-		 Set this variable to 0 to record all ICP transactions in the same log file as HTTP transactions.   
-		 Set this variable to -1 to filter all ICP transactions from the standard log files.
-`_proxy.config.log.separate_host_logs_`
-:   Set this variable to 1 to record HTTP transactions for each host listed in `log_hosts.config` file in a separate log file.   
-		 Set this variable to 0 to record all HTTP transactions (for each host listed in `log_hosts.config`) in the same log file.
-
-5. 
-6. Save and close the `records.config` file. 
-7. Navigate to the Traffic Server `bin` directory. 
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variables:
+    * [_`proxy.config.log.separate_icp_logs`_](../configuration-files/records.config#proxy.config.log.separate_icp_logs)
+    * [_`proxy.config.log.separate_host_logs`_](../configuration-files/records.config#proxy.config.log.separate_host_logs)
 8. Run the command `traffic_line -x` to apply the configuration changes. 
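+
+For example, to give ICP transactions their own log files and to split HTTP logs by origin
+server (a sketch; `1` is the documented "enable" value for both variables):
+
+    :::text
+    CONFIG proxy.config.log.separate_icp_logs INT 1
+    CONFIG proxy.config.log.separate_host_logs INT 1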
 
-### Editing the log_hosts.config File  ### {#Editingloghosts.configFile}
+### Editing the log\_hosts.config File  ### {#Editingloghosts.configFile}
 
 The default `log_hosts.config` file is located in the Traffic Server `config` 
 directory. To record HTTP transactions for different origin servers in separate 
@@ -597,20 +621,20 @@ sports, then Traffic Server records all 
 and `www.foxsports.com` in a log file called `squid-sports.log` (if the Squid 
 format is enabled). 
 
-**Note: **If Traffic Server is clustered and you enable log file collation, 
+**Note:** If Traffic Server is clustered and you enable log file collation, 
 then you should use the same `log_hosts.config` file on every Traffic Server 
 node in the cluster. 
 
-##### To edit the log_hosts.config file follow the steps below:  ##### {#editloghosts.configfilefollowstepsbelow}
+##### To edit the log\_hosts.config file follow the steps below:  ##### {#editloghosts.configfilefollowstepsbelow}
 
-1. In a text editor, open the `log_hosts.config` file located in the Traffic Server `config` directory. 
-2. Enter the hostname of each origin server on a separate line in the file: for example,   
-`webserver1  
-webserver2  
-webserver3`  
+1. In the [`log_hosts.config`](../configuration-files/log_hosts.config) file, 
+   enter the hostname of each origin server on a separate line, e.g.:
+
+        :::text
+        webserver1
+        webserver2
+        webserver3
 
-3. Save and close the `log_hosts.config` file. 
-4. Navigate to the Traffic Server `bin` directory. 
 5. Run the command `traffic_line -x` to apply the configuration changes. 
 
 ##  Collating Event Log Files ## {#CollatingEventLogFiles}
@@ -633,19 +657,14 @@ server receives a log buffer from a clie
 as if it was generated locally. For a visual representation of this, see the 
 figure below. 
 
-![](images/logcolat.jpg)
+![Log collation](/images/admin/logcolat.jpg)
 
-> 
->   
-> 
-> _**Log collation **_
-> 
 
 If log clients cannot contact their log collation server, then they write their 
 log buffers to their local disks, into _orphan_ log files. Orphan log files 
 require manual collation. 
 
-**Note: **Log collation can have an impact on network performance. Because 
+**Note:** Log collation can have an impact on network performance. Because 
 all nodes are forwarding their log data buffers to the single collation server, 
 a bottleneck can occur. In addition, collated log files contain timestamp 
 information for each entry, but entries in the files do not appear in strict 
@@ -655,36 +674,27 @@ chronological order. You may want to sor
 To configure Traffic Server to collate event log files, you must perform the 
 following tasks: 
 
-* Either [Configure Traffic Server Node to Be a Collation Server](#ConfiguringTrafficEdgeCollationServer) or install & configure a [ Standalone Collator](#UsingStandaloneCollator). 
-* [Configure Traffic Server Nodes to Be a Collation Clients](#ConfiguringTrafficEdgeCollationClient). 
-* Add an attribute to the `LogObject` specification in the `logs_xml.config` file if you are using custom log file formats; refer to [Collating Custom Event Log Files](#CollatingCustomEventLogFiles).   
+* Either [Configure a Traffic Server Node to Be a Collation Server](#ConfiguringTSBeaCollationServer) or install & configure a [Standalone Collator](#UsingaStandaloneCollator). 
+* [Configure Traffic Server Nodes to Be Collation Clients](#ConfiguringTSBeaCollationClient). 
+* Add an attribute to the [`LogObject`](../configuration-files/logs_xml.config#LogObject) specification in the
+  [`logs_xml.config`](../configuration-files/logs_xml.config) file if you are using custom log file formats;
+  refer to [Collating Custom Event Log Files](#CollatingCustomEventLogFiles).   
 
 ### Configuring Traffic Server to Be a Collation Server  ### {#ConfiguringTSBeaCollationServer}
 
 To configure a Traffic Server node to be a collation server, simply edit a 
-configuration file via the steps below. ** **If you modify the `_collation 
-port_` or _`secret`_ after connections between the collation server and collation 
-clients have been established, then you must restart Traffic Server. 
-
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. 
-4. **Variable** **Description** 
-`_proxy.config.log.collation_mode_`
-:   Set this variable to 1 to set this Traffic Server node as a log collation server. 
-		
-`_proxy.config.log.collation_port_`
-:   Set this variable to specify the port number used for communication with collation 
-		clients. The default port number is 8085.
-`_proxy.config.log.collation_secret_`
-:   Set this variable to specify the password used to validate logging data and prevent the exchange of arbitrary information.  
-		All collation clients must use this same secret.
-
-5. 
-6. Save and close the `records.config` file. 
-7. Navigate to the Traffic Server `bin` directory. 
+configuration file via the steps below.
+
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variables (see the example below):
+    * [_`proxy.config.log.collation_mode`_](../configuration-files/records.config#proxy.config.log.collation_mode) (`1` for server mode)
+    * [_`proxy.config.log.collation_port`_](../configuration-files/records.config#proxy.config.log.collation_port)
+    * [_`proxy.config.log.collation_secret`_](../configuration-files/records.config#proxy.config.log.collation_secret)
 8. Run the command `traffic_line -x` to apply the configuration changes.
 
+**Note:** If you modify the collation port or secret
+after connections between the collation server and collation 
+clients have been established, then you must restart Traffic Server.
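+
+For example (a sketch; `8085` is the documented default port, and the secret is a placeholder
+you must replace with your own shared value):
+
+    :::text
+    CONFIG proxy.config.log.collation_mode INT 1
+    CONFIG proxy.config.log.collation_port INT 8085
+    CONFIG proxy.config.log.collation_secret STRING your-shared-secret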
+
 ### Using a Standalone Collator  ### {#UsingaStandaloneCollator}
 
 If you do not want the log collation server to be a Traffic Server node, then 
@@ -693,58 +703,42 @@ more of its power to collecting, process
 
 ##### To install and configure a standalone collator:  ##### {#installconfigureastandalonecollator}
 
-1. Configure your Traffic Server nodes as log collation clients; refer to [Configuring Traffic Server to Be a Collation Client](#ConfiguringTrafficEdgeCollationClient). 
-2. Copy the `sac` binary from the Traffic Server `bin` directory to the machine serving as the standalone collator. 
-3. Create a directory called `config` in the directory that contains the `sac` binary. 
-4. Create a directory called _`internal`_ in the `config` directory you created in Step 3 (above). This directory is used internally by the standalone collator to store lock files. 
-5. Copy the `records.config` file from a Traffic Server node configured to be a log collation client to the `config` directory you created in Step 3 on the standalone collator.   
- The `records.config` file contains the log collation secret and the port you specified when configuring Traffic Server nodes to be collation clients. The collation port and secret must be the same for all collation clients and servers.
-6. In a text editor, open the `records.config` file on the standalone collator and edit the following variable: 
-7. 
-8. **Variable** **Description** 
-`_proxy.config.log.logfile_dir_`
-:   Set this variable to specify the directory on which you want to store the log files. You can specify an absolute path to the directory or a path relative to the directory from which the `sac` binary is executed.  
-		Note: The directory must already exist on the machine serving as the standalone collator.``
-
-9. 
-10. Save and close the `records.config` file. 
-11. Enter the following command:  
-`sac -c config`
+1. Configure your Traffic Server nodes as log collation clients; refer to [Configuring Traffic Server to Be a Collation Client](#ConfiguringTSBeaCollationClient). 
+2. Copy the `traffic_sac` binary from the Traffic Server `bin` directory and the
+   `libtsutil.so` libraries from the Traffic Server `lib` directory to the machine serving as the standalone collator. 
+3. Create a directory called `config` in the directory that contains the `traffic_sac` binary. 
+4. Create a directory called _`internal`_ in the `config` directory you created in Step 3 (above).
+   This directory is used internally by the standalone collator to store lock files. 
+5. Copy the `records.config` file from a Traffic Server node configured to be a log
+   collation client to the `config` directory you created in Step 3 on the standalone collator.   
+     The `records.config` file contains the log collation secret and the port you specified when
+   configuring Traffic Server nodes to be collation clients. The collation port and secret must be the
+   same for all collation clients and servers.
+6. In the [`records.config`](../configuration-files/records.config) file on the standalone collator, edit the following variable:
+    * [_`proxy.config.log.logfile_dir`_](../configuration-files/records.config#proxy.config.log.logfile_dir)
+7. Enter the following command:
+
+        :::text
+        traffic_sac -c config
 
 ### Configuring Traffic Server to Be a Collation Client  ### {#ConfiguringTSBeaCollationClient}
 
 To configure a Traffic Server node to be a collation client, follow the steps 
-below. If you modify the `_collation port_` or _`secret`_ after connections 
+below. If you modify the collation port or secret after connections 
 between the collation clients and the collation server have been established, 
 then you must restart Traffic Server. 
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. 
-4. **Variable** **Description** 
-`_proxy.config.log.collation_mode_`
-:   Set this variable to 2 to configure this Traffic Server node to be a log collation client and send standard formatted log entries to the collation server.  
-		To send custom XML-based formatted log entries to the collation server, you must add a log object specification to the `logs_xml.config` file; refer to [Using the Custom Format](#UsingCustomFormat).
-`_proxy.config.log.collation_host_`
-:   Hostname of the collation server.
-`_proxy.config.log.collation_port_`
-:   The port used for communication with the collation server. The default port 
-		number is 8085.
-`_proxy.config.log.collation_secret_`
-:   The password used to validate logging data and prevent the exchange of arbitrary 
-		information.
-`_proxy.config.log.collation_host_tagged_`
-:   Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.  
-		Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry.
-`_proxy.config.log.max_space_mb_for_orphan_logs_`
-:   Set this variable to specify the maximum amount of space (in megabytes) you 
-		want to allocate to the logging directory on the collation client for storing 
-		orphan log files. Orphan log files are created when the log collation server 
-		cannot be contacted. The default value is 25 MB.
-
-5. 
-6. Save and close the `records.config` file. 
-7. Navigate to the Traffic Server `bin` directory. 
+1. In the [`records.config`](../configuration-files/records.config) file, edit the following variables
+   (a sample excerpt follows this procedure):
+    * [_`proxy.config.log.collation_mode`_](../configuration-files/records.config#proxy.config.log.collation_mode):
+      set to `2` to configure this node as a log collation client that sends standard
+      formatted log entries to the collation server.
+      To send custom XML-based formatted log entries to the collation server, you must also add
+      a log object specification to [`logs_xml.config`](../configuration-files/logs_xml.config);
+      refer to [Using the Custom Format](#UsingCustomFormat).
+    * [_`proxy.config.log.collation_host`_](../configuration-files/records.config#proxy.config.log.collation_host)
+    * [_`proxy.config.log.collation_port`_](../configuration-files/records.config#proxy.config.log.collation_port)
+    * [_`proxy.config.log.collation_secret`_](../configuration-files/records.config#proxy.config.log.collation_secret)
+    * [_`proxy.config.log.collation_host_tagged`_](../configuration-files/records.config#proxy.config.log.collation_host_tagged)
+    * [_`proxy.config.log.max_space_mb_for_orphan_logs`_](../configuration-files/records.config#proxy.config.log.max_space_mb_for_orphan_logs)
-8. Run the command `traffic_line -x` to apply the configuration changes.
+2. Run the command `traffic_line -x` to apply the configuration changes.
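+
+A minimal collation-client excerpt of `records.config` might look like the following;
+the hostname and secret are illustrative, while `8085` and `25` are the documented
+defaults for the collation port and orphan-log space:
+
+    :::text
+    CONFIG proxy.config.log.collation_mode INT 2
+    CONFIG proxy.config.log.collation_host STRING logs.example.com
+    CONFIG proxy.config.log.collation_port INT 8085
+    CONFIG proxy.config.log.collation_secret STRING foobar
+    CONFIG proxy.config.log.collation_host_tagged INT 1
+    CONFIG proxy.config.log.max_space_mb_for_orphan_logs INT 25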
 
 ### Collating Custom Event Log Files  ### {#CollatingCustomEventLogFiles}
@@ -755,17 +749,20 @@ file (in addition to configuring a colla
 
 ##### To collate custom event log files:  ##### {#collatecustomeventlogfiles}
 
-1. On each collation client, open the `logs_xml.config` file in a text editor (located in the Traffic Server `config` directory). 
-2. Add the `CollationHosts` attribute to the `LogObject` specification, as shown below:  
-`<LogObject>  
- <Format = "squid"/>  
- <Filename = "squid"/>  
- <CollationHosts="_ipaddress_:_port_"/>  
- </LogObject>`  
- where _`ipaddress`_ is the hostname or IP address of the collation server to which all log entries (for this object) are forwarded, and _`port`_ is the port number for communication between the collation server and collation clients.   
+1. On each collation client, edit the [`logs_xml.config`](../configuration-files/logs_xml.config) file
+   (a complete example follows this procedure). 
+2. Add the [`CollationHosts`](../configuration-files/logs_xml.config#LogsXMLObjectCollationHosts)
+   attribute to the [`LogObject`](../configuration-files/logs_xml.config#LogsXMLObjects) specification:
+
+        :::xml
+        <LogObject>
+          <Format = "squid"/>
+          <Filename = "squid"/>
+          <CollationHosts="ipaddress:port"/>
+        </LogObject>
+   where _`ipaddress`_ is the hostname or IP address of the collation server to which all log entries
+   (for this object) are forwarded, and _`port`_ is the port number for communication between the
+   collation server and collation clients.   
 
-3. Save and close the `logs_xml.config` file. 
-4. Navigate to the Traffic Server `bin` directory.
-5. Run the command `traffic_line -L` to restart Traffic Server on the local node or `traffic_line -M` to restart Traffic Server on all the nodes in a cluster.
+3. Run the command `traffic_line -L` to restart Traffic Server on the local node or `traffic_line -M` to restart Traffic Server on all the nodes in a cluster.
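+
+For example, a complete custom log specification that forwards its entries to a collation
+server might look like the following; the format string, filename, and collation host are
+illustrative:
+
+    :::xml
+    <LogFormat>
+      <Name = "minimal"/>
+      <Format = "%<chi> : %<cqu> : %<pssc>"/>
+    </LogFormat>
+
+    <LogObject>
+      <Format = "minimal"/>
+      <Filename = "minimal"/>
+      <CollationHosts="logs.example.com:8085"/>
+    </LogObject>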
 
 ## Viewing Logging Statistics ## {#ViewingLoggingStatistics}
@@ -775,17 +772,18 @@ information: 
 
 * How many log files (formats) are currently being written. 
 * The current amount of space used by the logging directory, which contains all event and error logs. 
-* The number of access events written to log files since Traffic Server installation. This counter represents one entry in one file; if multiple formats are being written, then a single event creates multiple event log entries. 
+* The number of access events written to log files since Traffic Server installation. This counter represents
+  one entry in one file; if multiple formats are being written, then a single event creates multiple event log entries. 
 * The number of access events skipped (because they were filtered) since Traffic Server installation. 
 * The number of access events written to the event error log since Traffic Server installation. 
 
 You can retrieve the statistics via the Traffic Line command-line interface; 
-refer to [Monitoring Traffic](monitor.htm). 
+refer to [Monitoring Traffic](../monitoring-traffic). 
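+
+For example, an individual logging statistic can be read from the command line with
+`traffic_line -r` followed by the statistic name; the statistic name shown below is
+illustrative and may differ between releases:
+
+    :::text
+    traffic_line -r proxy.process.log.log_files_space_used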
 
 ## Viewing Log Files ## {#ViewingLogFiles}
 
 You can view the system, event, and error log files Traffic Server creates. 
-You can also delete a log file or copy it to your local systemif you have the 
+You can also delete a log file or copy it to your local system if you have the 
 correct user permissions. Traffic Server displays only one MB of information 
 in the log file. If the log file you select to view is bigger than 1MB, then 
 Traffic Server truncates the file and displays a warning message indicating 
@@ -801,318 +799,186 @@ Netscape Extended-2. 
 
 The following figure shows a sample log entry in a `squid.log` file. 
 
-![](images/squid_format.jpg)
+![Sample log entry in squid.log](/images/admin/squid_format.jpg)
 
-The following table describes each field. 
+The following list describes each field; a sample entry in text form follows the list. 
 
-  
-**Field** **Symbol** **Description** 
 
-1
-:   cqtq:   The client request timestamp in Squid format; the time of the client request 
+`1`
+:   `cqtq`
+:   The client request timestamp in Squid format; the time of the client request 
     in seconds since January 1, 1970 UTC (with millisecond resolution). 
 
-2
-:   ttms:   The time Traffic Server spent processing the client request; the number of 
+`2`
+:   `ttms`
+:   The time Traffic Server spent processing the client request; the number of 
     milliseconds between the time the client established the connection with Traffic 
     Server and the time Traffic Server sent the last byte of the response back 
     to the client.
 
-3
-:   chi:   The IP address of the client’s host machine. 
-
-4
-:   crc/pssc:   The cache result code; how the cache responded to the request: `HIT`, `MISS`, and so on. Cache result codes are described [here](trouble.htm#0_21826).  
+`3`
+:   `chi`
+:   The IP address of the client’s host machine. 
+
+`4`
+:   `crc/pssc`
+:   The cache result code; how the cache responded to the request: `HIT`, `MISS`, and so on.
+     Cache result codes are described [here](XXX).  
      The proxy response status code (the HTTP response status code from Traffic Server to client). 
 
-5
-:   psql:   The length of the Traffic Server response to the client in bytes, including 
-    headers and content.
-
-6
-:   cqhm:   The client request method: `GET`, `POST`, and so on.
-
-7
-:   cquc:   The client request canonical URL; blanks and other characters that might not 
+`5`
+:   `psql`
+:   The length of the Traffic Server response to the client in bytes, including headers and content.
+
+`6`
+:   `cqhm`
+:   The client request method: `GET`, `POST`, and so on.
+
+`7`
+:   `cquc`
+:   The client request canonical URL; blanks and other characters that might not 
     be parsed by log analysis tools are replaced by escape sequences. The escape 
     sequence is a percentage sign followed by the ASCII code number of the replaced 
     character in hex.
 
-8
-:   caun:   The username of the authenticated client. A hyphen (-) means that no authentication 
+`8`
+:   `caun`
+:   The username of the authenticated client. A hyphen (`-`) means that no authentication 
     was required.
 
-9
-:   phr/pqsn:   The proxy hierarchy route; the route Traffic Server used to retrieve the object.  
-     The proxy request server name; the name of the server that fulfilled the request. If the request was a cache hit, then this field contains a hyphen (-).
-
-10
-:   psct:   The proxy response content type; the object content type taken from the Traffic 
+`9`
+:   `phr/pqsn`
+:   The proxy hierarchy route; the route Traffic Server used to retrieve the object.
+
+     The proxy request server name; the name of the server that fulfilled the request.
+     If the request was a cache hit, then this field contains a hyphen (`-`).
+
+`10`
+:   `psct`
+:   The proxy response content type; the object content type taken from the Traffic 
     Server response header.
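+
+For instance, a `squid.log` entry carrying the ten fields above might look like the
+following single line (all values are illustrative):
+
+    :::text
+    1305586115.123 45 10.0.0.1 TCP_MISS/200 1402 GET http://www.example.com/index.html - DIRECT/www.example.com text/html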
 
   
 
 ### Netscape Common  ### {#NetscapeCommon}
 
-The following figure shows a sample log entry in a `common.log` file. 
+The following figure shows a sample log entry in a `common.log` file. The list that
+follows describes fields 8 through 16, which the Netscape Extended format adds to the
+seven Netscape Common fields (fields 1 through 7, described below); an illustrative
+Extended entry follows the list.
 
-![](images/netscape_extended_format.jpg)
+![Sample log entry in common.log](/images/admin/netscape_common_format.jpg)
 
-### Netscape Extended  ### {#NetscapeExtended}
-
-The following figure shows a sample log entry in an `extended.log` file. 
-
-![](images/netscape_extended2_format.jpg)
-
-### Netscape Extended-2  ### {#NetscapeExtended-2}
-
-The following figure shows a sample log entry in an `extended2.log` file. The 
-following table describes each field. 
-
-  
-**Field** **Symbol** **Description** 
-
- 
-:    :   **Netscape Common**
-
-1
-:   chi:   The IP address of the client’s host machine.
-
-2
-:    :   This hyphen (-) is always present in Netscape log entries. 
-
-3
-:   caun:   The authenticated client username. A hyphen (-) means no authentication was 
-    required.
-
-4
-:   cqtd:   The date and time of the client request, enclosed in brackets.
-
-5
-:   cqtx:   The request line, enclosed in quotes.
-
-6
-:   pssc:   The proxy response status code (HTTP reply code).
-
-7
-:   pscl:   The length of the Traffic Server response to the client in bytes.
-
- 
-:    :   **Netscape Extended**
-
-8
-:   sssc:   The origin server response status code.
-
-9
-:   sshl:   The server response transfer length; the body length in the origin server response 
+`8`
+:   `sssc`
+:   The origin server response status code.
+
+`9`
+:   `sshl`
+:   The server response transfer length; the body length in the origin server response 
     to Traffic Server, in bytes.
 
-10
-:   cqbl:   The client request transfer length; the body length in the client request to 
+`10`
+:   `cqbl`
+:   The client request transfer length; the body length in the client request to 
     Traffic Server, in bytes.
 
-11
-:   pqbl:   The proxy request transfer length; the body length in the Traffic Server request 
+`11`
+:   `pqbl`
+:   The proxy request transfer length; the body length in the Traffic Server request 
     to the origin server. 
 
-12
-:   cqhl:   The client request header length; the header length in the client request to 
+`12`
+:   `cqhl`
+:   The client request header length; the header length in the client request to 
     Traffic Server. 
 
-13
-:   pshl:   The proxy response header length; the header length in the Traffic Server response 
+`13`
+:   `pshl`
+:   The proxy response header length; the header length in the Traffic Server response 
     to the client.
 
-14
-:   pqhl:   The proxy request header length; the header length in Traffic Server request 
+`14`
+:   `pqhl`
+:   The proxy request header length; the header length in Traffic Server request 
     to the origin server. 
 
-15
-:   sshl:   The server response header length; the header length in the origin server response 
+`15`
+:   `sshl`
+:   The server response header length; the header length in the origin server response 
     to Traffic Server. 
 
-16
-:   tts:   The time Traffic Server spent processing the client request; the number of 
+`16`
+:   `tts`
+:   The time Traffic Server spent processing the client request; the number of 
     seconds between the time that the client established the connection with Traffic 
     Server and the time that Traffic Server sent the last byte of the response 
     back to the client.
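+
+A complete Netscape Extended entry consists of the seven Netscape Common fields followed
+by fields 8 through 16 above, for example (all values are illustrative):
+
+    :::text
+    10.0.0.1 - - [16/May/2011:14:48:25 -0700] "GET http://www.example.com/index.html HTTP/1.1" 200 1402 200 1388 0 0 356 242 358 320 2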
 
- 
-:    :   **Netscape Extended2**
-
-17
-:   phr:   The proxy hierarchy route; the route Traffic Server used to retrieve the object. 
-    
-
-18
-:   cfsc:   The client finish status code: `FIN` if the client request completed successfully 
-    or `INTR` if the client request was interrupted.
-
-19
-:   pfsc:   The proxy finish status code: `FIN` if the Traffic Server request to the origin 
-    server completed successfully or `INTR` if the request was interrupted.
-
-20
-:   crc:   The cache result code; how the Traffic Server cache responded to the request: 
-    HIT, MISS, and so on. Cache result codes are described [here](trouble.htm#0_21826). 
-    
-
- 
-
-## Support for Traditional Custom Logging ## {#SupportforTraditionalCustomLogging}
 
-Traffic Server supports traditional custom logging in addition to the XML-based 
-custom logging, which is more versatile and therefore recommended.
-
-Traffic Server's format converter only converts traditional log configuration 
-files named `logs.config`. If you are using a traditional log configuration 
-file with a name other than `logs.config`, then you must convert the file yourself 
-after installation; refer to [Using cust_log_fmt_cnvrt](). If you opt to use 
-traditional custom logging instead of the more versatile XML-based custom logging, 
-then you must enable the traditional custom logging option manually. Furthermore, 
-if you want to configure Traffic Server as a collation client that sends log 
-entries in traditional custom formats, then you must set collation options 
-manually. Use the following procedures. 
-
-### Enabling Traditional Custom Logging  ### {#EnablingTraditionalCustomLogging}
-
-To enable custom logging, you must edit a configuration file manually. To edit 
-your existing traditional custom log formats, modify the [logs.config](files.htm#logs.config) 
-file as before.
-
-##### To enable traditional custom logging:  ##### {#enabletraditionalcustomlogging}
-
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. 
-4. **Variable** **Description** 
-`_proxy.config.log.custom_logs_enabled _`
-:   Set this variable to 1 to enable custom logging.
-6. Save and close the `records.config` file. 
-7. Navigate to the Traffic Server `bin` directory. 
-8. Run the command `traffic_line -x` to apply the configuration changes.
-
-To configure your Traffic Server node to be a collation client and send traditional 
-custom log files to the collation server, use the following procedure. 
-
-##### To configure Traffic Server as a collation client:  ##### {#configureTSasacollationclient}
+### Netscape Extended  ### {#NetscapeExtended}
 
-1. In a text editor, open the `records.config` file located in the `config` directory. 
-2. Edit the following variables: 
-3. 
-4. **Variable** **Description** 
-`_proxy.config.log.collation_mode_`
-:   Set this variable to 3 to configure this Traffic Server node to be a log collation client and send log entries in traditional custom formats to the collation server.  
-		 Set this variable to 4 to configure this Traffic Server node to be a log collation client and send log entries in both standard formats (Squid, Netscape) and traditional custom formats to the collation server.
-`_proxy.config.log.collation_host_`
-:   Specify the hostname of the collation server.
-`_proxy.config.log.collation_port_`
-:   Specify the port Traffic Server uses to communicate with the collation server. 
-		The default port number is 8085.
-`_proxy.config.log.collation_secret_`
-:   Specify the password used to validate logging data and prevent exchange of 
-		arbitrary information.
-`_proxy.config.log.collation_host_tagged_`
-:   Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.  
-		Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry. 
-
-5. 
-6. Save and close the `records.config` file. 
-7. Navigate to the Traffic Server `bin` directory. 
-8. Run the command `traffic_line -x` to apply the configuration changes. 
+The following figure shows a sample log entry in an `extended.log` file. 
 
-### Using cust_log_fmt_cnvrt  ### {#Usingcustlogfmtcnvrt}
+![Sample log entry in extended.log](/images/admin/netscape_extended_format.jpg)
 
-The format converter `cust_log_fmt_cnvrt` converts your traditional custom 
-log configuration file (`logs.config`) to an XML-based custom log configuration 
-file (`logs_xml.config`). This enables you to use XML-based custom logging. 
- 
+Fields 1 through 7 below make up the Netscape Common format; the Netscape Extended
+format appends fields 8 through 16 (listed above) to them. An example entry in the
+Common format follows the list.
+
+`1`
+:   `chi`
+:   The IP address of the client’s host machine.
 
-##### To run the format converter:  ##### {#runformatconverter}
+`2`
+:   `-`
+:   This hyphen (`-`) is always present in Netscape log entries. 
 
-1. Navigate to the Traffic Server `bin` directory. 
-2. Enter the command `cust_log_fmt_cnvrt` and include the options you want to use.   
- The format of the command is   
-`cust_log_fmt_cnvrt [-o output_file | -a] [-hnVw] [input_file..]`  
-  
- The following table describes the command-line options. 
-3. 
-4. **Option** **Description** 
-`-o _output_file_`
-:   Specifies the name of the output file; you can specify one output file only. If you specify multiple input files, then the converter combines the converted output from all files into a single output file.   
-		 This option and the** `-a`** option are mutually exclusive. If you want to create multiple output files from multiple input files, then you must use the** `-a`** option. If you do not specify an output file (using the `**-o**` or `**-a**` options), then output goes to `stdout`.
-`-a`
-:   Generates one output file for each input file. The format converter automatically creates the name of the output file from the name of the input file by replacing `.config` at the end of the filename with `_xml.config`.  
-		**Note:** If the source filename does not contain a `.config` extension, then the converter creates the new filename by appending `_xml.config` to the source filename.
-`-h`
-:   Displays a description of the `cust_log_fmt_cnvrt` options.
-`-n`
-:   Annotates the output file(s) with comments about the success or failure of 
-		the translation process for each of the input lines. This option produces a 
-		comment at the beginning of the output file(s) that describes errors the format 
-		converter encountered while converting the file. The comment includes line 
-		number, input line type (format, filter, or unknown), and either a success 
-		status or a description of the error encountered.
-`-V`
-:   Displays the version of the format converter you are running.
-`-w`
-:   Overwrites existing output files without warning. If you do not specify the**` 
-		-w`** option, then the format converter does not overwrite existing output 
-		files. If you specify an output file that already exists, then the converter 
-		does not convert the input file.
-_`input file`_
-:   Specifies the name of the input file. If you do not specify an input filename, 
-		then the format converter takes the input from `stdin`.
+`3`
+:   `caun`
+:   The authenticated client username. A hyphen (`-`) means no authentication was 
+    required.
 
-5. 
+`4`
+:   `cqtd`
+:   The date and time of the client request, enclosed in brackets.
+
+`5`
+:   `cqtx`
+:   The request line, enclosed in quotes.
+
+`6`
+:   `pssc`
+:   The proxy response status code (HTTP reply code).
+
+`7`
+:   `pscl`
+:   The length of the Traffic Server response to the client in bytes.
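+
+An entry containing only fields 1 through 7 corresponds to the Netscape Common format,
+for example (all values are illustrative):
+
+    :::text
+    10.0.0.1 - - [16/May/2011:14:48:25 -0700] "GET http://www.example.com/index.html HTTP/1.1" 200 1402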
 
-#### Examples  #### {#Examples}
+ 
+ 
+### Netscape Extended-2  ### {#NetscapeExtended-2}
 
-The following example converts the file `logs.config` and sends the results to `stdout`:   
-`cust_log_fmt_cnvrt logs.config`   
-  
- The following example converts a `logs.config` file into a `logs_xml.config` file and annotates the output file (`logs_xml.config`) with comments about the success or failure of the translation process. If a file named `logs_xml.config` already exists, then the format converter overwrites it.   
-`cust_log_fmt_cnvrt -o logs_xml.config -n -w logs.config`  
-  
-The following example converts the files `x.config`, `y.config`, and `z.config` into three separate output files called `x_xml.config`, `y_xml.config`, and `z_xml.config`:   
-`cust_log_fmt_cnvrt -a x.config y.config z.config `   
+The following figure shows a sample log entry in an `extended2.log` file. The list
+below describes fields 17 through 20, which the Netscape Extended-2 format appends to
+the Netscape Extended fields; an illustrative entry follows the list.
 
-         
+![Sample log entry in extended2.log](/images/admin/netscape_extended2_format.jpg)
 
-      
 
-   
+`17`
+:   `phr`
+:   The proxy hierarchy route; the route Traffic Server used to retrieve the object. 
+    
 
-   
-
-         
-
-* [Overview](intro.htm)
-* [Getting Started](getstart.htm)
-* [HTTP Proxy Caching ](http.htm)
-* [Explicit Proxy Caching](explicit.htm)
-* [Reverse Proxy and HTTP Redirects](reverse.htm)
-* [Hierarchical Caching](hier.htm)
-* [Configuring the Cache](cache.htm)
-* [Monitoring Traffic](monitor.htm)
-* [Configuring Traffic Server](configure.htm)
-* [Security Options](secure.htm)
-* [Working with Log Files](log.htm)
-* [Traffic Line Commands](cli.htm)
-* [Event Logging Formats](logfmts.htm)
-* [Configuration Files](files.htm) 
-* [Traffic Server Error Messages](errors.htm)
-* [FAQ and Troubleshooting Tips](trouble.htm)
-* [Traffic Server 管理员指南](ts_admin_chinese.pdf) (PDF)
-
-   
-
-   
-
- Copyright © 2011 [The Apache Software Foundation](http://www.apache.org/). 
-Licensed under the [Apache License](http://www.apache.org/licenses/), Version 
-2.0. Apache Traffic Server, Apache, the Apache Traffic Server logo, and the 
-Apache feather logo are trademarks of The Apache Software Foundation.
+`18`
+:   `cfsc`
+:   The client finish status code: `FIN` if the client request completed successfully 
+    or `INTR` if the client request was interrupted.
 
+`19`
+:   `pfsc`
+:   The proxy finish status code: `FIN` if the Traffic Server request to the origin 
+    server completed successfully or `INTR` if the request was interrupted.
 
+`20`
+:   `crc`
+:   The cache result code; how the Traffic Server cache responded to the request: 
+    `HIT`, `MISS`, and so on. Cache result codes are described [here](XXX). 
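+
+A Netscape Extended-2 entry appends fields 17 through 20 to the Netscape Extended fields,
+for example (all values are illustrative):
+
+    :::text
+    10.0.0.1 - - [16/May/2011:14:48:25 -0700] "GET http://www.example.com/index.html HTTP/1.1" 200 1402 200 1388 0 0 356 242 358 320 2 DIRECT FIN FIN TCP_MISS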
+    
 