Posted to commits@trafficserver.apache.org by bu...@apache.org on 2011/03/07 22:58:15 UTC

svn commit: r786638 - in /websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files: ./ index.en.html

Author: buildbot
Date: Mon Mar  7 21:58:14 2011
New Revision: 786638

Log:
Staging update by buildbot

Added:
    websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files/
    websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files/index.en.html

Added: websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files/index.en.html
==============================================================================
--- websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files/index.en.html (added)
+++ websites/staging/trafficserver/trunk/content/docs/trunk/admin/working-log-files/index.en.html Mon Mar  7 21:58:14 2011
@@ -0,0 +1,1106 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml"
+	xml:lang="en" lang="en">
+  <head>
+    
+    <link rel="stylesheet" href="/styles/pygments_style.css" />
+    <link rel="stylesheet" href="/styles/ts.css" />
+    
+    
+    <title></title>
+    <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTICE file distributed with this work for additional information regarding copyright ownership.  The ASF licenses this file to you under the Apache License, Version 2.0 (the &quot;License&quot;); you may not use this file except in compliance with the License.  You may obtain a copy of the License at . http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an &quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions and limitations under the License. -->
+  </head>
+
+  <body>
+    <div id="header" class="header">
+      <table><tr>
+      <td><a href="http://trafficserver.apache.org/"><img class="logo" alt="Apache Traffic Server" src="http://trafficserver.apache.org/images/trans_logo_350x69.png" /></a></td>
+      <td>
+        
+          <span class="doc-title">Adminstrator&#39;s Guide</span><br />
+        
+        <span class="title"></span>
+      </td>
+      </tr></table>
+    </div><!-- header -->
+    
+      <div class="nav">
+      
+      </div>
+    
+
+  <div class="main">
+    <div id="content">
+      <h1 id="WorkingwithLogFiles">Working with Log Files</h1>
+<p>Traffic Server generates log files that contain information about every request 
+it receives and every error it detects.</p>
+<p>This chapter discusses the following topics: </p>
+<ul>
+<li><a href="#UnderstandingTrafficEdgeLogFiles">Understanding Traffic Server Log Files</a></li>
+<li><a href="#UnderstandingEventLogFiles">Understanding Event Log Files</a></li>
+<li><a href="#ManagingEventLogFiles">Managing Event Log Files</a></li>
+<li><a href="#ChoosingEventLogFileFormats">Choosing Event Log File Formats</a></li>
+<li><a href="#RollingEventLogFiles">Rolling Event Log Files</a></li>
+<li><a href="#SplittingEventLogFiles">Splitting Event Log Files</a></li>
+<li><a href="#CollatingEventLogFiles">Collating Event Log Files</a></li>
+<li><a href="#ViewingLoggingStatistics">Viewing Logging Statistics</a></li>
+<li><a href="#ViewingLogFiles">Viewing Log Files</a></li>
+<li><a href="#ExampleEventLogFileEntries">Example Event Log File Entries</a></li>
+<li><a href="#SupportTraditionalCustomLogging">Support for Traditional Custom Logging</a></li>
+</ul>
+<h2 id="UnderstandingTSLogFiles">Understanding Traffic Server Log Files</h2>
+<p>Traffic Server records information about every transaction (or request) it 
+processes and every error it detects in log files. Traffic Server keeps three 
+types of log files: </p>
+<ul>
+<li><strong>Error log files</strong> record information about why a particular transaction was in error. </li>
+<li><strong>Event log files</strong> (also called <strong>access log files</strong>) record information about the state of each transaction Traffic Server processes. </li>
+<li><strong>System log files</strong> record system information, including messages about the state of Traffic Server and errors/warnings it produces. This kind of information might include a note that event log files were rolled, a warning that cluster communication timed out, or an error indicating that Traffic Server was restarted. <br />
+ All system information messages are logged with the system-wide logging facility <strong><code>syslog</code></strong> under the daemon facility. The <code>syslog.conf</code> configuration file (stored in the <code>/etc</code> directory) specifies where these messages are logged. A typical location is <code>/var/log/messages</code> (Linux). <br />
+ The <code>syslog</code> process works on a system-wide basis, so it serves as the single repository for messages from all Traffic Server processes (including <code>traffic_server</code>, <code>traffic_manager</code>, and <code>traffic_cop</code>). <br />
+ System information logs observe a static format. Each log entry in the log contains information about the date and time the error was logged, the hostname of the Traffic Server that reported the error, and a description of the error or warning. <br />
+ Refer to <a href="errors.htm">Traffic Server Error Messages</a> for a list of the messages logged by Traffic Server. </li>
+</ul>
+<p>By default, Traffic Server creates both error and event log files and records 
+system information in system log files. You can disable event logging and/or 
+error logging by setting the configuration variable <code>proxy.config.log.logging_enabled</code> 
+(in the <code>records.config</code> file) to one of the following values: </p>
+<ul>
+<li><code>0</code> to disable both event and error logging </li>
+<li><code>1</code> to enable error logging only </li>
+<li><code>2</code> to enable transaction logging only </li>
+<li><code>3</code> to enable both transaction and error logging</li>
+</ul>
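+<p>For example, to enable both transaction and error logging, the corresponding line in 
+<code>records.config</code> might look like the following sketch (check your installed file for 
+the exact existing entry): <br />
+<code>CONFIG proxy.config.log.logging_enabled INT 3</code><br />
+</p>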
+<h2 id="UnderstandingEventLogFiles">Understanding Event Log Files</h2>
+<p>Event log files record information about every request that Traffic Server 
+processes. By analyzing the log files, you can determine how many people use 
+the Traffic Server cache, how much information each person requested, what 
+pages are most popular, and so on. Traffic Server supports several standard 
+log file formats, such as Squid and Netscape, as well as user-defined custom 
+formats. You can analyze the standard format log files with off-the-shelf analysis 
+packages. To help with log file analysis, you can separate log files so they 
+contain information specific to protocol or hosts. You can also configure Traffic 
+Server to roll log files automatically at specific intervals during the day 
+or when they reach a certain size.</p>
+<p>The following sections describe the Traffic Server logging system features 
+and discuss how to:</p>
+<ul>
+<li><strong>Manage your event log files</strong><br />
+ You can choose a central location for storing log files, set how much disk space to use for log files, and set how and when to roll log files. Refer to <a href="#ManagingEventLogFiles">Managing Event Log Files</a>. </li>
+<li>
+<p><strong>Choose different event log file formats </strong><br />
+ You can choose which standard log file formats you want to use for traffic analysis, such as Squid or Netscape. Alternatively, you can use the Traffic Server custom format, which is XML-based and enables you to institute more control over the type of information recorded in log files. Refer to <a href="#ChoosingEventLogFileFormats">Choosing Event Log File Formats</a>. <br />
+</p>
+</li>
+<li>
+<p><strong>Roll event log files automatically</strong><br />
+ Configure Traffic Server to roll event log files at specific intervals during the day or when they reach a certain size; this enables you to identify and manipulate log files that are no longer active. Refer to <a href="#RollingEventLogFiles">Rolling Event Log Files</a>. </p>
+</li>
+<li><strong>Separate log files according to protocols and hosts</strong><br />
+ Configure Traffic Server to create separate log files for different protocols. You can also configure Traffic Server to generate separate log files for requests served by different hosts. Refer to <a href="#SplittingEventLogFiles">Splitting Event Log Files</a>. </li>
+<li><strong>Collate log files from different Traffic Server nodes</strong><br />
+ Designate one or more nodes on the network to serve as log collation servers. These servers, which might be standalone or part of Traffic Server, enable you to keep all logged information in well-defined locations. Refer to <a href="#CollatingEventLogFiles">Collating Event Log Files</a>. </li>
+<li><strong>View statistics about the logging system</strong><br />
+ Traffic Server provides statistics about the logging system; you can access these statistics via Traffic Line. Refer to <a href="#ViewingLoggingStatistics">Viewing Logging Statistics</a>. </li>
+<li><strong>Interpret log file entries for the log file formats</strong><br />
+ Refer to <a href="#ExampleEventLogFileEntries">Example Event Log File Entries</a>. </li>
+</ul>
+<h2 id="ManagingEventLogFiles">Managing Event Log Files</h2>
+<p>Traffic Server enables you to control where event log files are located and 
+how much space they can consume. Additionally you can specify how to handle 
+low disk space in the logging directory. </p>
+<h3 id="ChoosingLoggingDirectory">Choosing the Logging Directory</h3>
+<p>By default, Traffic Server writes all event log files in the <code>logs</code> directory 
+located in the directory where you installed Traffic Server. To use a different 
+directory, refer to <a href="#SettingLogFileManagementOptions">Setting Log File Management Options</a>. </p>
+<h3 id="ControllingLoggingSpace">Controlling Logging Space</h3>
+<p>Traffic Server enables you to control the amount of disk space that the logging 
+directory can consume. This allows the system to operate smoothly within a 
+specified space window for a long period of time. After you establish a space 
+limit, Traffic Server continues to monitor the space in the logging directory. 
+When the free space dwindles to the headroom limit (see <a href="#SettingLogFileManagementOptions">Setting Log File Management 
+Options</a>), it enters a low space state and 
+takes the following actions: </p>
+<ul>
+<li>If the autodelete option (discussed in <a href="#RollingEventLogFiles">Rolling Event Log Files</a>) is <em>enabled</em>, then Traffic Server identifies previously-rolled log files (i.e., log files with the <code>.old</code> extension). It starts deleting files one by one, beginning with the oldest file, until it emerges from the low space state. Traffic Server logs a record of all deleted files in the system error log. </li>
+<li>If the autodelete option is <em>disabled</em> or there are not enough old log files to delete for the system to emerge from its low space state, then Traffic Server issues a warning and continues logging until space is exhausted. When available space is consumed, event logging stops. Traffic Server resumes event logging when enough space becomes available for it to exit the low space state. To make space available, either explicitly increase the logging space limit or remove files from the logging directory manually. </li>
+</ul>
+<p>You can run a <code>cron</code> script in conjunction with Traffic Server to automatically 
+remove old log files from the logging directory before Traffic Server enters 
+the low space state. Relocate the old log files to a temporary partition, where 
+you can run a variety of log analysis scripts. Following analysis, either compress 
+the logs and move to an archive location, or simply delete them. </p>
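+<p>For example, a minimal nightly <code>cron</code> job of this kind might look like the 
+following sketch (the paths are placeholders; adjust them to your logging directory and 
+temporary partition): <br />
+<code>#!/bin/sh  
+# Move previously rolled log files out of the logging directory for analysis  
+mv /usr/local/trafficserver/logs/*.old /mnt/scratch/ts-logs/</code><br />
+</p>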
+<h3 id="SettingLogFileManagementOptions">Setting Log File Management Options</h3>
+<p>To set log management options, follow the steps below:</p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><em><code>proxy.config.log.logfile_dir</code></em></dt>
+<dd>Specify the path to the directory in which you want to store event log files. This can be an absolute path or a path relative to the directory in which Traffic Server is installed. The default is <code>logs</code> located in the Traffic Server installation directory.<br />
+<strong>Note:</strong> The directory you specify must already exist.</dd>
+<dt><em><code>proxy.config.log.max_space_mb_for_logs</code></em></dt>
+<dd>Enter the maximum amount of space you want to allocate to the logging directory. The default value is 2000 MB.<br />
+<strong>Note:</strong> All files in the logging directory contribute to the space used, even if they are not log files.</dd>
+<dt><em><code>proxy.config.log.max_space_mb_headroom</code></em></dt>
+<dd>Enter the tolerance for the log space limit. The default value is 10 MB.</dd>
+</dl>
+</li>
+<li>
+<p>Save and close the <code>records.config</code> file. </p>
+</li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
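+<p>For example, the variables above might appear in <code>records.config</code> as follows 
+(the values shown are illustrative defaults): <br />
+<code>CONFIG proxy.config.log.logfile_dir STRING logs  
+CONFIG proxy.config.log.max_space_mb_for_logs INT 2000  
+CONFIG proxy.config.log.max_space_mb_headroom INT 10</code><br />
+</p>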
+<h2 id="ChoosingEventLogFileFormats">Choosing Event Log File Formats</h2>
+<p>Traffic Server supports the following log file formats: </p>
+<ul>
+<li>Standard formats, such as Squid or Netscape; refer to <a href="#UsingStandardFormats">Using Standard Formats</a>. </li>
+<li>The Traffic Server custom format; refer to <a href="#UsingCustomFormat">Using the Custom Format</a>. </li>
+</ul>
+<p>In addition to the standard and custom log file format, you can choose whether to save log files in binary or ASCII; refer to <a href="#ChoosingBinaryASCII">Choosing Binary or ASCII</a>. <br />
+ Event log files consume substantial disk space. Creating log entries in multiple formats at the same time can consume disk resources very quickly and adversely impact Traffic Server performance. </p>
+<h3 id="UsingStandardFormats">Using Standard Formats</h3>
+<p>The standard log formats include Squid, Netscape Common, Netscape extended, 
+and Netscape Extended-2. The standard log file formats can be analyzed with 
+a wide variety of off-the-shelf log-analysis packages. You should use one of 
+the standard event log formats unless you need information that these formats 
+do not provide. Refer to <a href="#UsingCustomFormat">Using the Custom Format</a>.</p>
+<h4 id="SettingStandardLogFileFormatOptions">Setting Standard Log File Format Options</h4>
+<p>Set standard log file format options by following the steps below:</p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>To use the Squid format, edit the following variables:</li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.squid_log_enabled</code></dt>
+<dd>Set this variable to 1 to enable the Squid log file format.</dd>
+<dt><code>proxy.config.log.squid_log_is_ascii</code></dt>
+<dd>Set this variable to 1 to enable ASCII mode.<br />
+    Set this variable to 0 to enable binary mode.</dd>
+<dt><code>proxy.config.log.squid_log_name</code></dt>
+<dd>Enter the name you want to use for Squid event log files. The default is <code>squid</code>. </dd>
+<dt><code>proxy.config.log.squid_log_header</code></dt>
+<dd>Enter the header text you want to display at the top of the Squid log files. 
+    Enter <code>NULL</code> if you do not want to use a header. </dd>
+</dl>
+</li>
+</ol>
+<ol>
+<li>To use the Netscape Common format, edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.common_log_enabled</code></dt>
+<dd>Set this variable to 1 to enable the Netscape Common log file format.</dd>
+<dt><code>proxy.config.log.common_log_is_ascii</code></dt>
+<dd>Set this variable to 1 to enable ASCII mode.<br />
+    Set this variable to 0 to enable binary mode.</dd>
+<dt><code>proxy.config.log.common_log_name</code></dt>
+<dd>Enter the name you want to use for Netscape Common event log files. The default 
+    is <code>common</code>.</dd>
+<dt><code>proxy.config.log.common_log_header</code></dt>
+<dd>Enter the header text you want to display at the top of the Netscape Common 
+    log files. Enter <code>NULL</code> if you do not want to use a header.</dd>
+</dl>
+</li>
+<li>
+<p>To use the Netscape Extended format, edit the following variables: </p>
+</li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.extended_log_enabled</code></dt>
+<dd>Set this variable to 1 to enable the Netscape Extended log file format.</dd>
+<dt><code>proxy.config.log.extended_log_is_ascii</code></dt>
+<dd>Set this variable to 1 to enable ASCII mode.<br />
+    Set this variable to 0 to enable binary mode.</dd>
+<dt><code>proxy.config.log.extended_log_name</code></dt>
+<dd>Enter the name you want to use for Netscape Extended event log files. The default 
+    is <code>extended</code>.</dd>
+<dt><code>proxy.config.log.extended_log_header</code></dt>
+<dd>Enter the header text you want to display at the top of the Netscape Extended 
+    log files. Enter <code>NULL</code> if you do not want to use a header.</dd>
+</dl>
+</li>
+<li>
+<p>To use the Netscape Extended-2 format, edit the following variables:</p>
+</li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.extended2_log_enabled</code></dt>
+<dd>Set this variable to 1 to enable the Netscape Extended-2 log file format.</dd>
+<dt><code>proxy.config.log.extended2_log_is_ascii</code></dt>
+<dd>Set this variable to 1 to enable ASCII mode.<br />
+    Set this variable to 0 to enable binary mode.</dd>
+<dt><code>proxy.config.log.extended2_log_name</code></dt>
+<dd>Enter the name you want to use for Netscape Extended-2 event log files. The 
+    default is <code>extended2</code>.</dd>
+<dt><code>proxy.config.log.extended2_log_header</code></dt>
+<dd>Enter the header text you want to display at the top of the Netscape Extended-2 
+    log files. Enter <code>NULL</code> if you do not want to use a header.</dd>
+</dl>
+</li>
+<li>
+<p>Save and close the <code>records.config</code> file. </p>
+</li>
+<li>Navigate to the Traffic Server <code>bin</code> directory.</li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
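+<p>For example, to produce ASCII Squid-format logs with the default filename and no header, 
+the relevant <code>records.config</code> entries might look like this sketch: <br />
+<code>CONFIG proxy.config.log.squid_log_enabled INT 1  
+CONFIG proxy.config.log.squid_log_is_ascii INT 1  
+CONFIG proxy.config.log.squid_log_name STRING squid  
+CONFIG proxy.config.log.squid_log_header STRING NULL</code><br />
+</p>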
+<h3 id="UsingCustomFormat">Using the Custom Format</h3>
+<p>The XML-based custom log format is more flexible than the standard log file 
+formats and gives you more control over the type of information recorded in 
+log files. You should create a custom log format if you need data for analysis 
+that's not available in the standard formats. You can decide what information 
+to record for each Traffic Server transaction and create filters that specify 
+which transactions to log. </p>
+<p>The heart of the XML-based custom logging feature is the XML-based logging 
+configuration file (<code>logs_xml.config</code>) that enables you to create very modular 
+descriptions of logging objects. The <code>logs_xml.config</code> file uses three types 
+of objects to create custom log files, as detailed below. To generate a custom 
+log format, you must specify at least one <code>LogObject</code> definition (one log file 
+is produced for each <code>LogObject</code> definition). </p>
+<ul>
+<li>The <strong><code>LogFormat</code></strong> object defines the content of the log file using printf-style format strings. </li>
+<li>The <strong><code>LogFilter</code></strong> object defines a filter so that you include or exclude certain information from the log file. </li>
+<li>The <strong><code>LogObject</code></strong> object specifies all the information needed to produce a log file. Items marked with an asterisk (*) are required:<ul>
+<li>*The name of the log file. </li>
+<li>*The format to be used. This can be a standard format (Squid or Netscape) or 
+a previously-defined custom format (i.e., a previously-defined <code>LogFormat</code> 
+object). </li>
+<li>The file mode: <code>ASCII</code>, <code>Binary</code>, or <code>ASCII_PIPE</code>. The default is <code>ASCII</code>. <br />
+ The <code>ASCII_PIPE</code> mode writes log entries to a UNIX-named pipe (a buffer in memory); other processes can then read the data using standard I/O functions. The advantage of this option is that Traffic Server does not have to write to disk, which frees disk space and bandwidth for other tasks. When the buffer is full, Traffic Server drops log entries and issues an error message indicating how many entries were dropped. Because Traffic Server only writes complete log entries to the pipe, only full records are dropped. </li>
+<li>Any filters you want to use (i.e., previously-defined <code>LogFilter</code> objects). </li>
+<li>The collation servers that are to receive the log files. </li>
+<li>The protocols you want to log. If the protocols tag is used, then Traffic Server 
+will only log transactions from the protocols listed; otherwise, all transactions 
+for all protocols are logged. </li>
+<li>The origin servers you want to log. If the <code>servers</code> tag is used, then Traffic 
+Server will only log transactions for the origin servers listed; otherwise, 
+transactions for all origin servers are logged. </li>
+<li>The header text you want the log files to contain. The header text appears 
+at the beginning of the log file, just before the first record. </li>
+<li>The log file rolling options.</li>
+</ul></li>
+</ul>
+<h5 id="generateacustomlogformat">To generate a custom log format:</h5>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.custom_logs_enabled</code></dt>
+<dd>Set this variable to 1 to enable custom logging.</dd>
+<dt><code>proxy.config.log.xml_logs_config</code></dt>
+<dd>Make sure this variable is set to 1 (the default value).</dd>
+</dl>
+</li>
+<li>
+<p>Save and close the <code>records.config</code> file. </p>
+</li>
+<li>Open the <code>logs_xml.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+<li>Add <code>LogFormat</code>, <code>LogFilter</code>, and <code>LogObject</code> specifications to the configuration file. For detailed information about this file, see <a href="files.htm#logs_xml.config">logs_xml.config</a>.</li>
+<li>Save and close the <code>logs_xml.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply your configuration changes. </li>
+</ol>
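+<p>As an illustration only, a minimal <code>logs_xml.config</code> entry that defines a format, 
+a filter that accepts only GET requests, and a log object that ties them together might look 
+like the following sketch (the names <code>minimal</code> and <code>minimal_gets</code> are placeholders; see 
+<a href="files.htm#logs_xml.config">logs_xml.config</a> for the authoritative syntax): <br />
+<code>&lt;LogFormat&gt;  
+ &lt;Name = "minimal"/&gt;  
+ &lt;Format = "%&lt;chi&gt; : %&lt;cqu&gt; : %&lt;pssc&gt;"/&gt;  
+ &lt;/LogFormat&gt;  
+ &lt;LogFilter&gt;  
+ &lt;Name = "only_gets"/&gt;  
+ &lt;Action = "ACCEPT"/&gt;  
+ &lt;Condition = "cqhm MATCH GET"/&gt;  
+ &lt;/LogFilter&gt;  
+ &lt;LogObject&gt;  
+ &lt;Format = "minimal"/&gt;  
+ &lt;Filename = "minimal_gets"/&gt;  
+ &lt;Filters = "only_gets"/&gt;  
+ &lt;/LogObject&gt;</code><br />
+</p>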
+<h4 id="CreatingSummaryLogFiles">Creating Summary Log Files</h4>
+<p>Traffic Server performs several hundred operations per second; therefore, event 
+log files can quickly grow to large sizes. Using SQL-like aggregate operators, 
+you can configure Traffic Server to create summary log files that summarize 
+a set of log entries over a specified period of time. This can significantly 
+reduce the size of the log files generated. </p>
+<p>To generate a summary log file, create a <code>LogFormat</code> object in the XML-based 
+logging configuration file (<code>logs_xml.config</code>) using the SQL-like aggregate 
+operators below. You can apply each of these operators to specific fields, 
+over a specified interval.</p>
+<ul>
+<li><code>COUNT</code></li>
+<li><code>SUM</code></li>
+<li><code>AVERAGE</code></li>
+<li><code>FIRST</code></li>
+<li><code>LAST</code></li>
+</ul>
+<h5 id="createasummarylogfileformat">To create a summary log file format:</h5>
+<ol>
+<li>Access the <code>logs_xml.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+<li>Define the format of the log file as follows:<br />
+<code>&lt;LogFormat&gt;  
+ &lt;Name = "summary"/&gt;  
+ &lt;Format = "%&lt;_operator_(_field_)&gt; : %&lt;_operator_(_field_)&gt;"/&gt;  
+ &lt;Interval = "_n_"/&gt;  
+ &lt;/Format&gt;</code> where <em><code>operator</code></em> is one of the five aggregate operators (<code>COUNT</code>, <code>SUM</code>, <code>AVERAGE</code>, <code>FIRST</code>, <code>LAST</code>), <em><code>field</code></em> is the logging field you want to aggregate, and <code>_n_</code> is the interval (in seconds) between summary log entries. You can specify more than one <code>_operator_</code> in the format line. For more information, refer to <a href="files.htm#logs_xml.config">logs_xml.config</a>.<br />
+</li>
+</ol>
+<p>The following example format generates one entry every 10 seconds. Each entry contains the timestamp of the last entry of the interval, a count of the number of entries seen within that 10-second interval, and the sum of all bytes sent to the client: <br />
+<code>&lt;LogFormat&gt;  
+ &lt;Name = "summary"/&gt;  
+ &lt;Format = "%&lt;LAST(cqts)&gt; : %&lt;COUNT(*)&gt; : %&lt;SUM(psql)&gt;"/&gt;  
+ &lt;Interval = "10"/&gt;  
+ &lt;/LogFormat&gt;</code><br />
+</p>
+<p><strong>IMPORTANT: </strong>You cannot create a format specification that contains both aggregate operators and regular fields. For example, the following specification would be invalid: <br />
+<code>&lt;Format = "%&lt;LAST(cqts)&gt; : %&lt;COUNT(*)&gt; : %&lt;SUM(psql)&gt; : %&lt;cqu&gt;"/&gt;</code><br />
+</p>
+<ol>
+<li>Define a <code>LogObject</code> that uses this format. </li>
+<li>Save your changes and close the <code>logs_xml.config</code> file. Run the command <code>traffic_line -x</code> from the Traffic Server <code>bin</code> directory to apply the configuration changes. </li>
+</ol>
+<h3 id="ChoosingBinaryASCII">Choosing Binary or ASCII</h3>
+<p>You can configure Traffic Server to create event log files in either of 
+the following formats: </p>
+<ul>
+<li><strong>ASCII</strong><br />
+ These files are human-readable and can be processed using standard, off-the-shelf log analysis tools. However, Traffic Server must perform additional processing to create the files in ASCII, which mildly impacts system overhead. ASCII files also tend to be larger than the equivalent binary files. By default, ASCII log files have a <code>.log</code> filename extension. </li>
+<li><strong>Binary</strong><br />
+ These files generate lower system overhead and generally occupy less space on the disk than ASCII files (depending on the type of information being logged). However, you must use a converter application before you can read or analyze binary files via standard tools. By default, binary log files use a <code>.blog</code> filename extension. </li>
+</ul>
+<p>While binary log files typically require less disk space, there are exceptions. <br />
+ For example: the value <code>0</code> (zero) requires only one byte to store in ASCII, but requires four bytes when stored as a binary integer. Conversely: if you define a custom format that logs IP addresses, then a binary log file would only require four bytes of storage per 32-bit address. However, the same IP address stored in dot notation would require around 15 characters (bytes) in an ASCII log file. Therefore, it's wise to consider the type of data that will be logged before you select ASCII or binary for your log files. For example, you might try logging for one day using ASCII and then another day using binary. If the number of requests is roughly the same for both days, then you can calculate a rough metric that compares the two formats. </p>
+<p>For standard log formats, select Binary or ASCII (refer to <a href="#SettingStandardLogFileFormatOptions">Setting Standard 
+Log File Format Options</a>). For the custom 
+log format, specify ASCII or Binary mode in the <code>LogObject</code> (refer to <a href="#UsingCustomFormat">Using 
+the Custom Format</a>). In addition to the ASCII and binary 
+options, you can also write custom log entries to a UNIX-named pipe (i.e., 
+a buffer in memory). Other processes can then read the data using standard 
+I/O functions. The advantage of using this option is that Traffic Server does 
+not have to write to disk, which frees disk space and bandwidth for other tasks. 
+In addition, writing to a pipe does not stop when logging space is exhausted 
+because the pipe does not use disk space. Refer to <a href="files.htm#logs_xml.config">logs_xml.config</a> 
+for more information about the <code>ASCII_PIPE</code> option.</p>
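+<p>As a sketch, a <code>LogObject</code> that sends Squid-format entries to a named pipe rather than 
+a file might look like the following (the filename is illustrative; the pipe is created in the 
+logging directory): <br />
+<code>&lt;LogObject&gt;  
+ &lt;Format = "squid"/&gt;  
+ &lt;Filename = "squid_pipe"/&gt;  
+ &lt;Mode = "ascii_pipe"/&gt;  
+ &lt;/LogObject&gt;</code><br />
+ Another process can then consume the entries with ordinary reads, for example with <code>cat</code> or 
+<code>tail -f</code> on the pipe. <br />
+</p>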
+<h3 id="UsinglogcatConvertBinaryLogsASCII">Using logcat to Convert Binary Logs to ASCII</h3>
+<p>You must convert a binary log file to ASCII before you can analyze it using 
+standard tools. </p>
+<h5 id="convertabinarylogfileASCII">To convert a binary log file to ASCII:</h5>
+<ol>
+<li>Navigate to the directory that contains the binary log file. </li>
+<li>Make sure that the <code>logcat</code> utility is in your path. </li>
+<li>Enter the following command: <code>logcat options input_filename ...</code> The following table describes the command-line options. <br />
+</li>
+</ol>
+<dl>
+<dt><strong>Option</strong> <strong>Description</strong></dt>
+<dt><code>-o output_file</code></dt>
+<dd>Specifies where the command output is directed.</dd>
+<dt><code>-a</code></dt>
+<dd>Automatically generates the output filename based on the input filename. If the input is from stdin, then this option is ignored. For example:<br />
+<code>logcat -a squid-1.blog squid-2.blog squid-3.blog</code><br />
+    generates<br />
+<code>squid-1.log, squid-2.log, squid-3.log</code></dd>
+<dt><code>-S</code></dt>
+<dd>Attempts to transform the input to Squid format, if possible.</dd>
+<dt><code>-C</code></dt>
+<dd>Attempts to transform the input to Netscape Common format, if possible.</dd>
+<dt><code>-E</code></dt>
+<dd>Attempts to transform the input to Netscape Extended format, if possible.</dd>
+<dt><code>-2</code></dt>
+<dd>Attempt to transform the input to Netscape Extended-2 format, if possible. </dd>
+</dl>
+<p><strong>Note: </strong>Use only one of the following options at any given time: <code>-S</code>, <code>-C</code>, <code>-E</code>, or <code>-2</code>. <br />
+ If no input files are specified, then <code>logcat</code> reads from the standard input (<code>stdin</code>). If you do not specify an output file, then <code>logcat</code> writes to the standard output (<code>stdout</code>). 
+ For example, to convert a binary log file to an ASCII file, you can use the 
+<code>logcat</code> command with either of the following options below: 
+<code>logcat binary_file &gt; ascii_file  
+logcat -o ascii_file binary_file</code>
+The binary log file is not modified by this command. </p>
+<h2 id="RollingEventLogFiles">Rolling Event Log Files</h2>
+<p>Traffic Server provides automatic log file rolling. This means that at specific 
+intervals during the day or when log files reach a certain size, Traffic Server 
+closes its current set of log files and opens new log files. You should roll 
+log files several times a day. Rolling every six hours is a good guideline 
+to start with. </p>
+<p>Log file rolling offers the following benefits: </p>
+<ul>
+<li>It defines an interval over which log analysis can be performed. </li>
+<li>It keeps any single log file from becoming too large and helps to keep the logging system within the specified space limits. </li>
+<li>It provides an easy way to identify files that are no longer being used so that an automated script can clean the logging directory and run log analysis programs.</li>
+</ul>
+<h3 id="RolledLogFilenameFormat">Rolled Log Filename Format</h3>
+<p>Traffic Server provides a consistent naming scheme for rolled log files that 
+enables you to easily identify log files. When Traffic Server rolls a log file, 
+it saves and closes the old file before it starts a new file. Traffic Server 
+renames the old file to include the following information: </p>
+<ul>
+<li>The format of the file (such as <code>squid.log</code>). </li>
+<li>The hostname of the Traffic Server that generated the log file. </li>
+<li>Two timestamps separated by a hyphen (-). The first timestamp is a <strong>lower bound</strong> for the timestamp of the first record in the log file. The lower bound is the time when the new buffer for log records is created. Under low load, the first timestamp in the filename can be different from the timestamp of the first entry. Under normal load, the first timestamp in the filename and the timestamp of the first entry are similar. The second timestamp is an <strong>upper bound </strong>for the timestamp of the last record in the log file (this is normally the rolling time). </li>
+<li>The suffix <code>.old</code>, which makes it easy for automated scripts to find rolled log files. </li>
+</ul>
+<p>Timestamps have the following format: <br />
+<code>%Y%M%D.%Hh%Mm%Ss-%Y%M%D.%Hh%Mm%Ss</code></p>
+<p>The following table describes the format: </p>
+<dl>
+<dt><code>%Y</code></dt>
+<dd>The year in four-digit format. For example: 2000.</dd>
+<dt><code>%M</code></dt>
+<dd>The month in two-digit format, from 01-12. For example: 07.</dd>
+<dt><code>%D</code></dt>
+<dd>The day in two-digit format, from 01-31. For example: 19.</dd>
+<dt><code>%H</code></dt>
+<dd>The hour in two-digit format, from 00-23. For example: 21.</dd>
+<dt><code>%M</code></dt>
+<dd>The minute in two-digit format, from 00-59. For example: 52.</dd>
+<dt><code>%S</code></dt>
+<dd>The second in two-digit format, from 00-59. For example: 36.</dd>
+</dl>
+<p>The following is an example of a rolled log filename: <br />
+<code>squid.log.mymachine.20000912.12h00m00s-20000913.12h00m00s.old</code><br />
+</p>
+<p>The logging system buffers log records before writing them to disk. When a log file is rolled, the log buffer might be partially full. If it is, then the first entry in the new log file will have a timestamp earlier than the time of rolling. When the new log file is rolled, its first timestamp will be a lower bound for the timestamp of the first entry. </p>
+<p>For example, suppose logs are rolled every three hours, and the first rolled log file is:<br />
+<code>squid.log.mymachine.19980912.00h00m00s-19980912.03h00m00s.old</code><br />
+</p>
+<p>If the lower bound for the first entry in the log buffer at 3:00:00 is 2:59:47, then the next log file will have the following timestamp when rolled: <br />
+<code>squid.log.mymachine.19980912.02h59m47s-19980912.06h00m00s.old</code></p>
+<p>The contents of a log file are always between the two timestamps. Log files 
+do not contain overlapping entries, even if successive timestamps appear to 
+overlap. </p>
+<h3 id="RollingIntervals">Rolling Intervals</h3>
+<p>Log files are rolled at specific intervals relative to a given hour of the 
+day. Two options control when log files are rolled: </p>
+<ul>
+<li>The offset hour, which is an hour between 0 (midnight) and 23 </li>
+<li>The rolling interval </li>
+</ul>
+<p>Both the offset hour and the rolling interval determine when log file rolling 
+starts. Rolling occurs every rolling interval and at the offset hour. For example, 
+if the rolling interval is six hours and the offset hour is 0 (midnight), then 
+the logs will roll at midnight (00:00), 06:00, 12:00, and 18:00 each day. If 
+the rolling interval is 12 hours and the offset hour is 3, then logs will roll 
+at 03:00 and 15:00 each day. </p>
+<h3 id="SettingLogFileRollingOptions">Setting Log File Rolling Options</h3>
+<p>To set log file rolling options and/or configure Traffic Server to roll log 
+files when they reach a certain size, follow the steps below:</p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.rolling_enabled</code></dt>
+<dd>Set this variable to one of the following values:<br />
+<code>1</code> to enable log file rolling at specific intervals during the day.<br />
+<code>2</code> to enable log file rolling when log files reach a specific size.<br />
+<code>3</code> to enable log file rolling at specific intervals during the day or when log files reach a specific size (whichever occurs first).<br />
+<code>4</code> to enable log file rolling at specific intervals during the day, but only if the log file has reached the specified size at that time.</dd>
+<dt><code>proxy.config.log.rolling_size_mb</code></dt>
+<dd>Specifies the size (in megabytes) that log files must reach before rolling takes place.</dd>
+<dt><code>proxy.config.log.rolling_offset_hr</code></dt>
+<dd>Set this variable to the specific time each day you want log file rolling to 
+    take place. Traffic Server forces the log file to be rolled at the offset hour 
+    each day.</dd>
+<dt><code>proxy.config.log.rolling_interval_sec</code></dt>
+<dd>Set this variable to the rolling interval in seconds. The minimum value is 
+    300 seconds (5 minutes). The maximum value is 86400 seconds (one day).<strong> Note:</strong> 
+    If you start Traffic Server within a few minutes of the next rolling time, 
+    then rolling might not occur until the next rolling time.</dd>
+<dt><code>proxy.config.log.auto_delete_rolled_file</code></dt>
+<dd>Set this variable to 1 to enable autodeletion of rolled files.</dd>
+</dl>
+</li>
+<li>
+<p>Save and close the <code>records.config</code> file. </p>
+</li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
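+<p>For example, to roll log files every six hours starting at midnight, the relevant 
+<code>records.config</code> entries might look like this sketch (the interval of 21600 seconds 
+corresponds to six hours): <br />
+<code>CONFIG proxy.config.log.rolling_enabled INT 1  
+CONFIG proxy.config.log.rolling_interval_sec INT 21600  
+CONFIG proxy.config.log.rolling_offset_hr INT 0</code><br />
+</p>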
+<p>You can fine-tune log file rolling settings for a custom log file in the <code>LogObject</code> 
+specification in the <code>logs_xml.config</code> file. The custom log file uses the rolling 
+settings in its <code>LogObject</code>, which override the default settings you specify 
+in Traffic Manager or the <code>records.config</code> file described above. </p>
+<h2 id="SplittingEventLogFiles">Splitting Event Log Files</h2>
+<p>By default, Traffic Server uses standard log formats and generates log files 
+that contain HTTP &amp; ICP transactions in the same file. However, you can enable 
+log splitting if you prefer to log transactions for different protocols in 
+separate log files. </p>
+<h3 id="ICPLogSplitting">ICP Log Splitting</h3>
+<p>When ICP log splitting is enabled, Traffic Server records ICP transactions 
+in a separate log file with a name that contains <strong><code>icp</code></strong>. For example: if 
+you enable the Squid format, then all ICP transactions are recorded in the 
+<code>squid-icp.log</code> file. When you disable ICP log splitting, Traffic Server records 
+all ICP transactions in the same log file as HTTP transactions. </p>
+<h3 id="HTTPHostLogSplitting">HTTP Host Log Splitting</h3>
+<p>HTTP host log splitting enables you to record HTTP transactions for different 
+origin servers in separate log files. When HTTP host log splitting is enabled, 
+Traffic Server creates a separate log file for each origin server that's listed 
+in the <a href="#Editingloghosts.configFile"><code>log_hosts.config</code></a> file. When both ICP 
+and HTTP host log splitting are enabled, Traffic Server generates separate 
+log files for HTTP transactions (based on the origin server) and places all 
+ICP transactions in their own respective log files. For example, if the <code>log_hosts.config</code> 
+file contains the two origin servers <code>uni.edu</code> and <code>company.com</code> and Squid 
+format is enabled, then Traffic Server generates the following log files: </p>
+<p><strong>Log Filename</strong> <strong>Description</strong> </p>
+<dl>
+<dt><code>squid-uni.edu.log</code></dt>
+<dd>All HTTP transactions for <code>uni.edu</code></dd>
+<dt><code>squid-company.com.log</code></dt>
+<dd>All HTTP transactions for <code>company.com</code></dd>
+<dt><code>squid-icp.log</code></dt>
+<dd>All ICP transactions for all hosts</dd>
+<dt><code>squid.log</code></dt>
+<dd>All HTTP transactions for other hosts</dd>
+</dl>
+<p>If you disable ICP log splitting, then ICP transactions are placed in the same 
+log file as HTTP transactions. Using the hosts and log format from the previous 
+example, Traffic Server generates the log files below:</p>
+<p><strong>Log Filename</strong> <strong>Description</strong> </p>
+<dl>
+<dt><code>squid-uni.edu.log</code></dt>
+<dd>All entries for <code>uni.edu</code></dd>
+<dt><code>squid-company.com.log</code></dt>
+<dd>All entries for <code>company.com</code></dd>
+<dt><code>squid.log</code></dt>
+<dd>All other entries</dd>
+</dl>
+<p>Traffic Server also enables you to create XML-based <a href="#UsingCustomFormat">Custom Log Formats</a> 
+that offer even greater control over log file generation. </p>
+<h3 id="SettingLogSplittingOptions">Setting Log Splitting Options</h3>
+<p>To set log splitting options, follow the steps below:</p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.separate_icp_logs</code></dt>
+<dd>Set this variable to 1 to record all ICP transactions in a separate log file. <br />
+     Set this variable to 0 to record all ICP transactions in the same log file as HTTP transactions. <br />
+     Set this variable to -1 to filter all ICP transactions from the standard log files.</dd>
+<dt><code>proxy.config.log.separate_host_logs</code></dt>
+<dd>Set this variable to 1 to record HTTP transactions for each host listed in <code>log_hosts.config</code> file in a separate log file. <br />
+     Set this variable to 0 to record all HTTP transactions (for each host listed in <code>log_hosts.config</code>) in the same log file.</dd>
+</dl>
+</li>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
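+<p>For example, to keep ICP transactions and per-host HTTP transactions in their own log 
+files, the entries might read as follows: <br />
+<code>CONFIG proxy.config.log.separate_icp_logs INT 1  
+CONFIG proxy.config.log.separate_host_logs INT 1</code><br />
+</p>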
+<h3 id="Editingloghosts.configFile">Editing the log_hosts.config File</h3>
+<p>The default <code>log_hosts.config</code> file is located in the Traffic Server <code>config</code> 
+directory. To record HTTP transactions for different origin servers in separate 
+log files, you must specify the hostname of each origin server on a separate 
+line in the <code>log_hosts.config</code> file. For example, if you specify the keyword 
+sports, then Traffic Server records all HTTP transactions from <code>sports.yahoo.com</code> 
+and <code>www.foxsports.com</code> in a log file called <code>squid-sports.log</code> (if the Squid 
+format is enabled). </p>
+<p><strong>Note: </strong>If Traffic Server is clustered and you enable log file collation, 
+then you should use the same <code>log_hosts.config</code> file on every Traffic Server 
+node in the cluster. </p>
+<h5 id="editloghosts.configfilefollowstepsbelow">To edit the log_hosts.config file follow the steps below:</h5>
+<ol>
+<li>In a text editor, open the <code>log_hosts.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+<li>
+<p>Enter the hostname of each origin server on a separate line in the file: for example, <br />
+<code>webserver1  
+webserver2  
+webserver3</code><br />
+</p>
+</li>
+<li>
+<p>Save and close the <code>log_hosts.config</code> file. </p>
+</li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
+<h2 id="CollatingEventLogFiles">Collating Event Log Files</h2>
+<p>You can use the Traffic Server log file collation feature to collect all logged 
+information in one place. Log collation enables you to analyze a set of Traffic 
+Server clustered nodes as a whole (rather than as individual nodes) and to 
+use a large disk that might only be located on one of the nodes in the cluster. 
+Traffic Server collates log files by using one or more nodes as log collation 
+servers and all remaining nodes as log collation clients. When a Traffic Server 
+node generates a buffer of event log entries, it first determines if it is 
+the collation server or a collation client. The collation server node writes 
+all log buffers to its local disk, just as it would if log collation was not 
+enabled. Log collation servers can be standalone or they can be part of a node 
+running Traffic Server. </p>
+<p>The collation client nodes prepare their log buffers for transfer across the 
+network and send the buffers to the log collation server. When the log collation 
+server receives a log buffer from a client, it writes it to its own log file 
+as if it was generated locally. For a visual representation of this, see the 
+figure below. </p>
+<p><img alt="" src="images/logcolat.jpg" /></p>
+<blockquote>
+<p><em><strong>Log collation </strong></em></p>
+</blockquote>
+<p>If log clients cannot contact their log collation server, then they write their 
+log buffers to their local disks, into <em>orphan</em> log files. Orphan log files 
+require manual collation. </p>
+<p><strong>Note: </strong>Log collation can have an impact on network performance. Because 
+all nodes are forwarding their log data buffers to the single collation server, 
+a bottleneck can occur.<strong> </strong>In addition, collated log files contain timestamp 
+information for each entry, but entries in the files do not appear in strict 
+chronological order. You may want to sort collated log files before doing analysis. </p>
+<p>To configure Traffic Server to collate event log files, you must perform the 
+following tasks: </p>
+<ul>
+<li>Either <a href="#ConfiguringTrafficEdgeCollationServer">Configure Traffic Server Node to Be a Collation Server</a> or install &amp; configure a <a href="#UsingStandaloneCollator"> Standalone Collator</a>. </li>
+<li><a href="#ConfiguringTrafficEdgeCollationClient">Configure Traffic Server Nodes to Be a Collation Clients</a>. </li>
+<li>Add an attribute to the <code>LogObject</code> specification in the <code>logs_xml.config</code> file if you are using custom log file formats; refer to <a href="#CollatingCustomEventLogFiles">Collating Custom Event Log Files</a>. <br />
+</li>
+</ul>
+<h3 id="ConfiguringTSBeaCollationServer">Configuring Traffic Server to Be a Collation Server</h3>
+<p>To configure a Traffic Server node to be a collation server, simply edit a 
+configuration file via the steps below. If you modify the collation 
+port or secret after connections between the collation server and collation 
+clients have been established, then you must restart Traffic Server. </p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.collation_mode</code></dt>
+<dd>Set this variable to 1 to set this Traffic Server node as a log collation server. </dd>
+<dt><code>proxy.config.log.collation_port</code></dt>
+<dd>Set this variable to specify the port number used for communication with collation 
+    clients. The default port number is 8085.</dd>
+<dt><code>proxy.config.log.collation_secret</code></dt>
+<dd>Set this variable to specify the password used to validate logging data and prevent the exchange of arbitrary information.<br />
+    All collation clients must use this same secret.</dd>
+</dl>
+</li>
+</ol>
+<ol>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
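+<p>For example, a collation server listening on the default port might use entries like the 
+following sketch (the secret shown is a placeholder): <br />
+<code>CONFIG proxy.config.log.collation_mode INT 1  
+CONFIG proxy.config.log.collation_port INT 8085  
+CONFIG proxy.config.log.collation_secret STRING example_secret</code><br />
+</p>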
+<h3 id="UsingaStandaloneCollator">Using a Standalone Collator</h3>
+<p>If you do not want the log collation server to be a Traffic Server node, then 
+you can install and configure a standalone collator (SAC) that will dedicate 
+more of its power to collecting, processing, and writing log files.</p>
+<h5 id="installconfigureastandalonecollator">To install and configure a standalone collator:</h5>
+<ol>
+<li>Configure your Traffic Server nodes as log collation clients; refer to <a href="#ConfiguringTSBeaCollationClient">Configuring Traffic Server to Be a Collation Client</a>. </li>
+<li>Copy the <code>sac</code> binary from the Traffic Server <code>bin</code> directory to the machine serving as the standalone collator. </li>
+<li>Create a directory called <code>config</code> in the directory that contains the <code>sac</code> binary. </li>
+<li>Create a directory called <em><code>internal</code></em> in the <code>config</code> directory you created in Step 3 (above). This directory is used internally by the standalone collator to store lock files. </li>
+<li>Copy the <code>records.config</code> file from a Traffic Server node configured to be a log collation client to the <code>config</code> directory you created in Step 3 on the standalone collator. <br />
+ The <code>records.config</code> file contains the log collation secret and the port you specified when configuring Traffic Server nodes to be collation clients. The collation port and secret must be the same for all collation clients and servers.</li>
+<li>In a text editor, open the <code>records.config</code> file on the standalone collator and edit the following variable: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.logfile_dir</code></dt>
+<dd>Set this variable to specify the directory in which you want to store the log files. You can specify an absolute path to the directory or a path relative to the directory from which the <code>sac</code> binary is executed.<br />
+    Note: The directory must already exist on the machine serving as the standalone collator.</dd>
+</dl>
+</li>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Enter the following command:<br />
+<code>sac -c config</code></li>
+</ol>
+<h3 id="ConfiguringTSBeaCollationClient">Configuring Traffic Server to Be a Collation Client</h3>
+<p>To configure a Traffic Server node to be a collation client, follow the steps 
+below. If you modify the collation port or secret after connections 
+between the collation clients and the collation server have been established, 
+then you must restart Traffic Server. </p>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables: </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.collation_mode</code></dt>
+<dd>Set this variable to 2 to configure this Traffic Server node to be a log collation client and send standard formatted log entries to the collation server.<br />
+    To send custom XML-based formatted log entries to the collation server, you must add a log object specification to the <code>logs_xml.config</code> file; refer to <a href="#UsingCustomFormat">Using the Custom Format</a>.</dd>
+<dt><code>proxy.config.log.collation_host</code></dt>
+<dd>Hostname of the collation server.</dd>
+<dt><code>proxy.config.log.collation_port</code></dt>
+<dd>The port used for communication with the collation server. The default port 
+    number is 8085.</dd>
+<dt><code>proxy.config.log.collation_secret</code></dt>
+<dd>The password used to validate logging data and prevent the exchange of arbitrary 
+    information.</dd>
+<dt><code>proxy.config.log.collation_host_tagged</code></dt>
+<dd>Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.<br />
+    Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry.</dd>
+<dt><code>proxy.config.log.max_space_mb_for_orphan_logs</code></dt>
+<dd>Set this variable to specify the maximum amount of space (in megabytes) you 
+    want to allocate to the logging directory on the collation client for storing 
+    orphan log files. Orphan log files are created when the log collation server 
+    cannot be contacted. The default value is 25 MB.</dd>
+</dl>
+</li>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
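+<p>For example, a collation client pointing at a collation server named 
+<code>loghost.example.com</code> (a placeholder hostname) might use entries like the following 
+sketch: <br />
+<code>CONFIG proxy.config.log.collation_mode INT 2  
+CONFIG proxy.config.log.collation_host STRING loghost.example.com  
+CONFIG proxy.config.log.collation_port INT 8085  
+CONFIG proxy.config.log.collation_secret STRING example_secret  
+CONFIG proxy.config.log.collation_host_tagged INT 1  
+CONFIG proxy.config.log.max_space_mb_for_orphan_logs INT 25</code><br />
+</p>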
+<h3 id="CollatingCustomEventLogFiles">Collating Custom Event Log Files</h3>
+<p>If you use custom event log files, then you must edit the <code>logs_xml.config</code> 
+file (in addition to configuring a collation server and collation clients). </p>
+<h5 id="collatecustomeventlogfiles">To collate custom event log files:</h5>
+<ol>
+<li>On each collation client, open the <code>logs_xml.config</code> file in a text editor (located in the Traffic Server <code>config</code> directory). </li>
+<li>
+<p>Add the <code>CollationHosts</code> attribute to the <code>LogObject</code> specification, as shown below:<br />
+<code>&lt;LogObject&gt;  
+ &lt;Format = "squid"/&gt;  
+ &lt;Filename = "squid"/&gt;  
+ &lt;CollationHosts = "ipaddress:port"/&gt;  
+ &lt;/LogObject&gt;</code><br />
+ where <em><code>ipaddress</code></em> is the hostname or IP address of the collation server to which all log entries (for this object) are forwarded, and <em><code>port</code></em> is the port number for communication between the collation server and collation clients. <br />
+</p>
+</li>
+<li>
+<p>Save and close the <code>logs_xml.config</code> file. </p>
+</li>
+<li>Navigate to the Traffic Server <code>bin</code> directory.</li>
+<li>Run the command <code>traffic_line -L</code> to restart Traffic Server on the local node or <code>traffic_line -M</code> to restart Traffic Server on all the nodes in a cluster.</li>
+</ol>
+<h2 id="ViewingLoggingStatistics">Viewing Logging Statistics</h2>
+<p>Traffic Server generates logging statistics that enable you to see the following 
+information: </p>
+<ul>
+<li>How many log files (formats) are currently being written. </li>
+<li>The current amount of space used by the logging directory, which contains all event and error logs. </li>
+<li>The number of access events written to log files since Traffic Server installation. This counter represents one entry in one file; if multiple formats are being written, then a single event creates multiple event log entries. </li>
+<li>The number of access events skipped (because they were filtered) since Traffic Server installation. </li>
+<li>The number of access events written to the event error log since Traffic Server installation. </li>
+</ul>
+<p>You can retrieve the statistics via the Traffic Line command-line interface; 
+refer to <a href="monitor.htm">Monitoring Traffic</a>. </p>
+<h2 id="ViewingLogFiles">Viewing Log Files</h2>
+<p>You can view the system, event, and error log files Traffic Server creates. 
+You can also delete a log file or copy it to your local system if you have the 
+correct user permissions. Traffic Server displays only 1 MB of information 
+in the log file. If the log file you select to view is larger than 1 MB, then 
+Traffic Server truncates the file and displays a warning message indicating 
+that the file is too big.</p>
+<h2 id="ExampleEventLogFileEntries">Example Event Log File Entries</h2>
+<p>This section shows an example log file entry in each of the standard log formats 
+supported by Traffic Server: Squid, Netscape Common, Netscape Extended, and 
+Netscape Extended-2. </p>
+<h3 id="SquidFormat">Squid Format</h3>
+<p>The following figure shows a sample log entry in a <code>squid.log</code> file. </p>
+<p><img alt="" src="images/squid_format.jpg" /></p>
+<p>The following table describes each field. </p>
+<p><strong>Field</strong> <strong>Symbol</strong> <strong>Description</strong> </p>
+<dl>
+<dt>1</dt>
+<dd>cqtq:   The client request timestamp in Squid format; the time of the client request 
+in seconds since January 1, 1970 UTC (with millisecond resolution). </dd>
+<dt>2</dt>
+<dd>ttms:   The time Traffic Server spent processing the client request; the number of 
+milliseconds between the time the client established the connection with Traffic 
+Server and the time Traffic Server sent the last byte of the response back 
+to the client.</dd>
+<dt>3</dt>
+<dd>chi:   The IP address of the client’s host machine. </dd>
+<dt>4</dt>
+<dd>crc/pssc:   The cache result code; how the cache responded to the request: <code>HIT</code>, <code>MISS</code>, and so on. Cache result codes are described <a href="trouble.htm#0_21826">here</a>.<br />
+ The proxy response status code (the HTTP response status code from Traffic Server to the client). </dd>
+<dt>5</dt>
+<dd>psql:   The length of the Traffic Server response to the client in bytes, including 
+headers and content.</dd>
+<dt>6</dt>
+<dd>cqhm:   The client request method: <code>GET</code>, <code>POST</code>, and so on.</dd>
+<dt>7</dt>
+<dd>cquc:   The client request canonical URL; blanks and other characters that might not 
+be parsed by log analysis tools are replaced by escape sequences. The escape 
+sequence is a percentage sign followed by the ASCII code number of the replaced 
+character in hex.</dd>
+<dt>8</dt>
+<dd>caun:   The username of the authenticated client. A hyphen (-) means that no authentication 
+was required.</dd>
+<dt>9</dt>
+<dd>phr/pqsn:   The proxy hierarchy route; the route Traffic Server used to retrieve the object.<br />
+ The proxy request server name; the name of the server that fulfilled the request. If the request was a cache hit, then this field contains a hyphen (-).</dd>
+<dt>10</dt>
+<dd>psct:   The proxy response content type; the object content type taken from the Traffic 
+Server response header.</dd>
+</dl>
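+<p>For reference, a representative <code>squid.log</code> entry (all values are illustrative) 
+contains the ten fields above in order:<br />
+<code>1050254967.632 1112 209.131.48.79 TCP_MISS/200 412 GET http://www.example.com/ - DIRECT/www.example.com text/html</code></p>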
+<h3 id="NetscapeCommon">Netscape Common</h3>
+<p>The following figure shows a sample log entry in a <code>common.log</code> file. </p>
+<p><img alt="" src="images/netscape_extended_format.jpg" /></p>
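+<p>For reference, a representative <code>common.log</code> entry (values are illustrative) contains 
+fields 1 through 7 of the table in the Netscape Extended-2 section below:<br />
+<code>209.131.48.79 - - [25/Feb/2011:12:05:32 -0800] "GET http://www.example.com/ HTTP/1.0" 200 412</code></p>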
+<h3 id="NetscapeExtended">Netscape Extended</h3>
+<p>The following figure shows a sample log entry in an <code>extended.log</code> file. </p>
+<p><img alt="" src="images/netscape_extended2_format.jpg" /></p>
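+<p>A representative <code>extended.log</code> entry (values are illustrative) appends the origin-server 
+status, transfer-length, header-length, and timing fields (fields 8 through 16 below) to the Common fields:<br />
+<code>209.131.48.79 - - [25/Feb/2011:12:05:32 -0800] "GET http://www.example.com/ HTTP/1.0" 200 412 200 412 0 0 324 331 320 315 1</code></p>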
+<h3 id="NetscapeExtended-2">Netscape Extended-2</h3>
+<p>The Netscape Extended-2 format (<code>extended2.log</code>) includes every field in the 
+following table, which describes each field in the Netscape Common, Extended, and Extended-2 formats. </p>
+<dl>
+<dt><strong>Field</strong> <strong>Symbol</strong> <strong>Description</strong> </dt>
+<dd>
+<p><strong>Netscape Common</strong></p>
+</dd>
+<dt>1</dt>
+<dd>
+<p>chi:   The IP address of the client’s host machine.</p>
+</dd>
+<dt>2</dt>
+<dd>
+<p>This hyphen (-) is always present in Netscape log entries. </p>
+</dd>
+<dt>3</dt>
+<dd>
+<p>caun:   The authenticated client username. A hyphen (-) means no authentication was 
+required.</p>
+</dd>
+<dt>4</dt>
+<dd>
+<p>cqtd:   The date and time of the client request, enclosed in brackets.</p>
+</dd>
+<dt>5</dt>
+<dd>
+<p>cqtx:   The request line, enclosed in quotes.</p>
+</dd>
+<dt>6</dt>
+<dd>
+<p>pssc:   The proxy response status code (HTTP reply code).</p>
+</dd>
+<dt>7</dt>
+<dd>
+<p>pscl:   The length of the Traffic Server response to the client in bytes.</p>
+</dd>
+<dd>
+<p><strong>Netscape Extended</strong></p>
+</dd>
+<dt>8</dt>
+<dd>
+<p>sssc:   The origin server response status code.</p>
+</dd>
+<dt>9</dt>
+<dd>
+<p>sscl:   The server response transfer length; the body length in the origin server response 
+to Traffic Server, in bytes.</p>
+</dd>
+<dt>10</dt>
+<dd>
+<p>cqbl:   The client request transfer length; the body length in the client request to 
+Traffic Server, in bytes.</p>
+</dd>
+<dt>11</dt>
+<dd>
+<p>pqbl:   The proxy request transfer length; the body length in the Traffic Server request 
+to the origin server. </p>
+</dd>
+<dt>12</dt>
+<dd>
+<p>cqhl:   The client request header length; the header length in the client request to 
+Traffic Server. </p>
+</dd>
+<dt>13</dt>
+<dd>
+<p>pshl:   The proxy response header length; the header length in the Traffic Server response 
+to the client.</p>
+</dd>
+<dt>14</dt>
+<dd>
+<p>pqhl:   The proxy request header length; the header length in Traffic Server request 
+to the origin server. </p>
+</dd>
+<dt>15</dt>
+<dd>
+<p>sshl:   The server response header length; the header length in the origin server response 
+to Traffic Server. </p>
+</dd>
+<dt>16</dt>
+<dd>
+<p>tts:   The time Traffic Server spent processing the client request; the number of 
+seconds between the time that the client established the connection with Traffic 
+Server and the time that Traffic Server sent the last byte of the response 
+back to the client.</p>
+</dd>
+<dd>
+<p><strong>Netscape Extended-2</strong></p>
+</dd>
+<dt>17</dt>
+<dd>
+<p>phr:   The proxy hierarchy route; the route Traffic Server used to retrieve the object. </p>
+</dd>
+<dt>18</dt>
+<dd>
+<p>cfsc:   The client finish status code: <code>FIN</code> if the client request completed successfully 
+or <code>INTR</code> if the client request was interrupted.</p>
+</dd>
+<dt>19</dt>
+<dd>
+<p>pfsc:   The proxy finish status code: <code>FIN</code> if the Traffic Server request to the origin 
+server completed successfully or <code>INTR</code> if the request was interrupted.</p>
+</dd>
+<dt>20</dt>
+<dd>
+<p>crc:   The cache result code; how the Traffic Server cache responded to the request: 
+HIT, MISS, and so on. Cache result codes are described <a href="trouble.htm#0_21826">here</a>. </p>
+</dd>
+</dl>
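+<p>A representative <code>extended2.log</code> entry (values are illustrative) appends the hierarchy 
+route, finish status codes, and cache result code (fields 17 through 20) to the Netscape Extended fields:<br />
+<code>209.131.48.79 - - [25/Feb/2011:12:05:32 -0800] "GET http://www.example.com/ HTTP/1.0" 200 412 200 412 0 0 324 331 320 315 1 DIRECT FIN FIN TCP_MISS</code></p>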
+<h2 id="SupportforTraditionalCustomLogging">Support for Traditional Custom Logging</h2>
+<p>Traffic Server supports traditional custom logging in addition to the XML-based 
+custom logging, which is more versatile and therefore recommended.</p>
+<p>Traffic Server's format converter only converts traditional log configuration 
+files named <code>logs.config</code>. If you are using a traditional log configuration 
+file with a name other than <code>logs.config</code>, then you must convert the file yourself 
+after installation; refer to <a href="#Usingcustlogfmtcnvrt">Using cust_log_fmt_cnvrt</a>. If you opt to use 
+traditional custom logging instead of the more versatile XML-based custom logging, 
+then you must enable the traditional custom logging option manually. Furthermore, 
+if you want to configure Traffic Server as a collation client that sends log 
+entries in traditional custom formats, then you must set collation options 
+manually. Use the following procedures. </p>
+<h3 id="EnablingTraditionalCustomLogging">Enabling Traditional Custom Logging</h3>
+<p>To enable traditional custom logging, you must edit the <code>records.config</code> file manually, 
+as described below. To edit your existing traditional custom log formats, modify the 
+<a href="files.htm#logs.config">logs.config</a> file as before.</p>
+<h5 id="enabletraditionalcustomlogging">To enable traditional custom logging:</h5>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables (a sample <code>records.config</code> excerpt appears after this procedure): </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.custom_logs_enabled</code></dt>
+<dd>Set this variable to 1 to enable custom logging.</dd>
+<dt><code>proxy.config.log.xml_logs_config</code></dt>
+<dd>Set this variable to 0 to disable XML-based custom logging.</dd>
+</dl>
+</li>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
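+<p>As a minimal sketch, the edited <code>records.config</code> entries would look like the following, 
+assuming the standard <code>CONFIG <em>name</em> <em>type</em> <em>value</em></code> record syntax:<br />
+<code>CONFIG proxy.config.log.custom_logs_enabled INT 1<br />
+CONFIG proxy.config.log.xml_logs_config INT 0</code></p>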
+<p>To configure your Traffic Server node to be a collation client and send traditional 
+custom log files to the collation server, use the following procedure. </p>
+<h5 id="configureTSasacollationclient">To configure Traffic Server as a collation client:</h5>
+<ol>
+<li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+<li>Edit the following variables (a sample <code>records.config</code> excerpt appears after this procedure): </li>
+<li>
+<dl>
+<dt><strong>Variable</strong> <strong>Description</strong></dt>
+<dt><code>proxy.config.log.collation_mode</code></dt>
+<dd>Set this variable to 3 to configure this Traffic Server node to be a log collation client and send log entries in traditional custom formats to the collation server.<br />
+     Set this variable to 4 to configure this Traffic Server node to be a log collation client and send log entries in both standard formats (Squid, Netscape) and traditional custom formats to the collation server.</dd>
+<dt><code>proxy.config.log.collation_host</code></dt>
+<dd>Specify the hostname of the collation server.</dd>
+<dt><code>proxy.config.log.collation_port</code></dt>
+<dd>Specify the port Traffic Server uses to communicate with the collation server. 
+    The default port number is 8085.</dd>
+<dt><code>proxy.config.log.collation_secret</code></dt>
+<dd>Specify the password used to validate logging data and prevent exchange of 
+    arbitrary information.</dd>
+<dt><code>proxy.config.log.collation_host_tagged</code></dt>
+<dd>Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.<br />
+    Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry. </dd>
+</dl>
+</li>
+<li>Save and close the <code>records.config</code> file. </li>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
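+<p>As a minimal sketch, a collation client's <code>records.config</code> entries might look like the 
+following (the hostname and secret are illustrative; the port shown is the default):<br />
+<code>CONFIG proxy.config.log.collation_mode INT 3<br />
+CONFIG proxy.config.log.collation_host STRING logs.example.com<br />
+CONFIG proxy.config.log.collation_port INT 8085<br />
+CONFIG proxy.config.log.collation_secret STRING mysecret<br />
+CONFIG proxy.config.log.collation_host_tagged INT 1</code></p>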
+<h3 id="Usingcustlogfmtcnvrt">Using cust_log_fmt_cnvrt</h3>
+<p>The format converter <code>cust_log_fmt_cnvrt</code> converts your traditional custom 
+log configuration file (<code>logs.config</code>) to an XML-based custom log configuration 
+file (<code>logs_xml.config</code>). This enables you to use XML-based custom logging. </p>
+<h5 id="runformatconverter">To run the format converter:</h5>
+<ol>
+<li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+<li>Enter the command <code>cust_log_fmt_cnvrt</code> and include the options you want to use. <br />
+ The format of the command is <br />
+<code>cust_log_fmt_cnvrt [-o output_file | -a] [-hnVw] [input_file..]</code><br />
+</li>
+</ol>
+<p>The following table describes the command-line options.</p>
+<dl>
+<dt><strong>Option</strong> <strong>Description</strong></dt>
+<dt><code>-o <em>output_file</em></code></dt>
+<dd>Specifies the name of the output file; you can specify one output file only. If you specify multiple input files, then the converter combines the converted output from all files into a single output file. <br />
+     This option and the <strong><code>-a</code></strong> option are mutually exclusive. If you want to create multiple output files from multiple input files, then you must use the <strong><code>-a</code></strong> option. If you do not specify an output file (using the <code>-o</code> or <code>-a</code> options), then output goes to <code>stdout</code>.</dd>
+<dt><code>-a</code></dt>
+<dd>Generates one output file for each input file. The format converter automatically creates the name of the output file from the name of the input file by replacing <code>.config</code> at the end of the filename with <code>_xml.config</code>.<br />
+<strong>Note:</strong> If the source filename does not contain a <code>.config</code> extension, then the converter creates the new filename by appending <code>_xml.config</code> to the source filename.</dd>
+<dt><code>-h</code></dt>
+<dd>Displays a description of the <code>cust_log_fmt_cnvrt</code> options.</dd>
+<dt><code>-n</code></dt>
+<dd>Annotates the output file(s) with comments about the success or failure of 
+    the translation process for each of the input lines. This option produces a 
+    comment at the beginning of the output file(s) that describes errors the format 
+    converter encountered while converting the file. The comment includes line 
+    number, input line type (format, filter, or unknown), and either a success 
+    status or a description of the error encountered.</dd>
+<dt><code>-V</code></dt>
+<dd>Displays the version of the format converter you are running.</dd>
+<dt><code>-w</code></dt>
+<dd>Overwrites existing output files without warning. If you do not specify the <strong><code>-w</code></strong> option, then the format converter does not overwrite existing output 
+    files. If you specify an output file that already exists, then the converter 
+    does not convert the input file.</dd>
+<dt><em><code>input_file</code></em></dt>
+<dd>Specifies the name of the input file. If you do not specify an input filename, 
+    then the format converter takes the input from <code>stdin</code>.</dd>
+</dl>
+<h4 id="Examples">Examples</h4>
+<p>The following example converts the file <code>logs.config</code> and sends the results to <code>stdout</code>: <br />
+<code>cust_log_fmt_cnvrt logs.config</code> <br />
+</p>
+<p>The following example converts a <code>logs.config</code> file into a <code>logs_xml.config</code> file and annotates the output file (<code>logs_xml.config</code>) with comments about the success or failure of the translation process. If a file named <code>logs_xml.config</code> already exists, then the format converter overwrites it. <br />
+<code>cust_log_fmt_cnvrt -o logs_xml.config -n -w logs.config</code><br />
+</p>
+<p>The following example converts the files <code>x.config</code>, <code>y.config</code>, and <code>z.config</code> into three separate output files called <code>x_xml.config</code>, <code>y_xml.config</code>, and <code>z_xml.config</code>: <br />
+<code>cust_log_fmt_cnvrt -a x.config y.config z.config</code> <br />
+</p>
+<ul>
+<li><a href="intro.htm">Overview</a></li>
+<li><a href="getstart.htm">Getting Started</a></li>
+<li><a href="http.htm">HTTP Proxy Caching </a></li>
+<li><a href="explicit.htm">Explicit Proxy Caching</a></li>
+<li><a href="reverse.htm">Reverse Proxy and HTTP Redirects</a></li>
+<li><a href="hier.htm">Hierarchical Caching</a></li>
+<li><a href="cache.htm">Configuring the Cache</a></li>
+<li><a href="monitor.htm">Monitoring Traffic</a></li>
+<li><a href="configure.htm">Configuring Traffic Server</a></li>
+<li><a href="secure.htm">Security Options</a></li>
+<li><a href="log.htm">Working with Log Files</a></li>
+<li><a href="cli.htm">Traffic Line Commands</a></li>
+<li><a href="logfmts.htm">Event Logging Formats</a></li>
+<li><a href="files.htm">Configuration Files</a> </li>
+<li><a href="errors.htm">Traffic Server Error Messages</a></li>
+<li><a href="trouble.htm">FAQ and Troubleshooting Tips</a></li>
+<li><a href="ts_admin_chinese.pdf">Traffic Server 管理员指南</a> (PDF)</li>
+</ul>
+<p>Copyright © 2011 <a href="http://www.apache.org/">The Apache Software Foundation</a>. 
+Licensed under the <a href="http://www.apache.org/licenses/">Apache License</a>, Version 
+2.0. Apache Traffic Server, Apache, the Apache Traffic Server logo, and the 
+Apache feather logo are trademarks of The Apache Software Foundation.</p>
+    </div>
+  </div><!-- main -->
+
+  <div id="footer">
+	  Copyright  &copy; 2010
+	  <a href="http://www.apache.org/">The Apache Software Foundation</a>.
+	  Licensed under
+	  the <a href="http://www.apache.org/licenses/">Apache License</a>,
+	  Version 2.0. Apache Traffic Server, Apache,
+	  the Apache Traffic Server logo, and the Apache feather logo are
+	  trademarks of The Apache Software Foundation.
+	  <span id="apache_logo">
+		  <a href="http://www.apache.org/"><img alt="The Apache Software Foundation" src="http://www.apache.org/images/feather-small.gif" /></a>
+	  </span>
+  </div>
+
+  </body>
+</html>