Posted to commits@trafficserver.apache.org by bu...@apache.org on 2013/12/24 19:32:22 UTC

svn commit: r891679 [6/24] - in /websites/staging/trafficserver/trunk: cgi-bin/ content/ content/docs/ content/docs/trunk/ content/docs/trunk/admin/ content/docs/trunk/admin/cluster-howto/ content/docs/trunk/admin/configuration-files/ content/docs/trun...

Added: websites/staging/trafficserver/trunk/content/docs/v2/admin/intro.htm
==============================================================================
--- websites/staging/trafficserver/trunk/content/docs/v2/admin/intro.htm (added)
+++ websites/staging/trafficserver/trunk/content/docs/v2/admin/intro.htm Tue Dec 24 18:32:14 2013
@@ -0,0 +1,91 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+<title>Apache Traffic Server Administrator's Guide</title>
+
+<link href="http://trafficserver.apache.org/docs/trunk/admin/" rel="canonical" />
+<!--#include file="top.html" -->
+
+<h1>Overview</h1>
+<p>Apache Traffic Server&trade; speeds Internet access, enhances website performance, and delivers unprecedented web hosting capabilities.  </p>
+<p>This chapter discusses the following topics: </p>
+<ul>
+<li><a href="#WhatIsTrafficEdge">What Is Apache Traffic Server?</a></li>
+<li><a href="#TrafficEdgeDeploymentOptions">Traffic Server Deployment Options</a></li>
+<li><a href="#TrafficEdgeComponents">Traffic Server Components</a></li>
+<li><a href="#TrafficAnalysisOptions">Traffic Analysis Options</a></li>
+<li><a href="#TrafficEdgeSecurityOptions">Traffic Server Security Options</a></li>
+</ul>
+<h2 id="WhatIsTrafficEdge">What Is Apache Traffic Server?</h2>
+<p>Global data networking has become part of everyday life: internet users request billions of documents and terabytes of data every day, from all parts of the world. Information is free, abundant, and accessible. Unfortunately, global data networking can also be a nightmare for IT professionals as they struggle with overloaded servers and congested networks, and it can be challenging to consistently and reliably accommodate society’s growing data demands. </p>
+<p>Traffic Server is a high-performance web proxy cache that improves network efficiency and performance by caching frequently-accessed information at the edge of the network. This brings content physically closer to end users, while enabling faster delivery and  reduced bandwidth use. Traffic Server is designed to improve content delivery for enterprises, Internet service providers (ISPs), backbone providers, and large intranets by maximizing existing and available bandwidth.  </p>
+<h2 id="TrafficEdgeDeploymentOptions">Traffic Server Deployment Options </h2>
+<p>To best suit your needs, Traffic Server can be deployed in several ways:</p>
+<ul>
+  <li>As a web proxy cache</li>
+  <li>As a reverse proxy</li>
+  <li>In a cache hierarchy</li>
+</ul>
+<p>The following sections provide a summary of these Traffic Server deployment options. </p>
+<h3>Traffic Server as a Web Proxy Cache </h3>
+<p>As a <b>web proxy cache</b>, Traffic Server receives user requests for web content as those requests travel to the destined web server (origin server). If Traffic Server contains the requested content, then it serves the content directly. If the requested content is not available from cache, then Traffic Server acts as a proxy: it obtains the content from the origin server on the user’s behalf and also keeps a copy to satisfy future requests. </p>
+<p>Traffic Server provides<b> explicit proxy caching</b>, in which the user’s client software must be configured to send requests directly to Traffic Server. Explicit proxy caching is described in the <a href="explicit.htm">Explicit Proxy Caching</a> section.</p>
+<h3>Traffic Server as a Reverse Proxy </h3>
+<p>As a <b>reverse proxy</b>, Traffic Server is configured to be the origin server to which the user is trying to connect (typically, the origin server’s advertised hostname resolves to Traffic Server, which acts as the real origin server). The reverse proxy feature is also called <b>server acceleration</b>. Reverse proxy is described in more detail in <a href="reverse.htm">Reverse Proxy and HTTP Redirects</a>. </p>
+<h3>Traffic Server in a Cache Hierarchy  </h3>
+<p>Traffic Server can participate in flexible <b>cache hierarchies</b>, in which Internet requests not fulfilled from one cache are routed to other regional caches, thereby leveraging the contents and proximity of nearby caches. In a hierarchy of proxy servers, Traffic Server can act either as a parent or a child cache to other Traffic Server systems or  to similar caching products. </p>
+<p>Traffic Server supports ICP (Internet Cache Protocol) peering. Hierarchical caching is described in more detail in <a href="hier.htm">Hierarchical Caching</a>.</p>
+<h2 id="TrafficEdgeComponents">Traffic Server Components </h2>
+<p>Traffic Server consists of several components that work together to form a web proxy cache  you can easily monitor and configure. These main components are described below. </p>
+<h3>The Traffic Server Cache</h3>
+<p> The Traffic Server cache consists of a high-speed object database called the <b>object store</b>. The object store indexes objects according to URLs and associated headers. Using sophisticated object management, the object store can cache alternate versions of the same object (perhaps in a different language or encoding type). It can also efficiently store very small and very large objects, thereby minimizing wasted space. When the cache is full, Traffic Server removes stale data to ensure that the most requested objects are  readily available and fresh.  </p>
+<p>Traffic Server is designed to tolerate total disk failures on any of the cache disks. If the disk fails completely, then Traffic Server marks the entire disk as corrupt and continues to use remaining disks. If all of the cache disks fail, then Traffic Server switches to proxy-only mode. You can partition the cache to reserve a certain amount of disk space for storing data for specific protocols and origin servers. For more information about the cache, see <a href="cache.htm">Configuring the Cache</a>. </p>
+<h3>The RAM Cache </h3>
+<p>Traffic Server maintains a small RAM cache that contains extremely popular objects. This <b>RAM cache </b>serves the most popular objects as fast as possible and reduces load on disks, especially during temporary traffic peaks. You can configure the RAM cache size to suit your needs; for detailed information, refer to <a href="cache.htm#ChangingSizeRAMCache">Changing the Size of the RAM Cache</a>.</p>
+<h3>The Host Database </h3>
+<p>The Traffic Server host database stores the domain name server (DNS) entries of origin servers to which Traffic Server connects to fulfill user requests. This information is used to adapt future protocol interactions and optimize performance.  Along with other information, the host database tracks: </p>
+<ul>
+  <li>DNS information (for fast conversion of hostnames to IP addresses) </li>
+  <li>The HTTP version of each host (so advanced protocol features can be used with hosts running modern servers) </li>
+  <li>Host reliability and availability information (so users will not wait for servers that are not running) </li>
+</ul>
+<h3>The DNS Resolver </h3>
+<p>Traffic Server includes a fast, asynchronous DNS resolver to streamline conversion of hostnames to IP addresses. Traffic Server implements the DNS resolver natively by directly issuing DNS command packets rather than relying on slower, conventional resolver libraries. Since many DNS queries can be issued in parallel and a fast DNS cache maintains popular bindings in memory, DNS traffic is reduced. </p>
+<h3>Traffic Server Processes </h3>
+<p>Traffic Server contains three processes that work together to serve requests and to manage, control, and monitor the health of the Traffic Server system. The three processes are described below: </p>
+<ul>
+  <li>The <code>traffic_server</code> process is the transaction processing engine of Traffic Server. It is responsible for accepting connections, processing protocol requests, and serving documents from the cache or origin server. </li>
+  <li>The <code>traffic_manager</code> process is the command and control facility of the Traffic Server, responsible for launching, monitoring, and reconfiguring the <code>traffic_server</code> process. The <code>traffic_manager</code> process is also responsible for the proxy autoconfiguration port, the statistics interface, cluster administration, and virtual IP failover. <br />
+    If the <code>traffic_manager</code> process detects a <code>traffic_server</code> process failure, it instantly restarts the process but also maintains a connection queue of all incoming requests. All incoming connections that arrive in the several seconds before full server restart are saved in the connection queue and processed in first-come, first-served order. This connection queueing shields users from any server restart downtime. </li>
+  <li>The <code>traffic_cop</code> process monitors the health of both the <code>traffic_server</code> and <code>traffic_manager</code> processes. The <code>traffic_cop</code> process periodically (several times each minute) queries the <code>traffic_server</code> and <code>traffic_manager</code> process by issuing heartbeat requests to fetch synthetic web pages. In the event of failure (if no response is received within a timeout interval or if an incorrect response is received), <code>traffic_cop</code> restarts the <code>traffic_manager</code> and <code>traffic_server</code> processes. </li>
+</ul>
+<p>The figure below illustrates the three Traffic Server processes.</p>
+<p><img src="images/process.jpg" width="848" height="578" /></p>
+<p>&nbsp;</p>
+<h3>Administration Tools</h3>
+<p>Traffic Server offers the following administration options: </p>
+<ul>
+  <li>The <b>Traffic Line</b> command-line interface is a text-based interface from which you can monitor Traffic Server performance and network traffic, as well as configure the Traffic Server system. From Traffic Line, you can execute individual commands or script a series of commands in a shell (see the brief example after this list). </li>
+  <li>The <b>Traffic Shell </b>command-line interface is an additional command-line tool that enables you to execute individual commands that monitor and configure the Traffic Server system. </li>
+  <li>Various <b>configuration</b> <b>files</b> enable you to configure Traffic Server through a simple file-editing and signal-handling interface. Any changes you make through Traffic Line or Traffic Shell are automatically made to the configuration files as well. </li>
+</ul>
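+<p>For example, the following shell commands sketch how Traffic Line might be used; this is an illustration only, and the configuration variable shown is just one of many that can be read or set:</p>
+<pre># apply changes made to the configuration files
+traffic_line -x
+# read the current value of a configuration variable
+traffic_line -r proxy.config.log.logging_enabled
+# set a configuration variable to a new value
+traffic_line -s proxy.config.log.logging_enabled -v 3</pre>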
+<h2 id="TrafficAnalysisOptions">Traffic Analysis Options </h2>
+<p>Traffic Server provides several options for network traffic analysis and monitoring: </p>
+<ul>
+  <li><b>Traffic Line</b> and <b>Traffic Shell</b> enable you to collect and process statistics obtained from network traffic information. </li>
+  <li><b>Transaction logging</b> enables you to record information (in a log file) about every request  Traffic Server receives and every error it detects. By analyzing the log files, you can determine how many people used the Traffic Server cache, how much information each person requested, and what pages were most popular. You can also see why a particular transaction was in error and what state the Traffic Server was in at a particular time; for example, you can see that Traffic Server was restarted or that cluster communication timed out. <br />
+    Traffic Server supports several standard log file formats, such as Squid and Netscape, and its own custom format. You can analyze the standard format log files with off-the-shelf analysis packages. To help with log file analysis, you can separate log files so that they contain information specific to protocol or hosts. </li>
+</ul>
+<p>Traffic analysis options are described in more detail in <a href="monitor.htm">Monitoring Traffic</a>; Traffic Server logging options are described in <a href="log.htm">Working with Log Files</a>. </p>
+<h2 id="TrafficEdgeSecurityOptions">Traffic Server Security Options </h2>
+<p>Traffic Server provides numerous options that enable you to establish secure communication between the Traffic Server system and other computers on the network. Using the security options, you can do the following: </p>
+<ul>
+  <li>Control client access to the Traffic Server proxy cache. </li>
+  <li>Configure Traffic Server to use multiple DNS servers to match your site’s security configuration. For example, Traffic Server can use different DNS servers, depending on whether it needs to resolve hostnames located inside or outside a firewall. This enables you to keep your internal network configuration secure while continuing to provide transparent access to external sites on the Internet. </li>
+  <li>Configure Traffic Server to verify that clients are authenticated before they can access content from the Traffic Server cache. </li>
+  <li>Secure connections in reverse proxy mode between a client and Traffic Server, and Traffic Server and the origin server, using the SSL termination option. </li>
+  <li>Control access via SSL (Secure Sockets Layer). </li>
+</ul>
+<p>Traffic Server security options are described in more detail in <a href="secure.htm">Security Options</a>.</p>
+
+<!--#include file="bottom.html" -->

Propchange: websites/staging/trafficserver/trunk/content/docs/v2/admin/intro.htm
------------------------------------------------------------------------------
    svn:executable = *

Added: websites/staging/trafficserver/trunk/content/docs/v2/admin/leftnav.html
==============================================================================
--- websites/staging/trafficserver/trunk/content/docs/v2/admin/leftnav.html (added)
+++ websites/staging/trafficserver/trunk/content/docs/v2/admin/leftnav.html Tue Dec 24 18:32:14 2013
@@ -0,0 +1,19 @@
+<ul class="leftnav">
+<li><a href="index.htm">Preface</a></li>
+<li><a href="intro.htm">Overview</a></li>
+<li><a href="getstart.htm">Getting Started</a></li>
+<li><a href="http.htm">HTTP Proxy Caching </a></li>
+<li><a href="explicit.htm">Explicit Proxy Caching</a></li>
+<li><a href="reverse.htm">Reverse Proxy and HTTP Redirects</a></li>
+<li><a href="hier.htm">Hierarchical Caching</a></li>
+<li><a href="cache.htm">Configuring the Cache</a></li>
+<li><a href="monitor.htm">Monitoring Traffic</a></li>
+<li><a href="configure.htm">Configuring Traffic Server</a></li>
+<li><a href="secure.htm">Security Options</a></li>
+<li><a href="log.htm">Working with Log Files</a></li>
+<li><a href="cli.htm">Traffic Line Commands</a></li>
+<li><a href="logfmts.htm">Event Logging Formats</a></li>
+<li><a href="files.htm">Configuration Files</a> </li>
+<li><a href="errors.htm">Traffic Server Error Messages</a></li>
+<li><a href="trouble.htm">FAQ and Troubleshooting Tips</a></li>
+</ul>
\ No newline at end of file

Propchange: websites/staging/trafficserver/trunk/content/docs/v2/admin/leftnav.html
------------------------------------------------------------------------------
    svn:executable = *

Added: websites/staging/trafficserver/trunk/content/docs/v2/admin/log.htm
==============================================================================
--- websites/staging/trafficserver/trunk/content/docs/v2/admin/log.htm (added)
+++ websites/staging/trafficserver/trunk/content/docs/v2/admin/log.htm Tue Dec 24 18:32:14 2013
@@ -0,0 +1,991 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+<title>Traffic Server Administrator's Guide</title>
+
+<link href="http://trafficserver.apache.org/docs/trunk/admin/working-log-files/index.en.html" rel="canonical" />
+<!--#include file="top.html" -->
+
+<h1>Working with Log Files</h1>
+<p>Traffic Server generates log files that contain information about every request it receives and every error it detects.</p>
+<p>This chapter discusses the following topics: </p>
+<ul>
+<li><a href="#UnderstandingTrafficEdgeLogFiles">Understanding Traffic Server Log Files</a></li> 
+<li><a href="#UnderstandingEventLogFiles">Understanding Event Log Files</a></li> 
+<li><a href="#ManagingEventLogFiles">Managing Event Log Files</a></li> 
+<li><a href="#ChoosingEventLogFileFormats">Choosing Event Log File Formats</a></li> 
+<li><a href="#RollingEventLogFiles">Rolling Event Log Files</a></li> 
+<li><a href="#SplittingEventLogFiles">Splitting Event Log Files</a></li> 
+<li><a href="#CollatingEventLogFiles">Collating Event Log Files</a></li> 
+<li><a href="#ViewingLoggingStatistics">Viewing Logging Statistics</a></li> 
+<li><a href="#ViewingLogFiles">Viewing Log Files</a></li> 
+<li><a href="#ExampleEventLogFileEntries">Example Event Log File Entries</a></li> 
+<li><a href="#SupportTraditionalCustomLogging">Support for Traditional Custom Logging</a></li> 
+</ul>
+<h2 id="UnderstandingTrafficEdgeLogFiles">Understanding Traffic Server Log Files</h2>
+<p>Traffic Server records information about every transaction (or request)  it processes and every error  it detects in log files. Traffic Server keeps three types of log files: </p>
+<ul>
+  <li><b>Error log files</b> record information about why a particular transaction was in error.  </li>
+  <li><b>Event log files</b> (also called <b>access log files</b>) record information about the state of each transaction  Traffic Server processes.  </li>
+  <li><b>System log files</b> record system information,  including messages about the state of Traffic Server and  errors/warnings  it produces. This kind of information might include a note that event log files were rolled, a warning that cluster communication timed out, or an error indicating that Traffic Server was restarted. <br />
+  All system information messages are logged with the system-wide logging facility <b><code>syslog</code></b> under the daemon facility. The <code>syslog.conf</code> configuration file (stored in the <code>/etc</code> directory) specifies where these messages are logged. A typical location is <code>/var/log/messages</code> (Linux). <br />
+  The <code>syslog</code> process works on a system-wide basis, so it serves as the single repository for messages from all Traffic Server processes (including <code>traffic_server</code>, <code>traffic_manager</code>, and <code>traffic_cop</code>). <br />
+  System information logs observe a static format. Each log entry in the log contains information about the date and time the error was logged, the hostname of the Traffic Server that reported the error, and a description of the error or warning. <br />
+  Refer to <a href="errors.htm">Traffic Server Error Messages</a> for a list of the  messages logged by Traffic Server. </li>
+</ul>
+<p>By default, Traffic Server creates both error and event log files and records system information in system log files. You can disable event logging and/or error logging by setting the configuration variable <code><i>proxy.config.log.logging_enabled</i></code> (in the <code>records.config</code> file) to one of the following values (a sample <code>records.config</code> entry follows this list): </p>
+<ul>
+
+  <li><code>0</code> to disable both event and error logging </li>
+  <li><code>1</code> to enable error logging only </li>
+  <li><code>2</code> to enable transaction logging only </li>
+  <li><code>3</code> to enable both transaction and error logging</li>
+</ul>
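+<p>For example, the following <code>records.config</code> line (a sketch of the entry format) enables both transaction and error logging:</p>
+<pre>CONFIG proxy.config.log.logging_enabled INT 3</pre>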
+<h2 id="UnderstandingEventLogFiles">Understanding Event Log Files</h2>
+<p>Event log files record information about every request that Traffic Server processes. By analyzing the log files, you can determine how many people use the Traffic Server cache, how much information each person requested, what pages are most popular, and so on. Traffic Server supports several standard log file formats, such as Squid and Netscape, as well as user-defined custom formats. You can analyze the standard format log files with off-the-shelf analysis packages. To help with log file analysis, you can separate log files so  they contain information specific to protocol or hosts. You can also configure Traffic Server to roll log files automatically at specific intervals during the day or when they reach a certain size.</p>
+<p>The following sections describe the Traffic Server logging system features and discuss how to:</p>
+<ul>
+  <li><b>Manage your event log files</b><br />
+  You can choose a central location for storing log files, set how much disk space to use for log files, and set how and when to roll log files. Refer to <a href="#ManagingEventLogFiles">Managing Event Log Files</a>. </li>
+  <li><b>Choose different event log file formats </b><br />
+  You can choose which standard log file formats you want to use for traffic analysis, such as Squid or Netscape. Alternatively, you can use the Traffic Server custom format, which is XML-based and enables you to institute more control over the type of information recorded in log files. Refer to <a href="#ChoosingEventLogFileFormats">Choosing Event Log File Formats</a>. <br />
+  </li>
+  <li><b>Roll event log files automatically</b><br />
+  
+  Configure Traffic Server to roll event log files at specific intervals during the day or when they reach a certain size; this enables you to  identify and manipulate log files that are no longer active. Refer to <a href="#RollingEventLogFiles">Rolling Event Log Files</a>. </li>
+  <li><b>Separate log files according to protocols and hosts</b><br />
+  Configure Traffic Server to create separate log files for different protocols. You can also configure Traffic Server to generate separate log files for requests served by different hosts. Refer to <a href="#SplittingEventLogFiles">Splitting Event Log Files</a>. </li>
+  <li><b>Collate log files from different Traffic Server nodes</b><br />
+  Designate one or more nodes on the network to serve as log collation servers. These servers, which might  be standalone or part of Traffic Server, enable you to keep all logged information in well-defined locations. Refer to <a href="#CollatingEventLogFiles">Collating Event Log Files</a>. </li>
+  <li><b>View statistics about the logging system</b><br />
+  Traffic Server provides statistics about the logging system; you can access these statistics via  Traffic Line. Refer to <a href="#ViewingLoggingStatistics">Viewing Logging Statistics</a>. </li>
+  <li><b>Interpret log file entries for the log file formats</b><br /> 
+  Refer to <a href="#ExampleEventLogFileEntries">Example Event Log File Entries</a>. </li>
+</ul>
+<h2 id="ManagingEventLogFiles">Managing Event Log Files</h2>
+<p>Traffic Server enables you to control where event log files are located and how much space they can consume. Additionally you can specify how to handle low disk space in the logging directory. </p>
+<h3>Choosing the Logging Directory </h3>
+<p>By default, Traffic Server writes all event log files in the <code>logs</code> directory located in the directory where you installed Traffic Server. To use a different directory, refer to <a href="#SettingLogFileManagementOptions">Setting Log File Management Options</a>. </p>
+<h3>Controlling Logging Space </h3>
+<p>Traffic Server enables you to control the amount of disk space that the logging directory can consume. This allows the system to operate smoothly within a specified space window for a long period of time.  After you establish a space limit, Traffic Server continues to monitor the space in the logging directory. When the free space dwindles to the headroom limit (see <a href="#SettingLogFileManagementOptions">Setting Log File Management Options</a>), it enters a low space state and takes the following actions: </p>
+<ul>
+  <li>If the autodelete option (discussed in <a href="#RollingEventLogFiles">Rolling Event Log Files</a>) is <em>enabled</em>, then Traffic Server identifies previously-rolled log files (i.e., log files with the <code>.old</code> extension). It starts deleting files one by one, beginning with the oldest file, until it emerges from the low space state. Traffic Server logs a record of all deleted files in the system error log. </li>
+  <li>If the autodelete option is <em>disabled</em> or there are not enough old log files to delete for the system to emerge from its low space state, then Traffic Server issues a warning and continues logging until space is exhausted. When available space is consumed, event logging stops. Traffic Server resumes event logging when enough space becomes available for it to exit the low space state. To make space available, either explicitly increase the logging space limit or remove files from the logging directory manually. </li>
+</ul>
+<p>You can run a <code>cron</code> script in conjunction with Traffic Server to automatically remove old log files from the logging directory before Traffic Server enters the low space state. Relocate the old log files to a temporary partition, where you can run a variety of log analysis scripts. Following analysis, either compress the logs and move them to an archive location, or simply delete them. </p>
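+<p>One possible sketch of such a <code>cron</code> entry is shown below; the installation path, scratch partition, and retention period are assumptions you would adjust for your site:</p>
+<pre># Every night at 01:30, move rolled log files to a scratch partition for
+# analysis, then delete anything there that is more than seven days old.
+30 1 * * * mv /usr/local/trafficserver/logs/*.old /scratch/ts-logs/ 2&gt;/dev/null; find /scratch/ts-logs -name '*.old' -mtime +7 -delete</pre>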
+<h3 id="SettingLogFileManagementOptions">Setting Log File Management Options </h3>
+<p>To set log management options, follow the steps below:</p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><i><code>proxy.config.log.logfile_dir</code></i></td>
+      <td>Specify the path to the directory in which you want to store event log files. This can be an absolute path or a path relative to the directory in which Traffic Server is installed. The default is <code>logs</code> located in the Traffic Server installation directory.<br />
+      <b>Note:</b> The directory you specify must already exist.</td>
+  </tr>
+   <tr>
+      <td><i><code>proxy.config.log.max_space_mb_for_logs</code></i></td>
+      <td>Enter the maximum amount of space you want to allocate to the logging directory. The default value is 2000 MB.<br />
+      <b>Note:</b> All files in the logging directory contribute to the space used, even if they are not log files.</td>
+  </tr>
+   <tr>
+      <td><i><code>proxy.config.log.max_space_mb_headroom</code></i></td>
+      <td>Enter the tolerance for the log space limit. The default value is 10 MB.</td>
+  </tr>
+</table>
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
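+<p>For example, the following <code>records.config</code> excerpt (a sketch; the directory shown is an assumption) stores event log files in <code>/var/log/trafficserver</code>, limits the logging directory to 4000 MB, and keeps 100 MB of headroom:</p>
+<pre>CONFIG proxy.config.log.logfile_dir STRING /var/log/trafficserver
+CONFIG proxy.config.log.max_space_mb_for_logs INT 4000
+CONFIG proxy.config.log.max_space_mb_headroom INT 100</pre>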
+<h2 id="ChoosingEventLogFileFormats">Choosing Event Log File Formats</h2>
+<p>Traffic Server supports the following log file formats: </p>
+<ul>
+  <li>Standard formats, such as Squid or Netscape; refer to <a href="#UsingStandardFormats">Using Standard Formats</a>. </li>
+  <li>The Traffic Server custom format; refer to <a href="#UsingCustomFormat">Using the Custom Format</a>. </li>
+</ul>
+<p>In addition to the standard and custom log file formats, you can choose whether to save log files in binary or ASCII; refer to <a href="#ChoosingBinaryASCII">Choosing Binary or ASCII</a>. <br />
+Event log files consume substantial disk space. Creating log entries in multiple formats at the same time can consume disk resources very quickly and adversely impact Traffic Server performance. </p>
+<h3 id="UsingStandardFormats">Using Standard Formats </h3>
+<p>The standard log formats include Squid, Netscape Common, Netscape Extended, and Netscape Extended-2. The standard log file formats can be analyzed with a wide variety of off-the-shelf log-analysis packages. You should use one of the standard event log formats unless you need information that these formats do not provide. Refer to <a href="#UsingCustomFormat">Using the Custom Format</a>.</p>
+<h4 id="SettingStandardLogFileFormatOptions">Setting Standard Log File Format Options </h4>
+<p>Set standard log file format options by following the steps below:</p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>To use the Squid format, edit the following variables:</li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.squid_log_enabled</i></code></td>
+      <td>Set this variable to 1 to enable the Squid log file format.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.squid_log_is_ascii</i></code></td>
+      <td>Set this variable to 1 to enable ASCII mode.<br />Set this variable to 0 to enable binary mode.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.squid_log_name</i></code></td>
+      <td>Enter the name you want to use for Squid event log files. The default is <code>squid</code>.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.squid_log_header</i></code></td>
+      <td>Enter the header text you want to display at the top of the Squid log files. Enter <code>NULL</code> if you do not want to use a header. </td>
+  </tr>
+</table>
+  <li>To use the Netscape Common format, edit the following variables: </li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.common_log_enabled</i></code></td>
+      <td>Set this variable to 1 to enable the Netscape Common log file format.</td>
+  </tr>
+  <tr>
+      <td><code>proxy.config.log.common_log_is_ascii</code></td>
+      <td>Set this variable to 1 to enable ASCII mode.<br />Set this variable to 0 to enable binary mode.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.common_log_name</i></code></td>
+      <td>Enter the name you want to use for Netscape Common event log files. The default is <code>common</code>.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.common_log_header</i></code></td>
+      <td>Enter the header text you want to display at the top of the Netscape Common log files. Enter <code>NULL</code> if you do not want to use a header.</td>
+  </tr>
+</table>
+  <li>To use the Netscape Extended format, edit the following variables: </li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.extended_log_enabled</i></code></td>
+      <td>Set this variable to 1 to enable the Netscape Extended log file format.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.extended_log_is_ascii</i></code></td>
+      <td>Set this variable to 1 to enable ASCII mode.<br />Set this variable to 0 to enable binary mode.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.extended_log_name</i></code></td>
+      <td>Enter the name you want to use for Netscape Extended event log files. The default is <code>extended</code>.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.extended_log_header</i></code></td>
+      <td>Enter the header text you want to display at the top of the Netscape Extended log files. Enter <code>NULL</code> if you do not want to use a header.</td>
+  </tr>
+</table>
+  <li>To use the Netscape Extended-2 format, edit the following variables:</li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.extended2_log_enabled</i></code></td>
+      <td>Set this variable to 1 to enable the Netscape Extended-2 log file format.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.extended2_log_is_ascii</i></code></td>
+      <td>Set this variable to 1 to enable ASCII mode.<br />Set this variable to 0 to enable binary mode.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.extended2_log_name</i></code></td>
+      <td>Enter the name you want to use for Netscape Extended-2 event log files. The default is <code>extended2</code>.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.extended2_log_header</i></code></td>
+      <td>Enter the header text you want to display at the top of the Netscape Extended-2 log files. Enter <code>NULL</code> if you do not want to use a header.</td>
+  </tr>
+</table>
+<li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory.</li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
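+<p>As an illustration, the following <code>records.config</code> excerpt (a sketch) enables the Squid format in binary mode, keeps the default file name, and omits the header:</p>
+<pre>CONFIG proxy.config.log.squid_log_enabled INT 1
+CONFIG proxy.config.log.squid_log_is_ascii INT 0
+CONFIG proxy.config.log.squid_log_name STRING squid
+CONFIG proxy.config.log.squid_log_header STRING NULL</pre>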
+<h3 id="UsingCustomFormat">Using the Custom Format</h3>
+<p>The XML-based custom log format is more flexible than the standard log file formats and gives you more control over the type of information recorded in log files. You should create a custom log format if you need data for analysis that's not available in the standard formats. You can decide what information to record for each Traffic Server transaction and create filters that specify which transactions to log. </p>
+<p>The heart of the XML-based custom logging feature is the XML-based logging configuration file (<code>logs_xml.config</code>) that enables you to create very modular descriptions of logging objects. The <code>logs_xml.config</code> file uses three types of objects to create custom log files, as detailed below. To generate a custom log format, you must specify at least one <code>LogObject</code> definition (one log file is produced for each <code>LogObject</code> definition).  </p>
+<ul>
+  <li>The <code><b>LogFormat</b></code> object defines the content of the log file using printf-style format strings. </li>
+  <li>The <code><b>LogFilter</b></code> object defines a filter so that you include or exclude certain information from the log file. </li>
+  <li>The <code><b>LogObject</b></code> object specifies all the information needed to produce a log file. Items marked with an asterisk (*) are required.</li>
+ <ul>
+ <li>*The name of the log file. </li>
+  <li>*The format to be used. This can be a standard format (Squid or Netscape) or a previously-defined custom format (i.e., a previously-defined <code>LogFormat</code> object). </li>
+  <li>The file mode: <code>ASCII</code>, <code>Binary</code>, or <code>ASCII_PIPE</code>. The default is <code>ASCII</code>. <br />
+    The <code>ASCII_PIPE</code> mode writes log entries to a UNIX-named pipe (a buffer in memory); other processes can then read the data using standard I/O functions. The advantage of this option is that Traffic Server does not have to write to disk, which frees disk space and bandwidth for other tasks. When the buffer is full, Traffic Server drops log entries and issues an error message indicating how many entries were dropped. Because Traffic Server only writes  complete log entries to the pipe, only full records are dropped. </li>
+  <li>Any filters you want to use (i.e., previously-defined <code>LogFilter</code> objects). </li>
+  <li>The collation servers that are to receive the log files. </li>
+  <li>The protocols you want to log. If the protocols tag is used, then Traffic Server will only log transactions from the protocols listed; otherwise, all transactions for all protocols are logged. </li>
+  <li>The origin servers you want to log. If the <code>servers</code> tag is used, then Traffic Server will only log transactions for the origin servers listed; otherwise, transactions for all origin servers are logged. </li>
+  <li>The header text you want the log files to contain. The header text appears at the beginning of the log file, just before the first record. </li>
+  <li>The log file rolling options.</li>
+ </ul>
+</ul>
+<h5>To generate a custom log format: </h5>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.custom_logs_enabled</i></code></td>
+      <td>Set this variable to 1 to enable custom logging.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.xml_logs_config</i></code></td>
+      <td>Make sure this variable is set to 1 (the default value).</td>
+  </tr>
+</table>
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Open the <code>logs_xml.config</code> file located in the Traffic Server <code>config</code> directory. </li>
+  <li>Add <code>LogFormat</code>, <code>LogFilter</code>, and <code>LogObject</code> specifications to the configuration file. For detailed information about this file, see  <a href="files.htm#logs_xml.config">logs_xml.config</a>.</li>
+  <li>Save and close the <code>logs_xml.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory.  </li>
+  <li>Run the command <code>traffic_line -x</code> to apply your configuration changes. </li>
+</ol>
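+<p>As a minimal sketch (the format name, file name, and field codes shown here are illustrative; refer to <a href="files.htm#logs_xml.config">logs_xml.config</a> for the full list of logging fields), the following <code>logs_xml.config</code> entries define a custom format that records the client IP address, the requested URL, and the proxy response status code, plus a <code>LogObject</code> that writes it as an ASCII log file:</p>
+<pre>&lt;LogFormat&gt;
+  &lt;Name = &quot;minimal&quot;/&gt;
+  &lt;Format = &quot;%&lt;chi&gt; : %&lt;cqu&gt; : %&lt;pssc&gt;&quot;/&gt;
+&lt;/LogFormat&gt;
+
+&lt;LogObject&gt;
+  &lt;Format = &quot;minimal&quot;/&gt;
+  &lt;Filename = &quot;minimal&quot;/&gt;
+  &lt;Mode = &quot;ascii&quot;/&gt;
+&lt;/LogObject&gt;</pre>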
+<h4>Creating Summary Log Files </h4>
+<p>Traffic Server performs several hundred operations per second; therefore, event log files can quickly grow  to large sizes. Using SQL-like aggregate operators, you can configure Traffic Server to create summary log files that summarize a set of log entries over a specified period of time. This can significantly reduce the size of the log files generated. </p>
+<p>To generate a summary log file, create a <code>LogFormat</code> object in the XML-based logging configuration file (<code>logs_xml.config</code>) using the  SQL-like aggregate operators below. You can apply each of these operators to specific fields, over a specified interval.</p>
+<ul>
+  <li><code>COUNT </code></li>
+  <li><code>SUM </code></li>
+  <li><code>AVERAGE </code></li>
+  <li><code>FIRST </code></li>
+  <li><code>LAST </code></li>
+</ul>
+<h5>To create a summary log file format: </h5>
+<ol>
+  <li>Access the <code>logs_xml.config</code> file  located in the Traffic Server <code>config</code> directory. </li>
+ <li>Define the format of the log file as follows:<br /><code>&lt;LogFormat&gt;<br />   &lt;Name = &quot;summary&quot;/&gt;<br />   &lt;Format = &quot;%&lt;<em>operator</em>(<em>field</em>)&gt; : %&lt;<em>operator</em>(<em>field</em>)&gt;&quot;/&gt;<br />   &lt;Interval = &quot;<em>n</em>&quot;/&gt;<br /> &lt;/LogFormat&gt;<br />
+ </code>
+   where <em><code>operator</code></em> is one of the five aggregate operators (<code>COUNT</code>, <code>SUM</code>, <code>AVERAGE</code>, <code>FIRST</code>, <code>LAST</code>), <em><code>field</code></em> is the logging field  you want to aggregate, and <code>   <em>n</em></code> is the interval (in seconds) between summary log entries.  You can specify more than one <code><i>operator</i></code> in the format line. For more information, refer to <a href="files.htm#logs_xml.config">logs_xml.config</a>.<br />
+   <br />
+   The following example format generates one entry every 10 seconds. Each entry contains the timestamp of the last entry of the interval, a count of the number of entries seen within that 10-second interval, and the sum of all bytes sent to the client:
+   <br /><code>&lt;LogFormat&gt;<br />   &lt;Name = &quot;summary&quot;/&gt;<br />   &lt;Format = &quot;%&lt;LAST(cqts)&gt; : %&lt;COUNT(*)&gt; : %&lt;SUM(psql)&gt;&quot;/&gt;<br />   &lt;Interval = &quot;10&quot;/&gt;<br /> &lt;/LogFormat&gt;</code><br />
+   <br />
+   <strong>IMPORTANT: </strong>You cannot create a format specification that contains both aggregate operators and regular fields. For example, the following specification would be invalid:
+   <br /><code>&lt;Format = &quot;%&lt;LAST(cqts)&gt; : %&lt;COUNT(*)&gt; : %&lt;SUM(psql)&gt; : %&lt;cqu&gt;&quot;/&gt;</code><br />
+ </li>
+ <li>Define a <code>LogObject</code> that uses this format (a sketch of such an object follows this procedure). </li>
+ <li>Save your changes and close the <code>logs_xml.config</code> file. Run the command <code>traffic_line -x</code> from the Traffic Server <code>bin</code> directory to apply the configuration changes. </li>
+</ol>
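+<p>A <code>LogObject</code> that uses the summary format defined above might look like the following sketch; the file name shown is illustrative:</p>
+<pre>&lt;LogObject&gt;
+  &lt;Format = &quot;summary&quot;/&gt;
+  &lt;Filename = &quot;summary&quot;/&gt;
+&lt;/LogObject&gt;</pre>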
+<h3 id="ChoosingBinaryASCII">Choosing Binary or ASCII </h3>
+<p>You can configure Traffic Server to create event log files in either of the following formats: </p>
+<ul>
+  <li><b>ASCII</b><br />
+    These files are human-readable and can be processed using standard, off-the-shelf log analysis tools. However, Traffic Server must perform additional processing to create the files in ASCII, which  mildly impacts system overhead. ASCII files also tend to be larger than the equivalent binary files. By default, ASCII log files have a <code>.log</code> filename extension. </li>
+  <li><b>Binary</b><br />
+    These files generate lower system overhead and generally occupy less space on the disk than ASCII files (depending on the type of information being logged). However, you must use a converter application before you can read or analyze binary files via standard tools. By default, binary log files use a <code>.blog</code> filename extension. </li>
+</ul>
+<p>While binary log files typically require less disk space,  there are exceptions. <br/>
+  For example: the value <code>0</code> (zero) requires only one byte to store in ASCII, but requires four bytes when stored as a binary integer. Conversely: if you define a custom format that logs IP addresses, then a binary log file would only require four bytes of storage per 32-bit address. However, the same IP address stored in dot notation would require around 15 characters (bytes) in an ASCII log file.  Therefore, it's wise to consider the type of data that will be logged before you select ASCII or binary for your log files. For example, you might try logging for one day using ASCII and then another day using binary. If the number of requests is roughly the same for both days, then you can calculate a rough metric that compares the two formats. </p>
+<p>For standard log formats,  select Binary or ASCII (refer to <a href="#SettingStandardLogFileFormatOptions">Setting Standard Log File Format Options</a>). For the custom log format,  specify ASCII or Binary mode in the <code>LogObject</code> (refer to <a href="#UsingCustomFormat">Using the Custom Format</a>). In addition to the ASCII and binary options, you can also write custom log entries to a UNIX-named pipe (i.e., a buffer in memory). Other processes can then read the data using standard I/O functions. The advantage of using this option is that Traffic Server does not have to write to disk, which frees disk space and bandwidth for other tasks. In addition, writing to a pipe does not stop when logging space is exhausted because the pipe does not use disk space. Refer to <a href="files.htm#logs_xml.config">logs_xml.config</a> for more information about the <code>ASCII_PIPE</code> option.</p>
+<h3>Using logcat to Convert Binary Logs to ASCII </h3>
+<p>You must convert a binary log file to ASCII before you can analyze it using standard tools. </p>
+<h5>To convert a binary log file to ASCII: </h5>
+<ol>
+  <li>Navigate to the directory that contains the binary log file. </li>
+  <li>Make sure that the <code>logcat</code> utility is in your path. </li>
+  <li>Enter the following command: <pre>logcat <em>options input_filename</em>...</pre> 
+  The following table describes the command-line options. <br />
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Option</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code>-o <i>output_file</i></code></td>
+      <td>Specifies where the command output is directed.</td>
+  </tr>
+   <tr>
+      <td><code>-a</code></td>
+      <td>Automatically generates the output filename based on the input filename. If the input is from stdin, then this option is ignored. For example:<br /><code>logcat -a squid-1.blog squid-2.blog squid-3.blog</code><br />generates<br /><code>squid-1.log, squid-2.log, squid-3.log</code></td>
+  </tr>
+   <tr>
+      <td><code>-S</code></td>
+      <td>Attempts to transform the input to Squid format, if possible.</td>
+  </tr>
+   <tr>
+      <td><code>-C</code></td>
+      <td>Attempts to transform the input to Netscape Common format, if possible.</td>
+  </tr>
+   <tr>
+      <td><code>-E</code></td>
+      <td>Attempts to transform the input to Netscape Extended format, if possible.</td>
+  </tr>
+   <tr>
+      <td><code>-2</code></td>
+      <td>Attempt to transform the input to Netscape Extended-2 format, if possible.</td>
+  </tr>
+</table>
+<br />
+  <strong>Note: </strong>Use only one of the following options at any given time: <code>-S</code>, <code>-C</code>, <code>-E</code>, or <code>-2</code>. <br />
+  If no input files are specified, then <code>logcat</code> reads from the standard input (<code>stdin</code>). If you do not specify an output file, then <code>logcat</code> writes to the standard output (<code>stdout</code>). <p> For example, to convert a binary log file to an ASCII file, you can use the <code>logcat</code> command in either of the following ways:
+    <pre>logcat binary_file &gt; ascii_file<br />logcat -o ascii_file binary_file</pre>
+
+The binary log file is not modified by this command. </li></ol>
+<h2><a name="RollingEventLogFiles"></a>Rolling Event Log Files</h2>
+<p>Traffic Server provides automatic log file rolling. This means that at specific intervals during the day or when log files reach a certain size, Traffic Server closes its current set of log files and opens new log files.  You should roll log files several times a day. Rolling every six hours is a good guideline to start with. </p>
+<p>Log file rolling offers the following benefits: </p>
+<ul>
+  <li>It defines an interval over which log analysis can be performed. </li>
+  <li>It keeps any single log file from becoming too large and helps to keep the logging system within the specified space limits. </li>
+  <li>It provides an easy way to identify files that are no longer being used so that an automated script can clean the logging directory and run log analysis programs.</li>
+</ul>
+<h3>Rolled Log Filename Format </h3>
+<p>Traffic Server provides a consistent naming scheme for rolled log files that enables you to easily identify log files.  When Traffic Server rolls a log file, it saves and closes the old file before it starts a new file. Traffic Server renames the old file to include the following information: </p>
+<ul>
+  <li>The format of the file (such as <code>squid.log</code>). </li>
+  <li>The hostname of the Traffic Server that generated the log file. </li>
+  <li>Two timestamps separated by a hyphen (-). The first timestamp is a <b>lower bound</b> for the timestamp of the first record in the log file. The lower bound is the time when the new buffer for log records is created. Under low load, the first timestamp in the filename can be different from the timestamp of the first entry. Under normal load, the first timestamp in the filename and the timestamp of the first entry are similar.  The second timestamp is an <b>upper bound </b>for the timestamp of the last record in the log file (this is normally the rolling time). </li>
+  <li>The suffix <code>.old</code>, which makes it easy for automated scripts to find rolled log files. </li>
+</ul>
+<p>Timestamps have the following format: <br /><code>%Y%M%D.%Hh%Mm%Ss-%Y%M%D.%Hh%Mm%Ss</code></p>
+<p>The following table describes the format: </p>
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Code</th>
+      <th width="894" scope="col">Description</th>
+  </tr>
+    <tr>
+      <td><code>%Y</code></td>
+      <td>The year in four-digit format. For example: 2000.</td>
+  </tr>
+  <tr>
+      <td><code>%M</code></td>
+      <td>The month in two-digit format, from 01-12. For example: 07.</td>
+  </tr>
+  <tr>
+      <td><code>%D</code></td>
+      <td>The day in two-digit format, from 01-31. For example: 19.</td>
+  </tr>
+  <tr>
+      <td><code>%H</code></td>
+      <td>The hour in two-digit format, from 00-23. For example: 21.</td>
+  </tr>
+  <tr>
+      <td><code>%M</code></td>
+      <td>The minute in two-digit format, from 00-59. For example: 52.</td>
+  </tr>
+  <tr>
+      <td><code>%S</code></td>
+      <td>The second in two-digit format, from 00-59. For example: 36.</td>
+  </tr>
+</table>
+<br />
+<p>The following is an example of a rolled log filename: <br /><code>squid.log.mymachine.20000912.12h00m00s-20000913.12h00m00s.old</code><br /> 
+<br /> 
+The logging system buffers log records before writing them to disk. When a log file is rolled, the log buffer might be partially full. If it is, then the first entry in the new log file will have a timestamp earlier than the time of rolling. When the new log file is rolled, its first timestamp will be a lower bound for the timestamp of the first entry. </p>
+<p>For example, suppose logs are rolled every three hours, and the first rolled log file is:<br />
+  <code>squid.log.mymachine.19980912.00h00m00s-19980912.03h00m00s.old</code><br />
+  <br />
+  If the lower bound for the first entry in the log buffer at 3:00:00 is 2:59:47, then the next log file will have the following timestamp when rolled:
+  <br />
+  <code>squid.log.mymachine.19980912.02h59m47s-19980912.06h00m00s.old</code></p>
+<p>The contents of a log file are always between the two timestamps. Log files do not contain overlapping entries, even if successive timestamps appear to overlap. </p>
+<h3>Rolling Intervals</h3>
+<p>Log files are rolled at specific intervals relative to a given hour of the day. Two options control when log files are rolled: </p>
+<ul>
+  <li>The offset hour, which is an hour between 0 (midnight) and 23 </li>
+  <li>The rolling interval </li>
+</ul>
+<p>Both the offset hour and the rolling interval determine when log file rolling starts. Rolling occurs every rolling interval and at the offset hour. For example, if the rolling interval is six hours and the offset hour is 0 (midnight), then the logs will roll at midnight (00:00), 06:00, 12:00, and 18:00 each day. If the rolling interval is 12 hours and the offset hour is 3, then logs will roll at 03:00 and 15:00 each day. </p>
+<h3>Setting Log File Rolling Options </h3>
+<p>To set log file rolling options and/or configure Traffic Server to roll log files when they reach a certain size, follow the steps below:</p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.rolling_enabled</i></code></td>
+      <td>Set this variable to one of the following values:<br />
+      <code><b>1</b></code> to enable log file rolling at specific intervals during the day.<br />
+      <code><b>2</b></code> to enable log file rolling when log files reach a specific size.<br />
+      <code><b>3</b></code> to enable log file rolling at specific intervals during the day or when log files reach a specific size (whichever occurs first).<br />
+      <code><b>4</b></code> to enable log file rolling at specific intervals during the day, but only if the log file has reached the specified size by the scheduled rolling time.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.rolling_size_mb</i></code></td>
+      <td>Specifies the size that log files must reach before rolling takes place.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.rolling_offset_hr</i></code></td>
+      <td>Set this variable to the specific time each day you want log file rolling to take place. Traffic Server forces the log file to be rolled at the offset hour each day.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.rolling_interval_sec</i></code></td>
+      <td>Set this variable to the rolling interval in seconds. The minimum value is 300 seconds (5 minutes). The maximum value is 86400 seconds (one day).<b> Note:</b> If you start Traffic Server within a few minutes of the next rolling time, then rolling might not occur until the next rolling time.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.auto_delete_rolled_file</i></code></td>
+      <td>Set this variable to 1 to enable autodeletion of rolled files.</td>
+  </tr>
+</table>
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+  </ol>
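+<p>For example, to roll log files every six hours starting at midnight (the guideline suggested above) and automatically delete rolled files when logging space runs low, a <code>records.config</code> excerpt might look like the following sketch:</p>
+<pre>CONFIG proxy.config.log.rolling_enabled INT 1
+CONFIG proxy.config.log.rolling_offset_hr INT 0
+CONFIG proxy.config.log.rolling_interval_sec INT 21600
+CONFIG proxy.config.log.auto_delete_rolled_file INT 1</pre>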
+  <p>You can fine-tune log file rolling settings for a custom log file in the <code>LogObject</code> specification in the <code>logs_xml.config</code> file. The custom log file uses the rolling settings in its <code>LogObject</code>, which override the default settings you specify in Traffic Manager or the <code>records.config</code> file described above. </p>
+<h2 id="SplittingEventLogFiles">Splitting Event Log Files</h2>
+<p>By default, Traffic Server uses standard log formats and generates log files that contain HTTP &amp; ICP transactions  in the same file. However, you can enable log splitting if you prefer to log transactions for different protocols in separate log files. </p>
+<h3>ICP Log Splitting </h3>
+<p>When ICP log splitting is enabled, Traffic Server records ICP transactions in a separate log file with a name that contains <b><code>icp</code></b>. For example: if you enable the Squid format, then all ICP transactions are recorded in the <code>squid-icp.log</code> file.  When you disable ICP log splitting, Traffic Server records all ICP transactions in the same log file as HTTP transactions. </p>
+<h3><a name="HTTPHostLogSplitting"></a>HTTP Host Log Splitting </h3>
+<p>HTTP host log splitting enables you to record HTTP   transactions for different origin servers in separate log files. When HTTP host log splitting is enabled, Traffic Server creates a separate log file for each origin server that's listed in the<code> </code><a href="#EditingLogHostsConfigFile">log_hosts.config</a> file. When both ICP and HTTP host log splitting are  enabled, Traffic Server generates separate log files for HTTP transactions (based on the origin server) and places all ICP transactions in their own respective log files. For example, if the <code>log_hosts.config</code> file contains the two origin servers <code>uni.edu</code> and <code>company.com</code> and  Squid format is enabled, then Traffic Server generates the following log files: </p>
+<br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Log Filename</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code>squid-uni.edu.log</code></td>
+      <td>All HTTP   transactions for <code>uni.edu</code></td>
+  </tr>
+  <tr>
+      <td><code>squid-company.com.log</code></td>
+      <td>All HTTP   transactions for <code>company.com</code></td>
+  </tr>
+  <tr>
+      <td><code>squid-icp.log</code></td>
+      <td>All ICP transactions for all hosts</td>
+  </tr>
+  <tr>
+      <td><code>squid.log</code></td>
+      <td>All HTTP   transactions for other hosts</td>
+  </tr>
+</table>
+<br />
+<p>If you disable ICP log splitting, then ICP transactions are placed in the same log file as HTTP   transactions. Using the hosts and log format from  the previous example, Traffic Server generates the log files below:</p>
+<br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Log Filename</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code>squid-uni.edu.log</code></td>
+      <td>All entries for <code>uni.edu</code></td>
+  </tr>
+  <tr>
+      <td><code>squid-company.com.log</code></td>
+      <td>All entries for <code>company.com</code></td>
+  </tr>
+  <tr>
+      <td><code>squid.log</code></td>
+      <td>All other entries</td>
+  </tr>
+</table>
+<br />
+<p>Traffic Server also enables you to create XML-based <a href="#UsingCustomFormat">Custom Log Formats</a> that offer even greater control over log file generation. </p>
+<h3>Setting Log Splitting Options </h3>
+<p>To set log splitting options, follow the steps below:</p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.separate_icp_logs</i></code></td>
+      <td>Set this variable to 1 to record all ICP transactions in a separate log file. <br /> Set this variable to 0 to record all ICP transactions in the same log file as HTTP   transactions. <br /> Set this variable to -1 to filter all ICP transactions from the standard log files.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.separate_host_logs</i></code></td>
+      <td>Set this variable to 1 to record HTTP   transactions for each host listed in <code>log_hosts.config</code> file in a separate log file. <br />        Set this variable to 0 to record all HTTP transactions (for each host listed in  <code>log_hosts.config</code>) in the same log file.</td>
+  </tr>
+</table>
+<br />
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
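+<p>For example, a sketch of the relevant <code>records.config</code> lines that enable both kinds of splitting (assuming the <code>CONFIG <em>name</em> <em>TYPE</em> <em>value</em></code> record syntax your <code>records.config</code> file already uses): <br /><code>CONFIG proxy.config.log.separate_icp_logs INT 1<br />CONFIG proxy.config.log.separate_host_logs INT 1</code></p>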
+<h3 id="EditingLogHostsConfigFile">Editing the log_hosts.config File </h3>
+<p>The default <code>log_hosts.config</code> file is located in the Traffic Server <code>config</code> directory. To record HTTP transactions for different origin servers in separate log files, you must specify the hostname of each origin server on a separate line in the <code>log_hosts.config</code> file. For example, if you specify the keyword <code>sports</code>, then Traffic Server records all HTTP transactions from <code>sports.yahoo.com</code> and <code>www.foxsports.com</code> in a log file called <code>squid-sports.log</code> (if the Squid format is enabled). </p>
+<p><strong>Note: </strong>If Traffic Server is clustered and you enable log file collation, then you should use the same <code>log_hosts.config</code> file on every Traffic Server node in the cluster.  </p>
+<h5>To edit the log_hosts.config file follow the steps below: </h5>
+<ol>
+  <li>In a text editor, open the <code>log_hosts.config</code> file located in the Traffic Server <code>config</code> directory.  </li>
+  <li>Enter the hostname of each origin server on a separate line in the file: for example, <br /><code>webserver1<br />webserver2<br />webserver3</code><br /> 
+  </li>
+  <li>Save and close the <code>log_hosts.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory.  </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
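+<p>As an illustration, a <code>log_hosts.config</code> file that contains the two hostnames from the earlier example: <br /><code>uni.edu<br />company.com</code><br />produces, with the Squid format and HTTP host log splitting enabled, the per-host files <code>squid-uni.edu.log</code> and <code>squid-company.com.log</code>, plus <code>squid.log</code> for all other hosts.</p>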
+<h2><a name="CollatingEventLogFiles"></a>
+Collating Event Log Files</h2>
+<p>You can use the Traffic Server log file collation feature to collect all logged information in one place. Log collation enables you to analyze a set of Traffic Server clustered nodes as a whole (rather than as individual nodes) and to use a large disk that might only be located on one of the nodes in the cluster. Traffic Server collates log files by using one or more nodes as log collation servers and all remaining nodes as log collation clients. When a Traffic Server node generates a buffer of event log entries, it first determines if it is the collation server or a collation client. The collation server node writes all log buffers to its local disk, just as it would if log collation were not enabled. Log collation servers can be standalone or they can be part of a node running Traffic Server. </p>
+<p>The collation client nodes prepare their log buffers for transfer across the network and send the buffers to the log collation server. When the log collation server receives a log buffer from a client, it writes it to its own log file as if the buffer had been generated locally. For a visual representation, see the figure below. </p>
+<p><img src="images/logcolat.jpg" width="870" height="505" /></p>
+<blockquote>
+  <p><em><b>Log collation </b></em></p>
+</blockquote>
+<p>If log clients cannot contact their log collation server, then they write their log buffers to their local disks, into <em>orphan</em> log files. Orphan log files require manual collation.  </p>
+<p><strong>Note: </strong>Log collation can have an impact on network performance. Because all nodes are forwarding their log data buffers to the single collation server, a bottleneck can occur.<strong> </strong>In addition, collated log files contain timestamp information for each entry, but entries in the files do not appear in strict chronological order. You may want to sort collated log files before doing analysis.  </p>
+<p>To configure Traffic Server to collate event log files, you must perform the following tasks: </p>
+<ul>
+  <li>Either <a href="#ConfiguringTrafficEdgeCollationServer">Configure a Traffic Server Node to Be a Collation Server</a> or install and configure a <a href="#UsingStandaloneCollator">Standalone Collator</a>. </li>
+  <li><a href="#ConfiguringTrafficEdgeCollationClient">Configure Traffic Server Nodes to Be Collation Clients</a>. </li>
+  <li>Add an attribute to the <code>LogObject</code> specification in the <code>logs_xml.config</code> file if you are using custom log file formats; refer to <a href="#CollatingCustomEventLogFiles">Collating Custom Event Log Files</a>. <br />
+  </li>
+</ul>
+<h3 id="ConfiguringTrafficEdgeCollationServer">Configuring Traffic Server to Be a Collation Server </h3>
+<p>To configure a Traffic Server node to be a collation server, edit a configuration file via the steps below. If you modify the <code><i>collation port</i></code> or <i><code>secret</code></i> after connections between the collation server and collation clients have been established, then you must restart Traffic Server. </p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+ <li>Edit the following variables: </li>
+ <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.collation_mode</i></code></td>
+      <td>Set this variable to 1 to set this Traffic Server node as a log collation server.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.collation_port</i></code></td>
+      <td>Set this variable to specify the port number used for communication with collation clients. The default port number is 8085.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.collation_secret</i></code></td>
+      <td>Set this variable to specify the password used to validate logging data and prevent the exchange of arbitrary information.<br />All collation clients must use this same secret.</td>
+  </tr>
+</table>
+<br />
+ <li>Save and close the <code>records.config</code> file. </li>
+ <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+ <li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
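+<p>For example, a sketch of the collation server settings in <code>records.config</code> (the secret shown is illustrative; the lines assume the <code>CONFIG <em>name</em> <em>TYPE</em> <em>value</em></code> record syntax already used in the file): <br /><code>CONFIG proxy.config.log.collation_mode INT 1<br />CONFIG proxy.config.log.collation_port INT 8085<br />CONFIG proxy.config.log.collation_secret STRING mysecret</code></p>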
+<h3 id="UsingStandaloneCollator">Using a Standalone Collator </h3>
+<p>If you do not want the log collation server to be a Traffic Server node, then you can install and configure a standalone collator (SAC) that will dedicate more of its power to collecting, processing, and writing log files.</p>
+<h5>To install and configure a standalone collator: </h5>
+<ol>
+  <li>Configure your Traffic Server nodes as log collation clients; refer to <a href="#ConfiguringTrafficEdgeCollationClient">Configuring Traffic Server to Be a Collation Client</a>. </li>
+  <li>Copy the <code>sac</code> binary from the Traffic Server <code>bin</code> directory to the machine serving as the standalone collator. </li>
+  <li>Create a directory called <code>config</code> in the directory that contains the <code>sac</code> binary. </li>
+  <li>Create a directory called <em><code>internal</code></em> in the <code>config</code> directory you created in Step 3 (above). This directory is used internally by the standalone collator to store lock files. </li>
+  <li>Copy the <code>records.config</code> file from a Traffic Server node configured to be a log collation client to the <code>config</code> directory you created in Step 3 on the standalone collator.  <br />
+  The <code>records.config</code> file contains the log collation secret and the port you specified when configuring Traffic Server nodes to be collation clients. The collation port and secret must be the same for all collation clients and servers.</li>
+  <li>In a text editor, open the <code>records.config</code> file on the standalone collator and edit the following variable: </li>
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.logfile_dir</i></code></td>
+      <td>Set this variable to specify the directory in which you want to store the log files. You can specify an absolute path to the directory or a path relative to the directory from which the <code>sac</code> binary is executed.<br /><strong>Note:</strong> The directory must already exist on the machine serving as the standalone collator.</td>
+  </tr>
+</table>
+<br />
+ <li>Save and close the <code>records.config</code> file. </li>
+  <li>Enter the following command:<br /><code>sac -c config</code></li>
+</ol>
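+<p>As an illustration, assuming the <code>sac</code> binary was copied to a hypothetical <code>/opt/sac</code> directory, the resulting layout and invocation might look like this: <br /><code>/opt/sac/sac<br />/opt/sac/config/records.config<br />/opt/sac/config/internal/<br /><br />cd /opt/sac<br />./sac -c config</code></p>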
+<h3 id="ConfiguringTrafficEdgeCollationClient">Configuring Traffic Server to Be a Collation Client </h3>
+<p>To configure a Traffic Server node to be a collation client, follow the steps below. If you modify the <code><i>collation port</i></code> or <i><code>secret</code></i> after connections between the collation clients and the collation server have been established, then you must restart Traffic Server. </p>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.collation_mode</i></code></td>
+      <td>Set this variable to 2 to configure this Traffic Server node to be a log collation client and send standard formatted log entries to the collation server.<br />To send custom XML-based formatted log entries to the collation server, you must add a log object specification to the <code>logs_xml.config</code> file; refer to <a href="#UsingCustomFormat">Using the Custom Format</a>.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_host</i></code></td>
+      <td>Hostname of the collation server.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_port</i></code></td>
+      <td>The port used for communication with the collation server. The default port number is 8085.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_secret</i></code></td>
+      <td>The password used to validate logging data and prevent the exchange of arbitrary information.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_host_tagged</i></code></td>
+      <td>Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.<br />Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.max_space_mb_for_orphan_logs</i></code></td>
+      <td>Set this variable to specify the maximum amount of space (in megabytes) you want to allocate to the logging directory on the collation client for storing orphan log files. Orphan log files are created when the log collation server cannot be contacted. The default value is 25 MB.</td>
+  </tr>
+</table>
+<br />
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
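+<p>For example, a sketch of the collation client settings in <code>records.config</code> (the hostname and secret are illustrative): <br /><code>CONFIG proxy.config.log.collation_mode INT 2<br />CONFIG proxy.config.log.collation_host STRING logserver.example.com<br />CONFIG proxy.config.log.collation_port INT 8085<br />CONFIG proxy.config.log.collation_secret STRING mysecret<br />CONFIG proxy.config.log.collation_host_tagged INT 1<br />CONFIG proxy.config.log.max_space_mb_for_orphan_logs INT 25</code></p>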
+<h3 id="CollatingCustomEventLogFiles">Collating Custom Event Log Files </h3>
+<p>If you use custom event log files, then you must edit the <code>logs_xml.config</code> file (in addition to configuring a collation server and collation clients). </p>
+<h5>To collate custom event log files: </h5>
+<ol>
+  <li>On each collation client, open the <code>logs_xml.config</code> file in a text editor (located in the Traffic Server <code>config</code> directory). </li>
+  <li>Add the <code>CollationHosts</code> attribute to the <code>LogObject</code> specification, as shown below:<br /><code>&lt;LogObject&gt;<br />    &lt;Format = &quot;squid&quot;/&gt;<br />    &lt;Filename = &quot;squid&quot;/&gt;<br />    &lt;CollationHosts=&quot;<em>ipaddress</em>:<em>port</em>&quot;/&gt;<br /> &lt;/LogObject&gt;</code><br />
+    where <em><code>ipaddress</code></em> is the hostname or IP address of the collation server to which all log entries (for this object) are forwarded, and <i><code>port</code></i> is the port number for communication between the collation server  and collation clients. <br />
+</li>
+  <li>Save and close the <code>logs_xml.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory.</li>
+  <li>Run the command <code>traffic_line -L</code> to restart Traffic Server on the local node or <code>traffic_line -M</code> to restart Traffic Server on all the nodes in a cluster.</li>
+</ol>
+<h2 id="ViewingLoggingStatistics">Viewing Logging Statistics</h2>
+<p>Traffic Server generates logging statistics that enable you to see the following information: </p>
+<ul>
+  <li>How many log files (formats) are currently being written. </li>
+  <li>The current amount of space  used by the logging directory, which contains all   event and error logs. </li>
+  <li>The number of access events written to log files since Traffic Server installation. This counter represents one entry in one file; if multiple formats are being written, then a single event creates multiple event log entries. </li>
+  <li>The number of access events skipped (because they were filtered) since Traffic Server installation. </li>
+  <li>The number of access events  written to the event error log since Traffic Server installation. </li>
+</ul>
+<p>You can retrieve the statistics via the Traffic Line command-line interface; refer to <a href="monitor.htm">Monitoring Traffic</a>. </p>
+<h2 id="ViewingLogFiles">Viewing Log Files</h2>
+<p>You can view the system, event, and error log files Traffic Server creates. You can also delete a log file or copy it to your local system if you have the correct user permissions. Traffic Server displays only 1 MB of information in the log file. If the log file you select to view is bigger than 1 MB, then Traffic Server truncates the file and displays a warning message indicating that the file is too big.</p>
+<h2 id="ExampleEventLogFileEntries">Example Event Log File Entries</h2>
+<p>This section shows an example log file entry in each of the standard log formats supported by Traffic Server: Squid, Netscape Common, Netscape Extended, and Netscape Extended-2. </p>
+<h3>Squid Format </h3>
+<p>The following figure shows a sample log entry in a <code>squid.log</code> file. </p>
+<p><img src="images/squid_format.jpg" width="884" height="149" /></p>
+<p>The following table describes each field. </p>
+<br />
+<table border="1">
+    <tr>
+      <th width="80" scope="col">Field</th>
+      <th width="80" scope="col">Symbol</th>
+      <th width="950" scope="col">Description</th>
+  </tr>
+    <tr>
+      <td>1</td>
+      <td>cqtq</td>
+      <td>The client request timestamp in Squid format; the time of the client request in seconds since January 1, 1970 UTC (with millisecond resolution). </td>
+  </tr>
+     <tr>
+      <td>2</td>
+      <td>ttms</td>
+      <td>The time Traffic Server spent processing the client request; the number of milliseconds between the time  the client established the connection with Traffic Server and the time  Traffic Server sent the last byte of the response back to the client.</td>
+  </tr>
+     <tr>
+      <td>3</td>
+      <td>chi</td>
+      <td>The IP address of the client’s host machine. </td>
+  </tr>
+     <tr>
+      <td>4</td>
+      <td>crc/pssc</td>
+      <td>The cache result code; how the cache responded to the request: <code>HIT</code>, <code>MISS</code>, and so on. Cache result codes are described <a href="trouble.htm#0_21826">here</a>.<br />        The proxy response status code (the HTTP response status code from Traffic Server to client). </td>
+  </tr>
+     <tr>
+      <td>5</td>
+      <td>psql</td>
+      <td>The length of the Traffic Server response to the client in bytes, including headers and content.</td>
+  </tr>
+     <tr>
+      <td>6</td>
+      <td>cqhm</td>
+      <td>The client request method: <code>GET</code>, <code>POST</code>, and so on.</td>
+  </tr>
+     <tr>
+      <td>7</td>
+      <td>cquc</td>
+      <td>The client request canonical URL; blanks and other characters that might not be parsed by log analysis tools are replaced by escape sequences. The escape sequence is a percentage sign followed by the ASCII code number of the replaced character in hex.</td>
+  </tr>
+     <tr>
+      <td>8</td>
+      <td>caun</td>
+      <td>The username of the authenticated client. A hyphen (-) means that no authentication was required.</td>
+  </tr>
+     <tr>
+      <td>9</td>
+      <td>phr/pqsn</td>
+      <td>The proxy hierarchy route; the route Traffic Server used to retrieve the object.<br />
+      The proxy request server name; the name of the server that fulfilled the request. If the request was a cache hit, then this field contains a hyphen (-).</td>
+  </tr>
+     <tr>
+      <td>10</td>
+      <td>psct</td>
+      <td>The proxy response content type; the object content type taken from the Traffic Server response header.</td>
+  </tr>
+</table>
+<br />
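+<p>For reference, an illustrative (not captured) entry laid out in the same field order would look like this: <br /><code>1387910400.000 25 10.1.1.5 TCP_HIT/200 1460 GET http://www.example.com/index.html - NONE/- text/html</code><br />Here <code>1387910400.000</code> is the request timestamp (cqtq), <code>25</code> the processing time in milliseconds (ttms), <code>10.1.1.5</code> the client IP address (chi), <code>TCP_HIT/200</code> the cache result and proxy response status codes (crc/pssc), and <code>1460</code> the response length (psql), followed by the request method, canonical URL, authenticated username, hierarchy route/server name, and content type.</p>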
+<h3>Netscape Common </h3>
+<p>The following figure shows a sample log entry in a <code>common.log</code> file. </p>
+<p><img src="images/netscape_extended_format.jpg" width="1077" height="176" /></p>
+<h3>Netscape Extended </h3>
+<p>The following figure shows a sample log entry in an <code>extended.log</code> file. </p>
+<p><img src="images/netscape_extended2_format.jpg" width="1095" height="182" /></p>
+<h3>Netscape Extended-2 </h3>
+<p>The following table describes each field in an <code>extended2.log</code> file entry. </p>
+<br />
+<table border="1">
+    <tr>
+      <th width="80" scope="col">Field</th>
+      <th width="80" scope="col">Symbol</th>
+      <th width="950" scope="col">Description</th>
+  </tr>
+   <tr>
+      <td>&nbsp;</td>
+      <td>&nbsp;</td>
+      <td><strong>Netscape Common</strong></td>
+  </tr>
+    <tr>
+      <td>1</td>
+      <td>chi</td>
+      <td>The IP address of the client’s host machine.</td>
+  </tr>
+     <tr>
+      <td>2</td>
+      <td>&nbsp;</td>
+      <td>This hyphen (-) is always present in Netscape log entries. </td>
+  </tr>
+     <tr>
+      <td>3</td>
+      <td>caun</td>
+      <td>The authenticated client username. A hyphen (-) means no authentication was required.</td>
+  </tr>
+     <tr>
+      <td>4</td>
+      <td>cqtd</td>
+      <td>The date and time of the client request, enclosed in brackets.</td>
+  </tr>
+     <tr>
+      <td>5</td>
+      <td>cqtx</td>
+      <td>The request line, enclosed in quotes.</td>
+  </tr>
+      <tr>
+      <td>6</td>
+      <td>pssc</td>
+      <td>The proxy response status code (HTTP reply code).</td>
+  </tr>
+     <tr>
+      <td>7</td>
+      <td>pscl</td>
+      <td>The length of the Traffic Server response to the client in bytes.</td>
+  </tr>
+   <tr>
+      <td>&nbsp;</td>
+      <td>&nbsp;</td>
+      <td><strong>Netscape Extended</strong></td>
+  </tr>
+       <tr>
+      <td>8</td>
+      <td>sssc</td>
+      <td>The origin server response status code.</td>
+  </tr>
+     <tr>
+      <td>9</td>
+      <td>sshl</td>
+      <td>The server response transfer length; the body length in the origin server response to Traffic Server, in bytes.</td>
+  </tr>
+     <tr>
+      <td>10</td>
+      <td>cqbl</td>
+      <td>The client request transfer length; the body length in the client request to Traffic Server, in bytes.</td>
+  </tr>
+       <tr>
+      <td>11</td>
+      <td>pqbl</td>
+      <td>The proxy request transfer length; the body length in the Traffic Server request to the origin server. </td>
+  </tr>
+       <tr>
+      <td>12</td>
+      <td>cqhl</td>
+      <td>The client request header length; the header length in the client request to Traffic Server. </td>
+  </tr>
+       <tr>
+      <td>13</td>
+      <td>pshl</td>
+      <td>The proxy response header length; the header length in the Traffic Server response to the client.</td>
+  </tr>
+       <tr>
+      <td>14</td>
+      <td>pqhl</td>
+      <td>The proxy request header length; the header length in Traffic Server request to the origin server. </td>
+  </tr>
+       <tr>
+      <td>15</td>
+      <td>sshl</td>
+      <td>The server response header length; the header length in the origin server response to Traffic Server. </td>
+  </tr>
+       <tr>
+      <td>16</td>
+      <td>tts</td>
+      <td>The time Traffic Server spent processing the client request; the number of seconds between the time that the client established the connection with Traffic Server and the time that Traffic Server sent the last byte of the response back to the client.</td>
+  </tr>
+    <tr>
+      <td>&nbsp;</td>
+      <td>&nbsp;</td>
+      <td><strong>Netscape Extended2</strong></td>
+  </tr>
+        <tr>
+      <td>17</td>
+      <td>phr</td>
+      <td>The proxy hierarchy route; the route Traffic Server used to retrieve the object.</td>
+  </tr>
+       <tr>
+      <td>18</td>
+      <td>cfsc</td>
+      <td>The client finish status code: <code>FIN</code> if the client request completed successfully or <code>INTR</code> if the client request was interrupted.</td>
+  </tr>
+       <tr>
+      <td>19</td>
+      <td>pfsc</td>
+      <td>The proxy finish status code: <code>FIN</code> if the Traffic Server request to the origin server completed successfully or <code>INTR</code> if the request was interrupted.</td>
+  </tr>
+       <tr>
+      <td>20</td>
+      <td>crc</td>
+      <td>The cache result code; how the Traffic Server cache responded to the request: HIT, MISS, and so on. Cache result codes are described <a href="trouble.htm#0_21826">here</a>.</td>
+  </tr>
+</table>
+<p>&nbsp;</p>
+<h2 id="SupportTraditionalCustomLogging">Support for Traditional Custom Logging</h2>
+<p>Traffic Server supports traditional custom logging in addition to the XML-based custom logging, which is more versatile and therefore recommended.</p>
+<p>Traffic Server's format converter only converts traditional log configuration files named <code>logs.config</code>. If you are using a traditional log configuration file with a name other than <code>logs.config</code>, then you must convert the file yourself after installation; refer to <a href="#UsingCustLogFmtCnvrt">Using cust_log_fmt_cnvrt</a>. If you opt to use traditional custom logging instead of the more versatile XML-based custom logging, then you must enable the traditional custom logging option manually. Furthermore, if you want to configure Traffic Server as a collation client that sends log entries in traditional custom formats, then you must set collation options manually. Use the following procedures. </p>
+<h3>Enabling Traditional Custom Logging </h3>
+<p>To enable traditional custom logging, you must edit a configuration file manually. To edit your existing traditional custom log formats, modify the <a href="files.htm#logs.config">logs.config</a> file as before.</p>
+<h5>To enable traditional custom logging: </h5>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.custom_logs_enabled</i></code></td>
+      <td>Set this variable to 1 to enable custom logging.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.xml_logs_config</i></code></td>
+      <td>Set this variable to 0 to disable XML-based custom logging.</td>
+  </tr>
+</table>
+<br />
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes.</li>
+</ol>
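+<p>For example, a sketch of the two <code>records.config</code> lines after this change (assuming the <code>CONFIG <em>name</em> INT <em>value</em></code> record syntax already used in the file): <br /><code>CONFIG proxy.config.log.custom_logs_enabled INT 1<br />CONFIG proxy.config.log.xml_logs_config INT 0</code></p>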
+<p>To configure your Traffic Server node to be a collation client and send traditional custom log files to the collation server, use the following procedure. </p>
+<h5>To configure Traffic Server as a collation client: </h5>
+<ol>
+  <li>In a text editor, open the <code>records.config</code> file located in the <code>config</code> directory. </li>
+  <li>Edit the following variables: </li>
+    <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Variable</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code><i>proxy.config.log.collation_mode</i></code></td>
+      <td>Set this variable to 3 to configure this Traffic Server node to be a log collation client and send log entries in  traditional custom formats to the collation server.<br />        Set this variable to 4 to configure this Traffic Server node to be a log collation client and send log entries in both  standard formats (Squid, Netscape) and  traditional custom formats to the collation server.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_host</i></code></td>
+      <td>Specify the hostname of the collation server.</td>
+  </tr>
+  <tr>
+      <td><code><i>proxy.config.log.collation_port</i></code></td>
+      <td>Specify the port Traffic Server uses to communicate with the collation server. The default port number is 8085.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_secret</i></code></td>
+      <td>Specify the password used to validate logging data and prevent  exchange of arbitrary information.</td>
+  </tr>
+   <tr>
+      <td><code><i>proxy.config.log.collation_host_tagged</i></code></td>
+      <td>Set this variable to 1 if you want the hostname of the collation client that generated the log entry to be included in each entry.<br />Set this variable to 0 if you do not want the hostname of the collation client that generated the log entry to be included in each entry. </td>
+  </tr>
+</table>
+<br />
+  <li>Save and close the <code>records.config</code> file. </li>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Run the command <code>traffic_line -x</code> to apply the configuration changes. </li>
+</ol>
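+<p>For example, a sketch of the corresponding <code>records.config</code> lines for a client that sends only traditional custom formats (the hostname and secret are illustrative; the remaining variables mirror the earlier collation client example): <br /><code>CONFIG proxy.config.log.collation_mode INT 3<br />CONFIG proxy.config.log.collation_host STRING logserver.example.com<br />CONFIG proxy.config.log.collation_port INT 8085<br />CONFIG proxy.config.log.collation_secret STRING mysecret<br />CONFIG proxy.config.log.collation_host_tagged INT 1</code></p>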
+<h3 id="UsingCustLogFmtCnvrt">Using cust_log_fmt_cnvrt </h3>
+<p>The format converter <code>cust_log_fmt_cnvrt</code> converts your traditional custom log configuration file (<code>logs.config</code>) to an XML-based custom log configuration file (<code>logs_xml.config</code>). This enables you to use XML-based custom logging. </p>
+<h5>To run the format converter: </h5>
+<ol>
+  <li>Navigate to the Traffic Server <code>bin</code> directory. </li>
+  <li>Enter the command <code>cust_log_fmt_cnvrt</code> and include the options you want to use. <br />
+  The format of the command is <br /><code>cust_log_fmt_cnvrt [-o output_file | -a] [-hnVw] [input_file..]</code><br />
+  <br /> 
+  The following table describes the command-line options. </li>
+  <br />
+<table width="1232" border="1">
+    <tr>
+      <th width="322" scope="col">Option</th>
+      <th width="894" scope="col">Description</th>
+    </tr>
+    <tr>
+      <td><code>-o <em>output_file</em></code></td>
+      <td>Specifies the name of the output file; you can specify one output file only. If you specify multiple input files, then the converter combines the converted output from all  files into a single output file. <br />
+        This option and the<b> <code>-a</code></b> option are mutually exclusive. If you want to create multiple output files from multiple input files, then you must use the<b> <code>-a</code></b> option. If you do not specify an output file (using the <code><b>-o</b></code> or <code><b>-a</b></code> options), then output goes to <code>stdout</code>.</td>
+  </tr>
+     <tr>
+      <td><code>-a</code></td>
+      <td>Generates one output file for each input file. The format converter automatically creates the name of the output file  from the name of the input file by replacing <code>.config</code> at the end of the filename with <code>_xml.config</code>.<br /> 
+        <b>Note:</b> If the source filename does not contain a <code>.config</code> extension, then the converter creates the new filename by appending <code>_xml.config</code> to the source filename.</td>
+  </tr>
+     <tr>
+      <td><code>-h</code></td>
+      <td>Displays a description of the <code>cust_log_fmt_cnvrt</code> options.</td>
+  </tr>
+     <tr>
+      <td><code>-n</code></td>
+      <td>Annotates the output file(s) with comments about the success or failure of the translation process for each of the input lines. This option produces a comment at the beginning of the output file(s) that describes  errors  the format converter encountered while converting the file. The comment includes  line number,  input line type (format, filter, or unknown), and either a success status or a description of the error encountered.</td>
+  </tr>
+     <tr>
+      <td><code>-V</code></td>
+      <td>Displays the version of the format converter you are running.</td>
+  </tr>
+     <tr>
+      <td><code>-w</code></td>
+      <td>Overwrites existing output files without warning.  If you do not specify the<b><code> -w</code></b> option, then the format converter does not overwrite existing output files. If you specify an output file that already exists, then the converter does not convert the input file.</td>
+  </tr>
+     <tr>
+      <td><em><code>input_file</code></em></td>
+      <td>Specifies the name of the input file. If you do not specify an input filename, then the format converter takes the input from <code>stdin</code>.</td>
+  </tr>
+</table>
+<br />
+</ol>
+<h4>Examples </h4>
+<p>The following example converts the file <code>logs.config</code> and sends the results to <code>stdout</code>: <br />
+<code>cust_log_fmt_cnvrt logs.config</code> <br /> <br />
+The following example converts a <code>logs.config</code> file into a <code>logs_xml.config</code> file and annotates the output file (<code>logs_xml.config</code>) with comments about the success or failure of the translation process. If a file named <code>logs_xml.config</code> already exists, then the format converter overwrites it. <br /><code>cust_log_fmt_cnvrt -o logs_xml.config -n -w logs.config</code><br /> <br />The following example converts the files <code>x.config</code>, <code>y.config</code>, and <code>z.config</code> into three separate output files called <code>x_xml.config</code>, <code>y_xml.config</code>, and <code>z_xml.config</code>: <br /><code>cust_log_fmt_cnvrt -a x.config y.config z.config </code> <br /> 
+</p>
+
+<!--#include file="bottom.html" -->

Propchange: websites/staging/trafficserver/trunk/content/docs/v2/admin/log.htm
------------------------------------------------------------------------------
    svn:executable = *