Posted to codereview@trafodion.apache.org by DaveBirdsall <gi...@git.apache.org> on 2018/02/01 00:51:31 UTC

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

GitHub user DaveBirdsall opened a pull request:

    https://github.com/apache/trafodion/pull/1429

    [TRAFODION-2927] Refactor and improve UPDATE STATISTICS logging

    This set of changes achieves two things:
    
    1. It provides a mechanism for UPDATE STATISTICS to capture log information for each command, and retain that log in the event of a severe error. (Otherwise the log is deleted.)
    
    2. It refactors the UPDATE STATISTICS logging facility to use the QRLogger/CommonLogger/log4cxx mechanism used by the rest of the SQL engine.
    
    Externals:
    
    The UPDATE STATISTICS LOG statement has been extended as follows:
    
    UPDATE STATISTICS LOG ON -- Starts continuous logging of all UPDATE STATISTICS and SHOWSTATS activity into one log file. Logging continues until the session ends or a different UPDATE STATISTICS LOG command is entered. This is the same behavior as today, except that the log file is now written to the $TRAF_HOME/logs directory like other SQL log files.
    
    UPDATE STATISTICS LOG OFF -- Stops logging. Nothing further is logged until a different UPDATE STATISTICS LOG command is entered. This is the same behavior as today.
    
    UPDATE STATISTICS LOG SYSTEM -- The "SYSTEM" option is new. When specified, each UPDATE STATISTICS statement (but not SHOWSTATS) is logged to a separate log file. If a severe error occurs (e.g. an internal error), the log file is retained. Otherwise it is deleted at the end of the UPDATE STATISTICS command.
    
    A new CQD, USTAT_AUTOMATIC_LOGGING, has been added, with values 'ON' and 'OFF'. If 'ON', UPDATE STATISTICS LOG SYSTEM semantics are in effect by default. If 'OFF', UPDATE STATISTICS LOG OFF semantics are in effect by default. (Note that no CQD equivalent of UPDATE STATISTICS LOG ON is provided, as that is remarkably noisy; if one really wants continuous logging, one must issue UPDATE STATISTICS LOG ON directly.)
    
    The default for this CQD is 'OFF'. So by default no automatic logging is done. On large systems (e.g. scalability testing, or production systems with large tables), it is recommended to set this CQD to 'ON' in the system defaults table. In this way, logs will be captured for long-running UPDATE STATISTICS operations that fail with severe errors.
    
    In terms of design: The existing QRLogger/CommonLogger routines assume a model of one log file per category, which doesn't meet the needs of UPDATE STATISTICS. So, QRLogger has been extended with methods to explicitly configure log files on a per-process, even per-command, basis. For such files, we don't bother logging the process ID and node number in the individual log messages, since all of the messages come from one process. Instead, UPDATE STATISTICS puts the node number, PID, and Linux timestamp of file creation into the log file name itself.
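    The naming scheme described above can be sketched as follows (a sketch only: the function name and parameters are hypothetical; the real code obtains the node number and PID via XPROCESSHANDLE_DECOMPOSE_ and the directory via QRLogger::getRootLogDirectory):

```cpp
#include <cstdio>
#include <string>

// Sketch: build a per-command ustat log name of the form
// <stem>.<node>.<pid>.<epoch-seconds>.txt, mirroring the naming
// scheme described above. nodeNum and pid come from the process
// handle in the real code; here they are plain parameters.
std::string makeUstatLogName(const std::string& dir,
                             const std::string& stem,
                             int nodeNum, int pid, long epochSecs)
{
    char qualifiers[64];
    std::snprintf(qualifiers, sizeof(qualifiers), ".%d.%d.%ld.txt",
                  nodeNum, pid, epochSecs);
    std::string name = dir;
    if (!name.empty())
        name += '/';
    name += stem.empty() ? "ULOG" : stem;  // "ULOG" default when CQD USTAT_LOG is empty
    name += qualifiers;
    return name;
}
```

    Embedding the node, PID, and timestamp in the name keeps each file unique per command, so concurrent UPDATE STATISTICS runs never collide on a log file.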

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/DaveBirdsall/trafodion Trafodion2927

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/trafodion/pull/1429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1429
    
----
commit f473c0cd5a1d895dd8a40d2ce2ec24c12a5b076d
Author: Dave Birdsall <db...@...>
Date:   2018-02-01T00:37:49Z

    [TRAFODION-2927] Refactor and improve UPDATE STATISTICS logging

----


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by zellerh <gi...@git.apache.org>.
Github user zellerh commented on a diff in the pull request:

    https://github.com/apache/trafodion/pull/1429#discussion_r165237344
  
    --- Diff: core/sql/ustat/hs_log.cpp ---
    @@ -389,44 +371,64 @@ NABoolean HSLogMan::GetLogFile(NAString & path, const char* cqd_value)
     /*          until either StartLog() or         */
     /*          ClearLog() methods are called.     */
     /***********************************************/
    -void HSLogMan::StartLog(NABoolean needExplain, const char* logFileName)
    +void HSLogMan::StartLog(NABoolean needExplain)
       {
    -    // The GENERATE_EXPLAIN cqd captures explain data pertaining to dynamic
    -    // queries. Ordinarily we want it on, but for just-in-time logging triggered
    -    // by an error, we don't need it, and can't set it because HSFuncExecQuery
    -    // clears the diagnostics area, which causes the error to be lost.
    -    explainOn_ = needExplain;
    -    if (!needExplain ||
    -        HSFuncExecQuery("CONTROL QUERY DEFAULT GENERATE_EXPLAIN 'ON'") == 0)
    +    if (!logNeeded_)  // if logging isn't already on
           {
    -        CollIndex activeNodes = gpClusterInfo->numOfSMPs();
    -        if (logFileName)
    -        {
    -          *logFile_ = logFileName;
    -           currentTimingEvent_ = -1;
    -           startTime_[0] = 0;           /* reset timer           */
    -           logNeeded_ = TRUE;
    -        }
    -        else if(activeNodes > 2)
    -        {//we consider we are running on cluster 
    -         //if gpClusterInfo->numOfSMPs() > 2
    -           NABoolean ret = FALSE;
    -           if(GetLogFile(*logFile_, ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG)))
    -             ret = ContainDirExist(logFile_->data());
    -
    -           if(ret)
    -             logNeeded_ = TRUE;
    -
    -           currentTimingEvent_ = -1;
    -           startTime_[0] = 0;           /* reset timer           */
    -        }
    +        // Construct logfile name incorporating process id and node number. Note that
    +        // the 2nd parameter of processhandle_decompose is named cpu but is actually
    +        // the node number for Seaquest (the 4th param, named nodenumber, is the cluster
    +        // number).
    +        Int32 nodeNum;
    +        Int32 pin;
    +        SB_Phandle_Type procHandle;
    +        XPROCESSHANDLE_GETMINE_(&procHandle);
    +        XPROCESSHANDLE_DECOMPOSE_(&procHandle, &nodeNum, &pin);
    +        long currentTime = (long)time(0);
    +
    +        const size_t MAX_FILENAME_SIZE = 50;
    +        char qualifiers[MAX_FILENAME_SIZE];
    +        sprintf(qualifiers, ".%d.%d.%ld.txt", nodeNum, pin, currentTime);
    +   
    +        std::string logFileName;
    +        QRLogger::getRootLogDirectory(CAT_SQL_USTAT, logFileName /* out */);
    +        if (logFileName.size() > 0)
    +          logFileName += '/';
    +
    +        const char * ustatLog = ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG);
    +        const char * fileNameStem = ustatLog + strlen(ustatLog);
    +        if (ustatLog == fileNameStem) 
    +          fileNameStem = "ULOG";  // CQD USTAT_LOG is the empty string
             else
    -        {
    -          *logFile_ = ActiveSchemaDB()->getDefaults().getValue(USTAT_LOG);
    -           currentTimingEvent_ = -1;
    -           startTime_[0] = 0;           /* reset timer           */
    -           logNeeded_ = TRUE;
    -        }
    +          {
    +            // strip off any directory path name; we will always use the logs directory
    +            // as configured via QRLogger
    +            while ((fileNameStem > ustatLog) && (*(fileNameStem - 1) != '/'))
    +              fileNameStem--;
    +          }
    +
    +        logFileName += fileNameStem;
    +        logFileName += qualifiers;
    +
    +        NABoolean logStarted = QRLogger::startLogFile(CAT_SQL_USTAT,logFileName.c_str());
    --- End diff --
    
    This could allow a user to create a file anywhere the trafodion user has write access. For UDFs, we are creating a sandbox in $TRAF_HOME/udr for different types of users, beginning with a public area $TRAF_HOME/udr/public. I wonder whether it would be better here to sandbox the log files as well, e.g. into something like $TRAF_HOME/logs/ustat_logs. In the UDF code we also try to catch names like a/../../../export/lib/myfile that would escape the sandbox.
    
    A related comment/question (a non-issue), maybe I can bring it up here as well, as it's related to the choice of log file directory: As far as I understand, these log files don't conform to the format we are using for executor, TM, DCS, etc. Therefore they can't be read by the event_log_reader UDF and usually they shouldn't be, unless they get placed into one of the directories where this UDF looks (e.g. $TRAF_HOME/logs) and they have a name prefix that matches the list supported by this UDF (e.g. "master_exec_" or "mon"). So, there should be no issue. To be safe, I would recommend keeping the update stats log files in a separate directory (maybe a subdirectory), so that the event_log_reader UDF doesn't have to look at them each time it is called. Also, this avoids errors for all users if somebody accidentally chooses one of the special prefixes like "mon" for their ustat log files.
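    The kind of escape check mentioned above — catching names like a/../../../export/lib/myfile before they climb out of a sandbox directory — could look like this sketch (hypothetical helper, not the actual UDF sandbox code):

```cpp
#include <string>

// Sketch: returns true if the name is safe to place inside a sandbox
// directory, i.e. it is relative and contains no ".." path component
// that could climb out of the sandbox.
bool staysInSandbox(const std::string& name)
{
    if (name.empty() || name[0] == '/')
        return false;                      // absolute paths escape trivially
    std::string::size_type pos = 0;
    while (pos < name.size())
    {
        std::string::size_type slash = name.find('/', pos);
        if (slash == std::string::npos)
            slash = name.size();
        if (name.compare(pos, slash - pos, "..") == 0)
            return false;                  // "a/../b"-style escape attempt
        pos = slash + 1;
    }
    return true;
}
```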


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/trafodion/pull/1429


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by zellerh <gi...@git.apache.org>.
Github user zellerh commented on a diff in the pull request:

    https://github.com/apache/trafodion/pull/1429#discussion_r165242893
  
    
    Sorry, I was thinking that QRLogger::getRootLogDirectory() might return an empty string and thought that lines 406 and 407 only remove the last name part of the USTAT_LOG directory.
    
    Would this be another way to overwrite an arbitrary file: CQD USTAT_LOG '../../export/lib'?


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by DaveBirdsall <gi...@git.apache.org>.
Github user DaveBirdsall commented on a diff in the pull request:

    https://github.com/apache/trafodion/pull/1429#discussion_r165241858
  
    
    Re: your first paragraph. The code at line 394 gets the directory of the SQL category (which is presently configured as $TRAF_HOME/logs). The code at lines 398-408 strips off any directory path that a user might have specified in CQD USTAT_LOG. So, no matter what, files should always be created in $TRAF_HOME/logs. Regarding the idea of sandboxing in a sub-directory, I'll consider that; it would avoid the issue in your second paragraph.
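    The stripping step referred to here can be seen in isolation with this standalone sketch of the same backward-walking pointer loop from the diff:

```cpp
#include <cstring>

// Sketch of the basename-stripping loop from HSLogMan::StartLog:
// walk backward from the end of the string until hitting a '/'
// (or the start of the string), so any directory path the user put
// in CQD USTAT_LOG is discarded and only the file name stem remains.
const char* fileNameStem(const char* ustatLog)
{
    const char* stem = ustatLog + std::strlen(ustatLog);
    if (stem == ustatLog)
        return "ULOG";                     // empty CQD value: use the default stem
    while (stem > ustatLog && *(stem - 1) != '/')
        stem--;
    return stem;
}
```

    With this logic, a CQD value like '../../export/lib' reduces to just 'lib', so the file still lands under the QRLogger-configured logs directory.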


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by zellerh <gi...@git.apache.org>.
Github user zellerh commented on a diff in the pull request:

    https://github.com/apache/trafodion/pull/1429#discussion_r165243285
  
    
    Sorry, never mind, my apologies for being slow, I realize my mistake now. I was looking at the part that got removed instead of the part that gets retained in fileNameStem.


---

[GitHub] trafodion pull request #1429: [TRAFODION-2927] Refactor and improve UPDATE S...

Posted by zellerh <gi...@git.apache.org>.
Github user zellerh commented on a diff in the pull request:

    https://github.com/apache/trafodion/pull/1429#discussion_r166089844
  
    
    +1
    Looks good to me. Nice idea to make the format similar to the other log files, so we could easily change the event_log_reader UDF to pick up these ustat logs as well.


---