Posted to dev@trafodion.apache.org by Dave Birdsall <da...@esgyn.com> on 2016/08/22 22:59:22 UTC

Log files for SQL processes in Trafodion

Hi,



I’m interested in logging some information to log files from Trafodion SQL
processes.



I’m aware of the QRLogger class and the sqlmxevents directory.



The situation is this:



1. In metadata upgrade, I want to drop user views on the Repository
tables.

2. I’d like to be nice to the user and log the view text so the user
can recreate the views afterward if desired.

3. SQL log files are a pain to find. Let’s face it. On workstations,
they are in incubator-trafodion/core/sqf/logs, and on clusters they are
somewhere else (I forget just where). And there are zillions of them, one
per process.

4. So, I’d like to tell the user, as part of the metadata upgrade
output, just which log file I logged the view text to.



Now the question: How do I find the log file name programmatically? It
looks like QRLogger and sqlmxevents don’t provide a method for this. Under
the covers they use log4cxx and it’s not obvious how to get this info out
of log4cxx. I’m hoping someone knows the answer before I spend time teasing
it out of log4cxx.
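
For reference, log4cxx does seem to expose the file name on its
FileAppender, so something along these lines might work. This is an
untested sketch; how Trafodion arranges its loggers and appenders is an
assumption here, and the LogString-to-std::string conversion may differ
in wide-character builds:

```
// Untested sketch: walk the root logger's appenders and ask any
// FileAppender for the file it writes to (log4cxx 0.10-era API).
#include <log4cxx/logger.h>
#include <log4cxx/fileappender.h>
#include <string>

std::string currentLogFileName()
{
  log4cxx::AppenderList appenders =
      log4cxx::Logger::getRootLogger()->getAllAppenders();
  for (size_t i = 0; i < appenders.size(); i++)
  {
    // Not every appender writes to a file; pick out the FileAppenders.
    log4cxx::FileAppender *fa =
        dynamic_cast<log4cxx::FileAppender *>(&(*appenders[i]));
    if (fa)
      return fa->getFile();  // LogString; may need charset conversion
  }
  return "";  // no file appender configured
}
```

If QRLogger attaches its appender to a child logger rather than the root,
the same walk would have to start from that logger instead.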



If we think Trafodion users are well-versed in the location and searching
of log files, I can satisfy myself by putting some string in the log file
that they can grep for, but this seems clumsy and inelegant to me. I’d
rather just tell them exactly where to look.



Thanks in advance for your help,



Dave

Re: Log files for SQL processes in Trafodion

Posted by Lalitha M <ma...@gmail.com>.
Hi,
     Do the user views need to be recreated because the entire metadata
has to be dropped and recreated?
     Ordering the views by create timestamp ensures that views which
depend on other views are created after the views they depend on, so
recreating them will not fail because of dependency errors.
Lalitha.

On Mon, Aug 22, 2016 at 5:44 PM, Hans Zeller <ha...@esgyn.com> wrote:

> [quoted messages trimmed]

Re: Log files for SQL processes in Trafodion

Posted by Hans Zeller <ha...@esgyn.com>.
Hi Dave,

The easiest solution may be to use the event_log_reader function to find
that information. Maybe the upgrade operation could ask the user to run a
query like this:

select message from udf(event_log_reader('f'))
where log_file_name = 'some filename'
      /* and maybe some other conditions */
order by log_ts;


One issue may be the limited length of the text for this function: the
message column is only 4000 bytes, so you may need to cut the view text
into pieces.

Another solution that I think Oracle and IBM have chosen is to have an
"operable" attribute of views. Rather than deleting them, one can make them
inoperable, so that the text is still stored in the metadata, even though
the view can no longer be used in queries. That can also preserve
privileges on the view.

Here is a comment from the code that describes the event_log_reader
function:

// -----------------------------------------------------------------
// Function to read event log files generated by Trafodion C++ code
//
// SQL Syntax to invoke this function:
//
//  select * from udf(event_log_reader( [options] ));
//
// The optional [options] argument is a character constant. The
// following options are supported:
//  f: add file name output columns (see below)
//  t: turn on tracing
//  p: force parallel execution on workstation environment with
//     virtual nodes (debug build only)
//
// Returned columns:
//
// log_ts        timestamp(6),
// severity      char(10 bytes) character set utf8,
// component     varchar(50 bytes) character set utf8,
// node_number   integer,
// cpu           integer,
// pin           integer,
// process_name  char(12 bytes) character set utf8,
// sql_code      integer,
// query_id      varchar(200 bytes) character set utf8,
// message       varchar(4000 bytes) character set utf8
//
// if option "f" was specified, we have four more columns:
//
// log_file_node integer not null,
// log_file_name varchar(200 bytes) character set utf8 not null,
// log_file_line integer not null,
// parse_status  char(2 bytes) character set utf8 not null
//
// (log_file_node, log_file_name, log_file_line) form a unique key
// for each result row. parse_status indicates whether there were
// any errors reading the information:
// '  ' (two blanks): no errors
// 'E'  (as first or second character): parse error
// 'T'  (as first or second character): truncation or over/underflow
//                                      occurred
// 'C'  (as first or second character): character conversion error
// -----------------------------------------------------------------



Hans

On Mon, Aug 22, 2016 at 5:38 PM, Sandhya Sundaresan <
sandhya.sundaresan@esgyn.com> wrote:

> [quoted messages trimmed]

RE: Log files for SQL processes in Trafodion

Posted by Sandhya Sundaresan <sa...@esgyn.com>.
Hi Dave,
   The mxosrvr logs to a log file in the $MY_SQROOT/logs directory, and
each time an mxosrvr starts up, a file of the form master_exec_<node
name>_<mxosrvr pid>.log is created. This is where all the log info goes.
So you could put out a message telling the user to look at that file.
Since you can get the current process's pid and node number
programmatically, you could put out this message, I guess.

The log file location and logging Trafodion uses is pretty standard -
nothing different from HBase and other products.
Thanks,
Sandhya

-----Original Message-----
From: Dave Birdsall [mailto:dave.birdsall@esgyn.com]
Sent: Monday, August 22, 2016 3:59 PM
To: dev@trafodion.incubator.apache.org
Subject: Log files for SQL processes in Trafodion

[quoted message trimmed]