Posted to common-user@hadoop.apache.org by An...@trendmicro.com on 2008/02/15 22:25:17 UTC

Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Hello,

This is my first time posting to this mailing list.  My question is really more of a MapReduce question
than a question about Hadoop HDFS itself.

To my understanding, the JobClient submits the Mapper and Reducer classes
to the cluster in a uniform way.  Can I assume this works more or less like a uniform scheduler
for all the tasks?

For example, suppose I have a 100-node cluster: 1 master (namenode) and 99 slaves (datanodes).
When I do
"JobClient.runJob(jconf)"
my understanding is that the JobClient will distribute the Mapper and Reducer classes uniformly across all 99 nodes.
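
To make this concrete, here is roughly the kind of driver I have in mind.  This is only a rough sketch
against the old JobConf-style API; the class name and job name are made up, and I use the identity
mapper/reducer from org.apache.hadoop.mapred.lib just to keep the sketch self-contained:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class MyJobDriver {
      public static void main(String[] args) throws Exception {
        JobConf jconf = new JobConf(MyJobDriver.class);
        jconf.setJobName("scheduling-test");

        // Identity classes only to keep the sketch runnable; a real job
        // would plug in its own Mapper and Reducer implementations here.
        jconf.setMapperClass(IdentityMapper.class);
        jconf.setReducerClass(IdentityReducer.class);
        jconf.setOutputKeyClass(LongWritable.class);
        jconf.setOutputValueClass(Text.class);

        // Input and output paths, using the old JobConf setters.
        jconf.setInputPath(new Path(args[0]));
        jconf.setOutputPath(new Path(args[1]));

        // Submit to the JobTracker and block until the job finishes.
        JobClient.runJob(jconf);
      }
    }

My question is whether those map and reduce tasks end up spread evenly across the 99 slaves,
or whether the scheduler does something smarter.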

All of the slaves will have the same hadoop-site.xml and hadoop-default.xml.
Here is my main concern: what if some of the nodes don't have the same hardware spec, such as
memory size or CPU speed?  Different purchase batches and repairs over time can cause this, for example.

Is there any way the JobClient can be made aware of this and submit a different number of tasks
to different slaves at start-up?
For example, some slaves have 16-core CPUs instead of 8 cores.  The problem I see here is that
on the 16-core machines, only 8 cores are used.

P.S. I'm looking into the JobClient source code and JobProfile/JobTracker to see if this can be done,
but I'm not sure whether I am on the right track.

If this topic belongs on core-dev@hadoop.apache.org instead, please let me know and I'll post it there.

Regards,
-Andy


RE: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Posted by Vivek Ratan <vi...@yahoo-inc.com>.
Andy, it's great that you're taking a deeper look at the scheduling code. I
don't think there is a complete document that describes what it does (the
code is the documentation, for better or for worse). But there has been some
concerted effort to improve the scheduler's performance and to make it take
other things into consideration (rack awareness, for example). Start with
http://issues.apache.org/jira/browse/HADOOP-2119, and also look at some of
the Jiras it references. This should give you an idea of what kinds of
changes people are looking at. The Jiras, especially 2119, should also have
enough discussion of how the scheduling currently works.

I would also recommend that you look at
http://issues.apache.org/jira/browse/HADOOP-2491. This Jira is meant to
capture a more generic discussion on how to do better scheduling within the
MR framework. You could probably add some of your suggestions to it. 
 

Re: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Posted by Eric Zhang <ez...@yahoo-inc.com>.
JobInProgress has package-level access, so it is not displayed in the
javadoc.  The source code comes with the Hadoop installation, under
${HADOOP_INSTALLATION_DIR}/src/java/org/apache/hadoop/mapred.

Eric


Re: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Posted by Andy Li <an...@gmail.com>.
Thanks for both inputs.  My question actually focuses more on what Vivek has
mentioned.

I would like to work on the JobClient to see how it submits jobs to
different file systems and slaves in the same Hadoop cluster.

I'm not sure whether there is a complete document explaining the scheduler
underneath Hadoop; if not, I'll write up what I learn from studying the
source code and submit it to the community once it is done.  Reviews and
comments are welcome.

As for the code, I couldn't find JobInProgress in the API index.  Could
anyone give me a pointer to it?  Thanks.


RE: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Posted by Vivek Ratan <vi...@yahoo-inc.com>.
I read Andy's question a little differently. For a given job, the JobTracker
decides which tasks go to which TaskTracker (the TTs ask for a task to run
and the JT decides which task is the most appropriate). Currently, the JT
favors a task whose input data is on the same host as the TT (if there is
more than one such task, it picks the one with the largest input size). It
also looks at failed tasks and certain other criteria. This is very basic
scheduling and there is a lot of scope for improvement. There currently is a
proposal to support rack awareness, so that if the JT can't find a task
whose input data is on the same host as the TT, it looks for a task whose
data is on the same rack. 

You can clearly get more ambitious with your scheduling algorithm. As you
mention, you could use other criteria for scheduling a task: available CPU
or memory, for example. You could assign tasks to hosts that are the most
'free', or aim to distribute tasks across racks, or try some other load
balancing techniques. I believe there are a few discussions on these methods
on Jira, but I don't think there's anything concrete yet. 

BTW, the code that decides what task to run is primarily in
JobInProgress::findNewTask(). 
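
To make the preference order above concrete, it boils down to something like
the following. This is only an illustrative sketch, not the actual code in
JobInProgress; the Candidate type and its fields are invented for the example:

    import java.util.List;

    // Illustrative only: these types and fields are made up for the sketch
    // and do not correspond to real Hadoop classes.
    class Candidate {
      String inputHost;  // host holding (a replica of) the task's input split
      long inputSize;    // size of that input split in bytes
    }

    class SchedulerSketch {
      // Pick a task for a TaskTracker running on ttHost.
      static Candidate pickTask(String ttHost, List<Candidate> runnable) {
        Candidate best = null;
        // First choice: a task whose input lives on the same host as the TT;
        // among those, take the one with the largest input.
        for (Candidate c : runnable) {
          if (c.inputHost.equals(ttHost)
              && (best == null || c.inputSize > best.inputSize)) {
            best = c;
          }
        }
        if (best != null) {
          return best;
        }
        // Otherwise hand out any runnable task. (The real JobTracker also
        // weighs failed tasks and other criteria before deciding.)
        return runnable.isEmpty() ? null : runnable.get(0);
      }
    }

The proposed rack awareness adds one more pass between those two steps: if no
host-local task is found, look for a task whose input is on the same rack
before falling back to an arbitrary one.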





Re: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop

Posted by Ted Dunning <td...@veoh.com>.
Core-user is the right place for this question.

Your description is mostly correct.  Jobs don't necessarily go to all of
your boxes in the cluster, but they may.

Non-uniform machine specs are a bit of a problem that is being (has been?)
addressed by allowing each machine to have a slightly different
hadoop-site.xml file.  That would allow different settings for storage
configuration and number of processes to run.
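
For instance, on the 16-core slaves you could raise the per-TaskTracker slot
counts in that machine's copy of hadoop-site.xml and leave the 8-core slaves
at the defaults. The property names below are the ones I remember; double-check
the hadoop-default.xml that ships with your release for the exact names and
current defaults:

    <configuration>
      <!-- On the 16-core slaves only; the 8-core slaves keep the defaults. -->
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>16</value>
      </property>
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>8</value>
      </property>
    </configuration>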

Even without that, you can level the load a bit by simply running more
tasks on the weak machines than you would otherwise prefer.  Most MapReduce
programs are pretty light on memory usage, so all that happens is that you
get less throughput on the weak machines.  Since there are normally more map
tasks than cores, this is no big deal; slow machines get fewer tasks, and
toward the end of the job their tasks are even replicated on other machines
in case they can be done more quickly.
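
That end-of-job re-execution is the speculative execution feature.  If it ever
gets in your way, I believe it can be turned off in the configuration; in the
releases I've used the switch is the mapred.speculative.execution property,
but again, check hadoop-default.xml for your version:

    <property>
      <name>mapred.speculative.execution</name>
      <value>false</value>
    </property>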

