Posted to mapreduce-user@hadoop.apache.org by "☼ R Nair (रविशंकर नायर)" <ra...@gmail.com> on 2015/08/21 19:35:17 UTC

Chaining MapReduce

All,

I have three mappers, followed by a reducer. I executed the MapReduce job
successfully. The reported output shows that the number of mappers executed is
1 and the number of reducers is also 1. Though the number of reducers is
correct, shouldn't the number of mappers be 3, since I have three mapper
classes connected by ChainMapper?

Output given below (snippet):

Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=8853
        Total time spent by all reduces in occupied slots (ms)=9900
        Total time spent by all map tasks (ms)=8853
        Total time spent by all reduce tasks (ms)=9900
        Total vcore-seconds taken by all map tasks=8853
        Total vcore-seconds taken by all reduce tasks=9900
        Total megabyte-seconds taken by all map tasks=9065472
        Total megabyte-seconds taken by all reduce tasks=10137600


My guess is that, since the output is passed along through the Context, the
internally chained mappers are not counted by the job counters. Am I correct?

Best, Ravion
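The counter is expected here: ChainMapper composes all chained mapper classes inside a single map task, piping one mapper's output into the next, and the job counters count launched tasks, not mapper classes. A toy sketch of that idea (plain Python, not Hadoop code; the mapper names and records are purely illustrative):

```python
# Toy model of ChainMapper: three "mapper" functions are composed and run
# inside ONE map task, so the framework would count a single launched map
# task even though every record passes through all three mappers.

def lower_mapper(key, value):
    # first mapper in the chain: normalize case
    yield key, value.lower()

def split_mapper(key, value):
    # second mapper: tokenize the line into (word, 1) pairs
    for word in value.split():
        yield word, 1

def filter_mapper(key, value):
    # third mapper: drop very short words
    if len(key) > 2:
        yield key, value

def run_chained_map_task(records, mappers):
    """Run one map task whose map logic is the chain of all mappers."""
    launched_map_tasks = 1  # one task, regardless of chain length
    for key, value in records:
        pairs = [(key, value)]
        for mapper in mappers:  # each record flows through every mapper
            pairs = [out for k, v in pairs for out in mapper(k, v)]
        for pair in pairs:
            yield pair

records = [(0, "To Be Or not to be")]
output = list(run_chained_map_task(
    records, [lower_mapper, split_mapper, filter_mapper]))
# only "not" survives the length filter -> [("not", 1)]
```

In real Hadoop, each mapper class is registered on the same job via ChainMapper.addMapper(), but the chain still executes within each map task, which is why "Launched map tasks=1" is the expected counter above.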

Hadoop Advanced Course

Posted by Gurmukh Singh <gu...@yahoo.com>.
Batch Starting Soon: Advanced Hadoop: Performance Tuning and Security

Duration: 21 hours

Module 1: Hadoop High Availability for HDFS and Resource Manager.
     Using both QJM (Quorum Journal Manager) and shared storage.

Module 2: Hadoop queue and pool details.
Fair and Capacity Scheduler details.
Dynamic pool configuration.
User management and LDAP integration.

Module 3: HDFS Advanced Features
Hadoop Centralised Caching.
Hadoop Storage Policy and Archive Storage.
Hadoop memory as storage tier.
HDFS Extended Attributes.
HDFS Short circuit Read.
Quotas per storage type.
Snapshots and HDFS over NFS

Module 4: In-depth Performance tuning and Cluster Sizing.
JVM tuning for Hadoop.
HDFS and MapReduce Tuning.
Network tuning.

Module 5: Hadoop Security.
Hadoop Knox.
Detailed Kerberos setup for securing Hadoop.
Hadoop Encryption.

Module 6: Hadoop Upgrade and Production use cases.
Hadoop Rolling upgrade.
Phoenix details and setup.
HDFS Configuration for multihoming.
Namenode Recovery scenarios
Common production Issues.

Important:
- Only for People with Solid Linux Fundamentals.
- Only for those who have prior Hadoop Experience.

Please reach out to trainings@netxillon.com for details


Re: Chaining MapReduce

Posted by Daniel Haviv <da...@gmail.com>.
Hi,
Data is divided among mappers depending on your InputFormat.
Usually the number of mappers = the number of input blocks (splits).

Daniel

> On 22 באוג׳ 2015, at 09:02, ☼ R Nair (रविशंकर नायर) <ra...@gmail.com> wrote:
> 
> Hi ,
> 
> The mappers depend on the source data only. But the data definitely goes through all the mappers, so I should get the number of map jobs as 3 in my output, right? Instead I am getting only one.
> 
> Thanks and regards,
> Ravion
> 
>> On Fri, Aug 21, 2015 at 1:35 PM, ☼ R Nair (रविशंकर नायर) <ra...@gmail.com> wrote:
>> All,
>> 
>> I have three mappers, followed by a reducer. I executed the MapReduce job successfully. The reported output shows that the number of mappers executed is 1 and the number of reducers is also 1. Though the number of reducers is correct, shouldn't the number of mappers be 3, since I have three mapper classes connected by ChainMapper?
>> 
>> O/P given below (snippet) :-
>> 
>> Job Counters 
>>         Launched map tasks=1
>>         Launched reduce tasks=1
>>         Data-local map tasks=1
>>         Total time spent by all maps in occupied slots (ms)=8853
>>         Total time spent by all reduces in occupied slots (ms)=9900
>>         Total time spent by all map tasks (ms)=8853
>>         Total time spent by all reduce tasks (ms)=9900
>>         Total vcore-seconds taken by all map tasks=8853
>>         Total vcore-seconds taken by all reduce tasks=9900
>>         Total megabyte-seconds taken by all map tasks=9065472
>>         Total megabyte-seconds taken by all reduce tasks=10137600
>> 
>> 
>> My guess is that, since the output is passed through the Context, the internally chained mappers are not counted by the job counters. Am I correct?
>> 
>> Best, Ravion
> 
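Daniel's point can be sketched numerically: the map-task count comes from the input size and the split (block) size, not from how many mapper classes are chained. A simplified toy calculation (illustrative numbers; the real FileInputFormat logic also considers min/max split sizes and a slop factor):

```python
import math

def num_map_tasks(file_size_bytes, split_size_bytes):
    # Approximation of FileInputFormat behavior: one map task per input
    # split, where the split size typically defaults to the block size.
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / split_size_bytes)

BLOCK = 128 * 1024 * 1024  # 128 MB, the Hadoop 2.x default block size

# A 50 MB input fits in one split -> one map task, no matter how many
# mapper classes are chained inside that task.
small = num_map_tasks(50 * 1024 * 1024, BLOCK)    # -> 1

# A 300 MB input spans three splits -> three map tasks.
large = num_map_tasks(300 * 1024 * 1024, BLOCK)   # -> 3
```

So with a small input file, "Launched map tasks=1" is exactly what the counters should show, even with three chained mappers.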


Re: Chaining MapReduce

Posted by "☼ R Nair (रविशंकर नायर)" <ra...@gmail.com>.
Hi ,

The mappers depend on the source data only. But the data definitely goes
through all the mappers, so I should get the number of map jobs as 3 in my
output, right? Instead I am getting only one.

Thanks and regards,
Ravion

On Fri, Aug 21, 2015 at 1:35 PM, ☼ R Nair (रविशंकर नायर) <
ravishankar.nair@gmail.com> wrote:

> All,
>
> I have three mappers, followed by a reducer. I executed the MapReduce job
> successfully. The reported output shows that the number of mappers executed is
> 1 and the number of reducers is also 1. Though the number of reducers is
> correct, shouldn't the number of mappers be 3, since I have three mapper
> classes connected by ChainMapper?
>
> O/P given below (snippet) :-
>
> Job Counters
>         Launched map tasks=1
>         Launched reduce tasks=1
>         Data-local map tasks=1
>         Total time spent by all maps in occupied slots (ms)=8853
>         Total time spent by all reduces in occupied slots (ms)=9900
>         Total time spent by all map tasks (ms)=8853
>         Total time spent by all reduce tasks (ms)=9900
>         Total vcore-seconds taken by all map tasks=8853
>         Total vcore-seconds taken by all reduce tasks=9900
>         Total megabyte-seconds taken by all map tasks=9065472
>         Total megabyte-seconds taken by all reduce tasks=10137600
>
>
> My guess is that, since the output is passed through the Context, the
> internally chained mappers are not counted by the job counters. Am I correct?
>
> Best, Ravion
>


Re: Chaining MapReduce

Posted by Shahab Yunus <sh...@gmail.com>.
What is the difference between the mappers? Is the input data supposed to go
to all the mappers, or does it depend on the source data?

Regards,
Shahab

On Fri, Aug 21, 2015 at 1:35 PM, ☼ R Nair (रविशंकर नायर) <
ravishankar.nair@gmail.com> wrote:

> All,
>
> I have three mappers, followed by a reducer. I executed the MapReduce job
> successfully. The reported output shows that the number of mappers executed is
> 1 and the number of reducers is also 1. Though the number of reducers is
> correct, shouldn't the number of mappers be 3, since I have three mapper
> classes connected by ChainMapper?
>
> O/P given below (snippet) :-
>
> Job Counters
>         Launched map tasks=1
>         Launched reduce tasks=1
>         Data-local map tasks=1
>         Total time spent by all maps in occupied slots (ms)=8853
>         Total time spent by all reduces in occupied slots (ms)=9900
>         Total time spent by all map tasks (ms)=8853
>         Total time spent by all reduce tasks (ms)=9900
>         Total vcore-seconds taken by all map tasks=8853
>         Total vcore-seconds taken by all reduce tasks=9900
>         Total megabyte-seconds taken by all map tasks=9065472
>         Total megabyte-seconds taken by all reduce tasks=10137600
>
>
> My guess is that, since the output is passed through the Context, the
> internally chained mappers are not counted by the job counters. Am I correct?
>
> Best, Ravion
>
