Posted to mapreduce-issues@hadoop.apache.org by "Ranjit Mathew (JIRA)" <ji...@apache.org> on 2010/12/07 11:48:11 UTC

[jira] Commented: (MAPREDUCE-2192) Implement gridmix system tests with different time intervals for MR streaming job traces.

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968673#action_12968673 ] 

Ranjit Mathew commented on MAPREDUCE-2192:
------------------------------------------

Just a minor comment: Instead of saying ??Trace file has not found??, say ??Trace file was not found??.

Other than that, the patch looks OK in conjunction with that for MAPREDUCE-2138.

Thanks for doing this.
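
By the way, for anyone trying the scenarios in the quoted description by hand, here is a rough sketch of how scenario 1 could be driven. The property keys are the ones I remember from the gridmix documentation and may not match the constants used by the system-test framework, the ioPath and trace locations are placeholders, and a 10-node cluster is assumed for the input-size calculation, so treat it as an illustration rather than the code in the patch:

{code:java}
// Rough sketch only -- not the test code from the patch. It assumes the
// Gridmix Tool class can be instantiated directly from client code and that
// the property keys below (taken from the gridmix documentation) are current.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.gridmix.Gridmix;
import org.apache.hadoop.util.ToolRunner;

public class GridmixStressScenario {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    conf.set("gridmix.job.type", "LOADJOB");
    conf.set("gridmix.user.resolve.class",
        "org.apache.hadoop.mapred.gridmix.SubmitterUserResolver");
    conf.set("gridmix.job-submission.policy", "STRESS");
    conf.setBoolean("gridmix.job-submission.use-queue-in-trace", true);
    // MINIMUM_FILE_SIZE = 150 MB for the generated input files.
    conf.setLong("gridmix.min.file.size", 150L * 1024 * 1024);

    // Input size = 250 MB * number of nodes; a 10-node cluster is assumed
    // here, so 2500m of data is generated. Both paths below are placeholders.
    int exitCode = ToolRunner.run(conf, new Gridmix(), new String[] {
        "-generate", "2500m",
        "/user/gridmix/io",                   // hypothetical ioPath on HDFS
        "file:///traces/2min-folded.json.gz"  // hypothetical 2-min folded trace
    });
    System.exit(exitCode);
  }
}
{code}

The REPLAY and SERIAL scenarios should only differ in the submission policy, the user resolver (plus the proxy-users file for RoundRobinUserResolver), and the data-generation settings, so the same skeleton ought to cover them as well.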

> Implement gridmix system tests with different time intervals for MR streaming job traces.
> -----------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2192
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2192
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>          Components: contrib/gridmix
>            Reporter: Vinay Kumar Thota
>            Assignee: Vinay Kumar Thota
>         Attachments: MAPREDUCE-2192.patch, MAPREDUCE-2192.patch
>
>
> Develop gridmix system tests for the scenarios below, using MR streaming job traces folded to different time intervals.
> 1. Generate input data based on the cluster size, create synthetic jobs from the 2-minute folded MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LOADJOB
> GRIDMIX_USER_RESOLVER = SubmitterUserResolver
> GRIDMIX_SUBMISSION_POLICY = STRESS
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = True
> Input Size = 250 MB * No. of nodes in cluster.
> MINIMUM_FILE_SIZE=150MB
> TRACE_FILE = 2 min folded trace.
> Verify the JobStatus and input split size of each job, and the job summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) after execution completes.
> 2. Generate input data based on the cluster size, create synthetic jobs from the 3-minute folded MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LoadJob
> GRIDMIX_USER_RESOLVER = RoundRobinUserResolver
> GRIDMIX_BYTES_PER_FILE = 150 MB
> GRIDMIX_SUBMISSION_POLICY = REPLAY
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = True
> Input Size = 200 MB * No. of nodes in cluster.
> PROXY_USERS = proxy users file path
> TRACE_FILE = 3 min folded trace.
> Verify the JobStatus and input split size of each job, and the job summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) after execution completes.
> 3. Generate input data based on the cluster size, create synthetic jobs from the 5-minute folded MR streaming job trace, and submit the jobs with the arguments below.
> GRIDMIX_JOB_TYPE = LoadJob
> GRIDMIX_USER_RESOLVER = SubmitterUserResolver
> GRIDMIX_SUBMISSION_POLICY = SERIAL
> GRIDMIX_JOB_SUBMISSION_QUEUE_IN_TRACE = false
> GRIDMIX_KEY_FRC = 0.5f
> Input Size = 200MB * No. of nodes in cluster.
> TRACE_FILE = 5 min folded trace.
> Verify the JobStatus of each job and the job summary (QueueName, UserName, StartTime, FinishTime, maps, reducers, counters, etc.) after execution completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.