Posted to common-user@hadoop.apache.org by Satoshi Yamada <bi...@yahoo.co.jp> on 2010/11/15 15:17:46 UTC

NNBench in Hadoop 0.21.0

Hi,

I just ran NNBench on my cluster with hadoop-0.21.0 and got exceptions; no file operations succeeded.

I have run NNBench on the same cluster with hadoop-0.20.2 without any exceptions or maps that missed the barrier.

So I would like to ask whether you have any tips for running NNBench on hadoop-0.21.0, or whether this is a bug.
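For completeness, the equivalent invocation with every option spelled out would look something like the lines below (option names are taken from the NNBench usage message in 0.20.x; I am assuming they are unchanged in 0.21.0). The values match the "Test Inputs" section in the output further down, i.e. the defaults I ran with:

    ./bin/hadoop jar hadoop-mapred-test-0.21.0.jar nnbench \
        -operation create_write \
        -maps 1 -reduces 1 \
        -blockSize 1 -bytesToWrite 0 -bytesPerChecksum 1 \
        -numberOfFiles 1 -replicationFactorPerFile 1 \
        -baseDir /benchmarks/NNBench \
        -readFileAfterOpen false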


Thanks in advance.
satoshi

Here are the outputs and logs.


 ***********************************************************
 
[hadoop@guest0 hadoop-0.21.0]$ ./bin/hadoop jar hadoop-mapred-test-0.21.0.jar nnbench -operation create_write
NameNode Benchmark 0.4
10/11/15 12:57:47 INFO hdfs.NNBench: Test Inputs: 
10/11/15 12:57:47 INFO hdfs.NNBench:            Test Operation: create_write
10/11/15 12:57:47 INFO hdfs.NNBench:                Start time: 2010-11-15 12:59:47,598
10/11/15 12:57:47 INFO hdfs.NNBench:            Number of maps: 1
10/11/15 12:57:47 INFO hdfs.NNBench:         Number of reduces: 1
10/11/15 12:57:47 INFO hdfs.NNBench:                Block Size: 1
10/11/15 12:57:47 INFO hdfs.NNBench:            Bytes to write: 0
10/11/15 12:57:47 INFO hdfs.NNBench:        Bytes per checksum: 1
10/11/15 12:57:47 INFO hdfs.NNBench:           Number of files: 1
10/11/15 12:57:47 INFO hdfs.NNBench:        Replication factor: 1
10/11/15 12:57:47 INFO hdfs.NNBench:                  Base dir: /benchmarks/NNBench
10/11/15 12:57:47 INFO hdfs.NNBench:      Read file after open: false
10/11/15 12:57:47 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
10/11/15 12:57:47 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
10/11/15 12:57:48 INFO hdfs.NNBench: Deleting data directory
10/11/15 12:57:48 INFO hdfs.NNBench: Creating 1 control files
10/11/15 12:57:52 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/11/15 12:57:52 INFO mapred.FileInputFormat: Total input paths to process : 1
10/11/15 12:57:52 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
10/11/15 12:57:52 INFO mapreduce.JobSubmitter: number of splits:1
10/11/15 12:57:53 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
10/11/15 12:57:53 INFO mapreduce.Job: Running job: job_201011151256_0001
10/11/15 12:57:54 INFO mapreduce.Job:  map 0% reduce 0%
10/11/15 12:58:09 INFO mapreduce.Job:  map 66% reduce 0%
10/11/15 12:59:52 INFO mapreduce.Job:  map 100% reduce 0%
10/11/15 12:59:58 INFO mapreduce.Job:  map 100% reduce 100%
10/11/15 13:00:00 INFO mapreduce.Job: Job complete: job_201011151256_0001
10/11/15 13:00:00 INFO mapreduce.Job: Counters: 33
        FileInputFormatCounters
                BYTES_READ=38
        FileSystemCounters
                FILE_BYTES_READ=187
                FILE_BYTES_WRITTEN=406
                HDFS_BYTES_READ=244
                HDFS_BYTES_WRITTEN=167
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        Job Counters 
                Data-local map tasks=1
                Total time spent by all maps waiting after reserving slots (ms)=0
                Total time spent by all reduces waiting after reserving slots (ms)=0
                SLOTS_MILLIS_MAPS=111798
                SLOTS_MILLIS_REDUCES=3516
                Launched map tasks=1
                Launched reduce tasks=1
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Failed Shuffles=0
                GC time elapsed (ms)=79
                Map input records=1
                Map output bytes=167
                Map output records=7
                Merged Map outputs=1
                Reduce input groups=7
                Reduce input records=7
                Reduce output records=7
                Reduce shuffle bytes=187
                Shuffled Maps =1
                Spilled Records=14
                SPLIT_RAW_BYTES=120
10/11/15 13:00:00 INFO hdfs.NNBench: -------------- NNBench -------------- : 
10/11/15 13:00:00 INFO hdfs.NNBench:                                Version: NameNode Benchmark 0.4
10/11/15 13:00:00 INFO hdfs.NNBench:                            Date & time: 2010-11-15 13:00:00,570
10/11/15 13:00:00 INFO hdfs.NNBench: 
10/11/15 13:00:00 INFO hdfs.NNBench:                         Test Operation: create_write
10/11/15 13:00:00 INFO hdfs.NNBench:                             Start time: 2010-11-15 12:59:47,598
10/11/15 13:00:00 INFO hdfs.NNBench:                            Maps to run: 1
10/11/15 13:00:00 INFO hdfs.NNBench:                         Reduces to run: 1
10/11/15 13:00:00 INFO hdfs.NNBench:                     Block Size (bytes): 1
10/11/15 13:00:00 INFO hdfs.NNBench:                         Bytes to write: 0
10/11/15 13:00:00 INFO hdfs.NNBench:                     Bytes per checksum: 1
10/11/15 13:00:00 INFO hdfs.NNBench:                        Number of files: 1
10/11/15 13:00:00 INFO hdfs.NNBench:                     Replication factor: 1
10/11/15 13:00:00 INFO hdfs.NNBench:             Successful file operations: 0
10/11/15 13:00:00 INFO hdfs.NNBench: 
10/11/15 13:00:00 INFO hdfs.NNBench:         # maps that missed the barrier: 0
10/11/15 13:00:00 INFO hdfs.NNBench:                           # exceptions: 1000
10/11/15 13:00:00 INFO hdfs.NNBench: 
10/11/15 13:00:00 INFO hdfs.NNBench:                TPS: Create/Write/Close: 0
10/11/15 13:00:00 INFO hdfs.NNBench: Avg exec time (ms): Create/Write/Close: Infinity
10/11/15 13:00:00 INFO hdfs.NNBench:             Avg Lat (ms): Create/Write: NaN
10/11/15 13:00:00 INFO hdfs.NNBench:                    Avg Lat (ms): Close: NaN
10/11/15 13:00:00 INFO hdfs.NNBench: 
10/11/15 13:00:00 INFO hdfs.NNBench:                  RAW DATA: AL Total #1: 0
10/11/15 13:00:00 INFO hdfs.NNBench:                  RAW DATA: AL Total #2: 0
10/11/15 13:00:00 INFO hdfs.NNBench:               RAW DATA: TPS Total (ms): 146
10/11/15 13:00:00 INFO hdfs.NNBench:        RAW DATA: Longest Map Time (ms): 146.0
10/11/15 13:00:00 INFO hdfs.NNBench:                    RAW DATA: Late maps: 0
10/11/15 13:00:00 INFO hdfs.NNBench:              RAW DATA: # of exceptions: 1000
10/11/15 13:00:00 INFO hdfs.NNBench: 

 
 ***********************************************************
 

Logs from the tasktracker follow; I found some error messages in them.


 ***********************************************************
 
2010-11-15 13:05:00,897 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201011151256_0002_m_000002_0 task's state:UNASSIGNED
2010-11-15 13:05:00,898 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201011151256_0002_m_000002_0 which needs 1 slots
2010-11-15 13:05:00,898 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 64 and trying to launch attempt_201011151256_0002_m_000002_0 which needs 1 slots
2010-11-15 13:05:00,898 INFO org.apache.hadoop.mapreduce.server.tasktracker.Localizer: User-directories for the user hadoop are already initialized on this TT. Not doing anything.
2010-11-15 13:05:01,167 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201011151256_0002_m_398522298
2010-11-15 13:05:01,167 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201011151256_0002_m_398522298 spawned.
2010-11-15 13:05:02,004 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201011151256_0002_m_398522298 given task: attempt_201011151256_0002_m_000002_0
2010-11-15 13:05:02,522 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000002_0 0.0% setup > map
2010-11-15 13:05:02,523 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201011151256_0002_m_000002_0 is done.
2010-11-15 13:05:02,523 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201011151256_0002_m_000002_0  was -1
2010-11-15 13:05:02,526 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 64
2010-11-15 13:05:02,812 WARN org.apache.hadoop.mapreduce.util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -2477: No such process

2010-11-15 13:05:02,812 INFO org.apache.hadoop.mapreduce.util.ProcessTree: Sending signal to all members of process group -2477: SIGTERM. Exit code 1
2010-11-15 13:05:03,916 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201011151256_0002_m_000000_0 task's state:UNASSIGNED
2010-11-15 13:05:03,917 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201011151256_0002_m_000002_0
2010-11-15 13:05:03,917 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201011151256_0002_m_000002_0
2010-11-15 13:05:03,918 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201011151256_0002_m_000002_0 done; removing files.
2010-11-15 13:05:03,919 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201011151256_0002_m_000002_0 not found in cache
2010-11-15 13:05:03,920 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201011151256_0002_m_000000_0 which needs 1 slots
2010-11-15 13:05:03,920 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 64 and trying to launch attempt_201011151256_0002_m_000000_0 which needs 1 slots
2010-11-15 13:05:03,920 INFO org.apache.hadoop.mapreduce.server.tasktracker.Localizer: User-directories for the user hadoop are already initialized on this TT. Not doing anything.
2010-11-15 13:05:04,061 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201011151256_0002_m_-1524314987
2010-11-15 13:05:04,062 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201011151256_0002_m_-1524314987 spawned.
2010-11-15 13:05:04,926 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201011151256_0002_m_-1524314987 given task: attempt_201011151256_0002_m_000000_0
2010-11-15 13:05:07,885 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201011151256_0002_m_398522298 exited with exit code 0. Number of tasks it ran: 1
2010-11-15 13:05:11,386 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000000_0 0.667% hdfs://guest0:64310/benchmarks/NNBench/control/NNBench_Controlfile_0:0+124 > map
2010-11-15 13:06:58,538 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000000_0 0.667% hdfs://guest0:64310/benchmarks/NNBench/control/NNBench_Controlfile_0:0+124 > map
2010-11-15 13:06:58,560 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000000_0 0.667% hdfs://guest0:64310/benchmarks/NNBench/control/NNBench_Controlfile_0:0+124 > sort
2010-11-15 13:06:58,560 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201011151256_0002_m_000000_0 is done.
2010-11-15 13:06:58,560 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201011151256_0002_m_000000_0  was 187
2010-11-15 13:06:58,561 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 64
2010-11-15 13:06:58,790 WARN org.apache.hadoop.mapreduce.util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -2518: No such process

2010-11-15 13:06:58,790 INFO org.apache.hadoop.mapreduce.util.ProcessTree: Sending signal to all members of process group -2518: SIGTERM. Exit code 1
2010-11-15 13:07:01,239 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201011151256_0002_r_000000_0 task's state:UNASSIGNED
2010-11-15 13:07:01,239 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201011151256_0002_r_000000_0 which needs 1 slots
2010-11-15 13:07:01,239 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 8 and trying to launch attempt_201011151256_0002_r_000000_0 which needs 1 slots
2010-11-15 13:07:01,239 INFO org.apache.hadoop.mapreduce.server.tasktracker.Localizer: User-directories for the user hadoop are already initialized on this TT. Not doing anything.
2010-11-15 13:07:01,314 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201011151256_0002_r_148464420
2010-11-15 13:07:01,315 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201011151256_0002_r_148464420 spawned.
2010-11-15 13:07:02,187 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201011151256_0002_r_148464420 given task: attempt_201011151256_0002_r_000000_0
2010-11-15 13:07:03,298 INFO org.apache.hadoop.mapred.TaskTracker: Sent out 187 bytes to reduce 0 from map: attempt_201011151256_0002_m_000000_0 given 187/183
2010-11-15 13:07:03,298 INFO org.apache.hadoop.mapred.TaskTracker: Shuffled 1maps (mapIds=attempt_201011151256_0002_m_000000_0) to reduce 0 in 3s
2010-11-15 13:07:03,299 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 192.168.122.21:50060, dest: 192.168.122.21:47877, maps: 1, op: MAPRED_SHUFFLE, reduceID: 0, duration: 3
2010-11-15 13:07:03,308 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_r_000000_0 0.0% 1 / 1 copied.
2010-11-15 13:07:03,342 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_r_000000_0 0.0% 1 / 1 copied.
2010-11-15 13:07:03,471 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201011151256_0002_r_000000_0 is in commit-pending, task state:COMMIT_PENDING
2010-11-15 13:07:03,471 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_r_000000_0 0.0% 1 / 1 copied.
2010-11-15 13:07:03,865 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201011151256_0002_m_-1524314987 exited with exit code 0. Number of tasks it ran: 1
2010-11-15 13:07:04,300 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201011151256_0002_r_000000_0
2010-11-15 13:07:04,503 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_r_000000_0 1.0% finished min:mapStartTimeTPmS ::host = guest2 > reduce
2010-11-15 13:07:04,510 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201011151256_0002_r_000000_0 is done.
2010-11-15 13:07:04,510 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201011151256_0002_r_000000_0  was -1
2010-11-15 13:07:04,512 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 8
2010-11-15 13:07:04,778 WARN org.apache.hadoop.mapreduce.util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -2570: No such process

2010-11-15 13:07:04,778 INFO org.apache.hadoop.mapreduce.util.ProcessTree: Sending signal to all members of process group -2570: SIGTERM. Exit code 1
2010-11-15 13:07:07,484 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201011151256_0002_m_000001_0 task's state:UNASSIGNED
2010-11-15 13:07:07,484 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201011151256_0002_m_000001_0 which needs 1 slots
2010-11-15 13:07:07,484 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 64 and trying to launch attempt_201011151256_0002_m_000001_0 which needs 1 slots
2010-11-15 13:07:07,484 INFO org.apache.hadoop.mapreduce.server.tasktracker.Localizer: User-directories for the user hadoop are already initialized on this TT. Not doing anything.
2010-11-15 13:07:07,512 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201011151256_0002_r_000000_0
2010-11-15 13:07:07,513 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201011151256_0002_r_000000_0
2010-11-15 13:07:07,513 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201011151256_0002_r_000000_0 done; removing files.
2010-11-15 13:07:07,584 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201011151256_0002_m_982149053
2010-11-15 13:07:07,585 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201011151256_0002_m_982149053 spawned.
2010-11-15 13:07:08,538 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201011151256_0002_m_982149053 given task: attempt_201011151256_0002_m_000001_0
2010-11-15 13:07:08,962 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000001_0 0.0% 
2010-11-15 13:07:09,050 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201011151256_0002_m_000001_0 0.0% cleanup > map
2010-11-15 13:07:09,054 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201011151256_0002_m_000001_0 is done.
2010-11-15 13:07:09,054 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201011151256_0002_m_000001_0  was -1
2010-11-15 13:07:09,078 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 64
2010-11-15 13:07:09,183 WARN org.apache.hadoop.mapreduce.util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -2629: No such process

2010-11-15 13:07:09,183 INFO org.apache.hadoop.mapreduce.util.ProcessTree: Sending signal to all members of process group -2629: SIGTERM. Exit code 1
2010-11-15 13:07:09,800 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201011151256_0002_r_148464420 exited with exit code 0. Number of tasks it ran: 1
2010-11-15 13:07:10,573 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201011151256_0002
2010-11-15 13:07:10,574 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201011151256_0002_m_000000_0 done; removing files.
2010-11-15 13:07:10,576 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201011151256_0002_m_000001_0 done; removing files.
2010-11-15 13:07:10,577 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201011151256_0002_m_000001_0 not found in cache
2010-11-15 13:07:10,577 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201011151256_0002 for user-log deletion with retainTimeStamp:1289912830577
2010-11-15 13:07:14,206 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201011151256_0002_m_982149053 exited with exit code 0. Number of tasks it ran: 1


 
  ***********************************************************

Namenode logs are below.
 
  ***********************************************************

2010-11-15 13:04:58,811 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_7952178289902603971_1029 to 192.168.122.21:50010 
2010-11-15 13:04:58,811 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 59 Total time for transactions(ms): 2Number of transactions batched in Syncs: 1 Number of syncs: 45 SyncTimes(ms): 466 
2010-11-15 13:04:58,812 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=deletesrc=/benchmarks/NNBench/control  dst=null        perm=null
2010-11-15 13:04:58,813 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-5192109264753098555_1037 to 192.168.122.21:50010 
2010-11-15 13:04:58,814 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=deletesrc=/benchmarks/NNBench/output   dst=null        perm=null
2010-11-15 13:04:58,833 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=createsrc=/benchmarks/NNBench/control/NNBench_Controlfile_0    dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:58,851 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /benchmarks/NNBench/control/NNBench_Controlfile_0. blk_667327631922096856_1040{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:58,896 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_667327631922096856_1040{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:58,897 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /benchmarks/NNBench/control/NNBench_Controlfile_0 is closed by DFSClient_2064418803
2010-11-15 13:04:58,994 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=mkdirssrc=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002      dst=null        perm=hadoop:supergroup:rwxr-xr-x
2010-11-15 13:04:58,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setPermission  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002     dst=null        perm=hadoop:supergroup:rwx------
2010-11-15 13:04:59,026 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=createsrc=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar      dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,068 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar. blk_-5982330171978228296_1041{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,102 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-5982330171978228296_1041{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,104 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar is closed by DFSClient_2064418803
2010-11-15 13:04:59,106 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Increasing replication for file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar. New replication is 10
2010-11-15 13:04:59,107 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setReplication src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar     dst=null        perm=null
2010-11-15 13:04:59,109 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setPermission  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar     dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,132 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=listStatus     src=/benchmarks/NNBench/control dst=null        perm=null
2010-11-15 13:04:59,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=open  src=/benchmarks/NNBench/control/NNBench_Controlfile_0    dst=null        perm=null
2010-11-15 13:04:59,151 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=createsrc=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split    dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,153 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setPermission  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split   dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,156 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Increasing replication for file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split. New replication is 10
2010-11-15 13:04:59,156 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setReplication src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split   dst=null        perm=null
2010-11-15 13:04:59,158 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split. blk_-8949513092476944942_1042{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,162 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-8949513092476944942_1042{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,164 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split is closed by DFSClient_2064418803
2010-11-15 13:04:59,173 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=createsrc=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.splitmetainfo    dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setPermission  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.splitmetainfo   dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,554 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.splitmetainfo. blk_4045193522256115073_1043{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,561 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_4045193522256115073_1043{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,564 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.splitmetainfo is closed by DFSClient_2064418803
2010-11-15 13:04:59,574 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=createsrc=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml      dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,578 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=setPermission  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml     dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,700 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml. blk_-4637967787808166403_1044{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,705 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-4637967787808166403_1044{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,707 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml is closed by DFSClient_2064418803
2010-11-15 13:04:59,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=open  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml      dst=null        perm=null
2010-11-15 13:04:59,756 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=mkdirssrc=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002     dst=null        perm=hadoop:supergroup:rwxr-xr-x
2010-11-15 13:04:59,757 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=setPermission  src=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002    dst=null        perm=hadoop:supergroup:rwx------
2010-11-15 13:04:59,763 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=createsrc=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/job-info    dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,765 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/job-info. blk_-5341196869405167887_1045{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,773 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-5341196869405167887_1045{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,773 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/job-info is closed by DFSClient_-827684356
2010-11-15 13:04:59,823 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=createsrc=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/jobToken    dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:04:59,825 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/jobToken. blk_6492009810912882763_1046{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
 replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:04:59,829 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_6492009810912882763_1046{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
 size 0
2010-11-15 13:04:59,831 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/jobToken is closed by DFSClient_-827684356
2010-11-15 13:04:59,834 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=open  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.splitmetainfo    dst=null        perm=null
2010-11-15 13:05:00,194 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 192.168.122.21:50010 to delete  blk_7952178289902603971_1029 blk_-5192109264753098555_1037
2010-11-15 13:05:00,905 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=open  src=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/jobToken    dst=null        perm=null
2010-11-15 13:05:00,939 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=open  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.xml      dst=null        perm=null
2010-11-15 13:05:01,019 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=open  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.jar      dst=null        perm=null
2010-11-15 13:05:02,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=mkdirs      src=/benchmarks/NNBench/output/_temporary        dst=null        perm=hadoop:supergroup:rwxr-xr-x
2010-11-15 13:05:05,368 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=open  src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002/job.split    dst=null        perm=null
2010-11-15 13:05:05,405 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=open  src=/benchmarks/NNBench/control/NNBench_Controlfile_0    dst=null        perm=null
2010-11-15 13:07:03,358 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 95 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 1 Number of syncs: 72 SyncTimes(ms): 871 
2010-11-15 13:07:03,359 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=mkdirs      src=/benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0  dst=null        perm=hadoop:supergroup:rwxr-xr-x
2010-11-15 13:07:03,363 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=create      src=/benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0/part-00000       dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:07:03,376 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0/part-00000. blk_-4563027756382672617_1047{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:07:03,461 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-4563027756382672617_1047{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]} size 0
2010-11-15 13:07:03,463 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0/part-00000 is closed by DFSClient_attempt_201011151256_0002_r_000000_0
2010-11-15 13:07:04,484 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=listStatus     src=/benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0 dst=null        perm=null
2010-11-15 13:07:04,489 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=mkdirs      src=/benchmarks/NNBench/output   dst=null        perm=hadoop:supergroup:rwxr-xr-x
2010-11-15 13:07:04,494 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=rename      src=/benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0/part-00000       dst=/benchmarks/NNBench/output/part-00000        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:07:04,497 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=delete      src=/benchmarks/NNBench/output/_temporary/_attempt_201011151256_0002_r_000000_0  dst=null        perm=null
2010-11-15 13:07:08,968 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=delete      src=/benchmarks/NNBench/output/_temporary        dst=null        perm=null
2010-11-15 13:07:08,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=create      src=/benchmarks/NNBench/output/_SUCCESS  dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:07:08,982 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /benchmarks/NNBench/output/_SUCCESS is closed by DFSClient_attempt_201011151256_0002_m_000001_0
2010-11-15 13:07:09,034 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-5982330171978228296_1041 to 192.168.122.21:50010 
2010-11-15 13:07:09,034 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-8949513092476944942_1042 to 192.168.122.21:50010 
2010-11-15 13:07:09,034 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_4045193522256115073_1043 to 192.168.122.21:50010 
2010-11-15 13:07:09,034 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-4637967787808166403_1044 to 192.168.122.21:50010 
2010-11-15 13:07:09,035 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.21      cmd=delete      src=/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201011151256_0002      dst=null        perm=null
2010-11-15 13:07:09,374 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 192.168.122.21:50010 to delete  blk_4045193522256115073_1043 blk_-8949513092476944942_1042 blk_-5982330171978228296_1041 blk_-4637967787808166403_1044
2010-11-15 13:07:10,512 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=create      src=/jobtracker/jobsInfo/job_201011151256_0002.info      dst=null        perm=hadoop:supergroup:rw-r--r--
2010-11-15 13:07:10,517 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /jobtracker/jobsInfo/job_201011151256_0002.info. blk_-63779571351333019_1049{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]}
2010-11-15 13:07:10,533 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 192.168.122.21:50010 is added to blk_-63779571351333019_1049{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.122.21:50010|RBW]]} size 0
2010-11-15 13:07:10,535 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /jobtracker/jobsInfo/job_201011151256_0002.info is closed by DFSClient_-827684356
2010-11-15 13:07:10,600 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-5341196869405167887_1045 to 192.168.122.21:50010 
2010-11-15 13:07:10,600 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_6492009810912882763_1046 to 192.168.122.21:50010 
2010-11-15 13:07:10,601 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=delete      src=/home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002     dst=null        perm=null
2010-11-15 13:07:11,418 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=open  src=/jobtracker/jobsInfo/job_201011151256_0002.info      dst=null        perm=null
2010-11-15 13:07:11,433 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.212     cmd=open  src=/jobtracker/jobsInfo/job_201011151256_0002.info      dst=null        perm=null
2010-11-15 13:07:11,475 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop      ip=/192.168.122.242     cmd=open  src=/benchmarks/NNBench/output/part-00000        dst=null        perm=null
2010-11-15 13:07:12,406 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 192.168.122.21:50010 to delete  blk_-5341196869405167887_1045 blk_6492009810912882763_1046


**********************************************

jobtracker logs below

***********************************************

2010-11-15 13:04:59,776 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201011151256_0002
2010-11-15 13:04:59,776 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201011151256_0002
2010-11-15 13:04:59,783 INFO org.apache.hadoop.mapreduce.jobhistory.JobHistory: SetupWriter, creating file file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/job_201011151256_0002_hadoop
2010-11-15 13:04:59,793 INFO org.apache.hadoop.mapred.JobTracker: Job job_201011151256_0002 added successfully for user 'hadoop' to queue 'default'
2010-11-15 13:04:59,803 INFO org.apache.hadoop.mapreduce.jobhistory.JobHistory: LogDirConfPath is file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/job_201011151256_0002_conf.xml
2010-11-15 13:04:59,832 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with users keys in /home/hadoop/hadoop-dist2/filesystem/mapred/system/job_201011151256_0002/jobToken
2010-11-15 13:04:59,837 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201011151256_0002 = 124. Number of splits = 1
2010-11-15 13:04:59,837 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201011151256_0002_m_000000 has split on node:/default-rack/guest2
2010-11-15 13:04:59,837 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201011151256_0002 initialized successfully with 1 map tasks and 1 reduce tasks.
2010-11-15 13:05:00,894 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201011151256_0002_m_000002_0' to tip task_201011151256_0002_m_000002, for tracker 'tracker_guest2:localhost.localdomain/127.0.0.1:41953'
2010-11-15 13:05:03,908 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201011151256_0002_m_000002_0' has completed task_201011151256_0002_m_000002 successfully.
2010-11-15 13:05:03,912 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201011151256_0002_m_000000_0' to tip task_201011151256_0002_m_000000, for tracker 'tracker_guest2:localhost.localdomain/127.0.0.1:41953'
2010-11-15 13:05:03,913 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201011151256_0002_m_000000
2010-11-15 13:07:01,232 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201011151256_0002_m_000000_0' has completed task_201011151256_0002_m_000000 successfully.
2010-11-15 13:07:01,236 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201011151256_0002_r_000000_0' to tip task_201011151256_0002_r_000000, for tracker 'tracker_guest2:localhost.localdomain/127.0.0.1:41953'
2010-11-15 13:07:07,476 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201011151256_0002_r_000000_0' has completed task_201011151256_0002_r_000000 successfully.
2010-11-15 13:07:07,481 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_201011151256_0002_m_000001_0' to tip task_201011151256_0002_m_000001, for tracker 'tracker_guest2:localhost.localdomain/127.0.0.1:41953'
2010-11-15 13:07:10,496 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201011151256_0002_m_000001_0' has completed task_201011151256_0002_m_000001 successfully.
2010-11-15 13:07:10,499 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201011151256_0002 has completed successfully.
2010-11-15 13:07:10,500 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary: jobId=job_201011151256_0002,submitTime=1289826299709,launchTime=1289826299837,finishTime=1289826430499,numMaps=1,numSlotsPerMap=1,numReduces=1,numSlotsPerReduce=1,user=hadoop,queue=default,status=SUCCEEDED,mapSlotSeconds=117,reduceSlotsSeconds=3,clusterMapCapacity=64,clusterReduceCapacity=8
2010-11-15 13:07:10,571 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201011151256_0002_m_000000_0'
2010-11-15 13:07:10,572 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201011151256_0002_m_000001_0'
2010-11-15 13:07:10,572 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201011151256_0002_m_000002_0'
2010-11-15 13:07:10,572 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201011151256_0002_r_000000_0'
2010-11-15 13:07:10,573 INFO org.apache.hadoop.mapreduce.jobhistory.JobHistory: Moving file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/job_201011151256_0002_hadoop to file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/done/job_201011151256_0002_hadoop
2010-11-15 13:07:10,616 INFO org.apache.hadoop.mapreduce.jobhistory.JobHistory: Moving file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/job_201011151256_0002_conf.xml to file:/home/hadoop/hadoop-dist2/hadoop-0.21.0/logs/history/done/job_201011151256_0002_conf.xml
2010-11-15 13:07:10,646 INFO org.apache.hadoop.mapred.JobInProgress: Deleting localized job conf at /home/hadoop/hadoop-dist2/hadoop-0.21.0/bin/../logs/job_201011151256_0002_conf.xml
2010-11-15 13:07:10,647 INFO org.apache.hadoop.mapred.JobTracker: Retired job with id: 'job_201011151256_0002' of user 'hadoop'


******************************************

datanode logs

*********************************************

2010-11-15 13:04:58,890 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_667327631922096856_1040 src: /192.168.122.242:39421 dest: /192.168.122.21:50010
2010-11-15 13:04:58,895 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.242:39421, dest: /192.168.122.21:50010, bytes: 124, op: HDFS_WRITE, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_667327631922096856_1040, duration: 2570000
2010-11-15 13:04:58,895 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_667327631922096856_1040 terminating
2010-11-15 13:04:59,070 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5982330171978228296_1041 src: /192.168.122.242:39423 dest: /192.168.122.21:50010
2010-11-15 13:04:59,101 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.242:39423, dest: /192.168.122.21:50010, bytes: 1492025, op: HDFS_WRITE, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-5982330171978228296_1041, duration: 28422000
2010-11-15 13:04:59,101 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-5982330171978228296_1041 terminating
2010-11-15 13:04:59,159 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-8949513092476944942_1042 src: /192.168.122.242:39424 dest: /192.168.122.21:50010
2010-11-15 13:04:59,162 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.242:39424, dest: /192.168.122.21:50010, bytes: 127, op: HDFS_WRITE, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-8949513092476944942_1042, duration: 1103000
2010-11-15 13:04:59,162 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-8949513092476944942_1042 terminating
2010-11-15 13:04:59,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_4045193522256115073_1043 src: /192.168.122.242:39425 dest: /192.168.122.21:50010
2010-11-15 13:04:59,561 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.242:39425, dest: /192.168.122.21:50010, bytes: 20, op: HDFS_WRITE, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_4045193522256115073_1043, duration: 2213000
2010-11-15 13:04:59,562 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_4045193522256115073_1043 terminating
2010-11-15 13:04:59,700 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-4637967787808166403_1044 src: /192.168.122.242:39426 dest: /192.168.122.21:50010
2010-11-15 13:04:59,704 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.242:39426, dest: /192.168.122.21:50010, bytes: 33698, op: HDFS_WRITE, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-4637967787808166403_1044, duration: 1968000
2010-11-15 13:04:59,705 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-4637967787808166403_1044 terminating
2010-11-15 13:04:59,725 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.212:48816, bytes: 33962, op: HDFS_READ, cliID: DFSClient_-2033075547, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-4637967787808166403_1044, duration: 647000
2010-11-15 13:04:59,766 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5341196869405167887_1045 src: /192.168.122.212:48817 dest: /192.168.122.21:50010
2010-11-15 13:04:59,769 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.212:48817, dest: /192.168.122.21:50010, bytes: 118, op: HDFS_WRITE, cliID: DFSClient_-827684356, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-5341196869405167887_1045, duration: 1228000
2010-11-15 13:04:59,769 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-5341196869405167887_1045 terminating
2010-11-15 13:04:59,826 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_6492009810912882763_1046 src: /192.168.122.212:48818 dest: /192.168.122.21:50010
2010-11-15 13:04:59,829 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.212:48818, dest: /192.168.122.21:50010, bytes: 101, op: HDFS_WRITE, cliID: DFSClient_-827684356, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_6492009810912882763_1046, duration: 1146000
2010-11-15 13:04:59,829 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_6492009810912882763_1046 terminating
2010-11-15 13:04:59,836 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.212:48819, bytes: 24, op: HDFS_READ, cliID: DFSClient_-2033075547, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_4045193522256115073_1043, duration: 288000
2010-11-15 13:05:00,925 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.21:45356, bytes: 105, op: HDFS_READ, cliID: DFSClient_275878657, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_6492009810912882763_1046, duration: 231000
2010-11-15 13:05:00,954 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.21:45358, bytes: 33962, op: HDFS_READ, cliID: DFSClient_-1502261171, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-4637967787808166403_1044, duration: 265000
2010-11-15 13:05:01,074 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.21:45359, bytes: 1503685, op: HDFS_READ, cliID: DFSClient_-1502261171, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-5982330171978228296_1041, duration: 37232000
2010-11-15 13:05:02,679 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-5192109264753098555_1037 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5192109264753098555 for deletion
2010-11-15 13:05:02,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7952178289902603971_1029 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_7952178289902603971 for deletion
2010-11-15 13:05:02,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-5192109264753098555_1037 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5192109264753098555
2010-11-15 13:05:02,690 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7952178289902603971_1029 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_7952178289902603971
2010-11-15 13:05:05,379 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.21:45364, bytes: 131, op: HDFS_READ, cliID: DFSClient_attempt_201011151256_0002_m_000000_0, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-8949513092476944942_1042, duration: 235000
2010-11-15 13:05:05,410 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.21:45365, bytes: 128, op: HDFS_READ, cliID: DFSClient_attempt_201011151256_0002_m_000000_0, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_667327631922096856_1040, duration: 232000
2010-11-15 13:07:03,456 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-4563027756382672617_1047 src: /192.168.122.21:45370 dest: /192.168.122.21:50010
2010-11-15 13:07:03,460 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:45370, dest: /192.168.122.21:50010, bytes: 167, op: HDFS_WRITE, cliID: DFSClient_attempt_201011151256_0002_r_000000_0, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-4563027756382672617_1047, duration: 2622000
2010-11-15 13:07:03,460 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-4563027756382672617_1047 terminating
2010-11-15 13:07:10,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-63779571351333019_1049 src: /192.168.122.212:48840 dest: /192.168.122.21:50010
2010-11-15 13:07:10,532 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.212:48840, dest: /192.168.122.21:50010, bytes: 2113, op: HDFS_WRITE, cliID: DFSClient_-827684356, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-63779571351333019_1049, duration: 10257000
2010-11-15 13:07:10,533 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-63779571351333019_1049 terminating
2010-11-15 13:07:11,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.212:48841, bytes: 2133, op: HDFS_READ, cliID: DFSClient_-827684356, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-63779571351333019_1049, duration: 684000
2010-11-15 13:07:11,436 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.212:48842, bytes: 2133, op: HDFS_READ, cliID: DFSClient_-827684356, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-63779571351333019_1049, duration: 553000
2010-11-15 13:07:11,483 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.21:50010, dest: /192.168.122.242:42366, bytes: 171, op: HDFS_READ, cliID: DFSClient_2064418803, offset: 0, srvID: DS-2050198494-192.168.122.21-50010-1289741482460, blockid: blk_-4563027756382672617_1047, duration: 526000
2010-11-15 13:07:11,872 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8949513092476944942_1042 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-8949513092476944942 for deletion
2010-11-15 13:07:11,873 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-5982330171978228296_1041 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5982330171978228296 for deletion
2010-11-15 13:07:11,873 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-4637967787808166403_1044 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-4637967787808166403 for deletion
2010-11-15 13:07:11,873 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4045193522256115073_1043 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_4045193522256115073 for deletion
2010-11-15 13:07:11,874 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8949513092476944942_1042 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-8949513092476944942
2010-11-15 13:07:11,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-5982330171978228296_1041 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5982330171978228296
2010-11-15 13:07:11,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-4637967787808166403_1044 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-4637967787808166403
2010-11-15 13:07:11,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4045193522256115073_1043 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_4045193522256115073
2010-11-15 13:07:14,876 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-5341196869405167887_1045 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5341196869405167887 for deletion
2010-11-15 13:07:14,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_6492009810912882763_1046 file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_6492009810912882763 for deletion
2010-11-15 13:07:14,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-5341196869405167887_1045 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_-5341196869405167887
2010-11-15 13:07:14,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_6492009810912882763_1046 at file /home/hadoop/hadoop-dist2/filesystem/data/current/finalized/blk_6492009810912882763
2010-11-15 13:07:17,759 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6610282913947390908_1017