Posted to hdfs-commits@hadoop.apache.org by sh...@apache.org on 2009/09/21 01:02:19 UTC
svn commit: r817119 [1/3] - in /hadoop/hdfs/branches/HDFS-265: ./
.eclipse.templates/.launches/ lib/ src/contrib/block_forensics/
src/contrib/block_forensics/client/ src/contrib/block_forensics/ivy/
src/contrib/block_forensics/src/ src/contrib/block_fo...
Author: shv
Date: Sun Sep 20 23:02:16 2009
New Revision: 817119
URL: http://svn.apache.org/viewvc?rev=817119&view=rev
Log:
HDFS-636. Merge -r 815965:816988 from trunk to the append branch.
Added:
hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/
hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch (with props)
hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch (with props)
hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch (with props)
hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch (with props)
hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-0.22.0-dev.jar (with props)
hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-test-0.22.0-dev.jar (with props)
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.22.0-dev.jar (with props)
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.22.0-dev.jar (with props)
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.22.0-dev.jar (with props)
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/README
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml (with props)
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java (with props)
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml (with props)
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties (with props)
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/
hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java (with props)
hadoop/hdfs/branches/HDFS-265/src/test/hdfs/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java (with props)
Removed:
hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-0.21.0-dev.jar
hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-test-0.21.0-dev.jar
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.21.0-dev.jar
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.21.0-dev.jar
hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.21.0-dev.jar
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/capacity_scheduler.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/cluster_setup.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/commands_manual.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/distcp.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/fair_scheduler.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hadoop_archives.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_shell.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hod_admin_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hod_config_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hod_user_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/mapred_tutorial.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/native_libraries.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/quickstart.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/service_level_auth.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/streaming.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/vaidya.xml
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java
Modified:
hadoop/hdfs/branches/HDFS-265/.gitignore
hadoop/hdfs/branches/HDFS-265/CHANGES.txt
hadoop/hdfs/branches/HDFS-265/build.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/SLG_user_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/faultinject_framework.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_design.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/index.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/libhdfs.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/site.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/tabs.xml
hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/skinconf.xml
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DFSClient.java
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/HftpFileSystem.java
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicy.java
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyDefault.java
hadoop/hdfs/branches/HDFS-265/src/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
Added: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch (added)
+++ hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch Sun Sep 20 23:02:16 2009
@@ -0,0 +1,28 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
+<stringAttribute key="bad_container_name" value="/@PROJECT@/.l"/>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
+<listEntry value="/@PROJECT@"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
+<listEntry value="4"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.ui.favoriteGroups">
+<listEntry value="org.eclipse.debug.ui.launchGroup.debug"/>
+<listEntry value="org.eclipse.debug.ui.launchGroup.run"/>
+</listAttribute>
+<stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value="=@PROJECT@"/>
+<booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
+<stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
+<stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
+<listAttribute key="org.eclipse.jdt.launching.CLASSPATH">
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="@PROJECT@" path="1" type="4"/> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build/classes}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.defaultClasspath"> <memento exportedEntriesOnly="false" project="@PROJECT@"/> </runtimeClasspathEntry> "/>
+</listAttribute>
+<booleanAttribute key="org.eclipse.jdt.launching.DEFAULT_CLASSPATH" value="false"/>
+<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value=""/>
+<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="@PROJECT@"/>
+<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Xms256m -Xmx512m -Dtest.build.data=${workspace_loc:@PROJECT@}/build/test -Dtest.cache.data=${workspace_loc:@PROJECT@}/build/test/cache -Dtest.debug.data=${workspace_loc:@PROJECT@}/build/test/debug -Dhadoop.log.dir=${workspace_loc:@PROJECT@}/build/test/log -Dtest.src.dir=${workspace_loc:@PROJECT@}/build/test/src -Dtest.build.extraconf=${workspace_loc:@PROJECT@}/build/test/extraconf -Dhadoop.policy.file=hadoop-policy.xml"/>
+</launchConfiguration>
Propchange: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/AllTests.launch
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch (added)
+++ hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch Sun Sep 20 23:02:16 2009
@@ -0,0 +1,24 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<launchConfiguration type="org.eclipse.jdt.launching.localJavaApplication">
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
+<listEntry value="/@PROJECT@/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
+<listEntry value="1"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.ui.favoriteGroups">
+<listEntry value="org.eclipse.debug.ui.launchGroup.run"/>
+<listEntry value="org.eclipse.debug.ui.launchGroup.debug"/>
+</listAttribute>
+<listAttribute key="org.eclipse.jdt.launching.CLASSPATH">
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="@PROJECT@" path="1" type="4"/> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/conf}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build/classes}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.defaultClasspath"> <memento exportedEntriesOnly="false" project="@PROJECT@"/> </runtimeClasspathEntry> "/>
+</listAttribute>
+<booleanAttribute key="org.eclipse.jdt.launching.DEFAULT_CLASSPATH" value="false"/>
+<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="org.apache.hadoop.hdfs.server.datanode.DataNode"/>
+<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="@PROJECT@"/>
+<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Xmx1000m -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=${workspace_loc:@PROJECT@}"/>
+</launchConfiguration>
Propchange: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/DataNode.launch
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch (added)
+++ hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch Sun Sep 20 23:02:16 2009
@@ -0,0 +1,24 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<launchConfiguration type="org.eclipse.jdt.launching.localJavaApplication">
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
+<listEntry value="/@PROJECT@/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
+<listEntry value="1"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.ui.favoriteGroups">
+<listEntry value="org.eclipse.debug.ui.launchGroup.run"/>
+<listEntry value="org.eclipse.debug.ui.launchGroup.debug"/>
+</listAttribute>
+<listAttribute key="org.eclipse.jdt.launching.CLASSPATH">
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="@PROJECT@" path="1" type="4"/> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/conf}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build/classes}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.defaultClasspath"> <memento exportedEntriesOnly="false" project="@PROJECT@"/> </runtimeClasspathEntry> "/>
+</listAttribute>
+<booleanAttribute key="org.eclipse.jdt.launching.DEFAULT_CLASSPATH" value="false"/>
+<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="org.apache.hadoop.hdfs.server.namenode.NameNode"/>
+<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="@PROJECT@"/>
+<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Xmx1000m -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=${workspace_loc:@PROJECT@}"/>
+</launchConfiguration>
Propchange: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/NameNode.launch
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch (added)
+++ hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch Sun Sep 20 23:02:16 2009
@@ -0,0 +1,28 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
+<stringAttribute key="bad_container_name" value="/@PROJECT@/.l"/>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
+<listEntry value="/@PROJECT@"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
+<listEntry value="4"/>
+</listAttribute>
+<listAttribute key="org.eclipse.debug.ui.favoriteGroups">
+<listEntry value="org.eclipse.debug.ui.launchGroup.run"/>
+<listEntry value="org.eclipse.debug.ui.launchGroup.debug"/>
+</listAttribute>
+<stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value=""/>
+<booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
+<stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
+<stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
+<listAttribute key="org.eclipse.jdt.launching.CLASSPATH">
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="@PROJECT@" path="1" type="4"/> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.variableClasspathEntry"> <memento path="3" variableString="${workspace_loc:@PROJECT@/build/classes}"/> </runtimeClasspathEntry> "/>
+<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry id="org.eclipse.jdt.launching.classpathentry.defaultClasspath"> <memento exportedEntriesOnly="false" project="@PROJECT@"/> </runtimeClasspathEntry> "/>
+</listAttribute>
+<booleanAttribute key="org.eclipse.jdt.launching.DEFAULT_CLASSPATH" value="false"/>
+<stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="org.apache.hadoop.TestNameHere"/>
+<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="@PROJECT@"/>
+<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Xms256m -Xmx512m -Dtest.build.data=${workspace_loc:@PROJECT@}/build/test -Dtest.cache.data=${workspace_loc:@PROJECT@}/build/test/cache -Dtest.debug.data=${workspace_loc:@PROJECT@}/build/test/debug -Dhadoop.log.dir=${workspace_loc:@PROJECT@}/build/test/log -Dtest.src.dir=${workspace_loc:@PROJECT@}/build/test/src -Dtest.build.extraconf=${workspace_loc:@PROJECT@}/build/test/extraconf -Dhadoop.policy.file=hadoop-policy.xml"/>
+</launchConfiguration>
Propchange: hadoop/hdfs/branches/HDFS-265/.eclipse.templates/.launches/SpecificTestTemplate.launch
------------------------------------------------------------------------------
svn:mime-type = text/plain
Modified: hadoop/hdfs/branches/HDFS-265/.gitignore
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/.gitignore?rev=817119&r1=817118&r2=817119&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/.gitignore (original)
+++ hadoop/hdfs/branches/HDFS-265/.gitignore Sun Sep 20 23:02:16 2009
@@ -16,6 +16,7 @@
*~
.classpath
.project
+.launches/
.settings
.svn
build/
Modified: hadoop/hdfs/branches/HDFS-265/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/CHANGES.txt?rev=817119&r1=817118&r2=817119&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/CHANGES.txt (original)
+++ hadoop/hdfs/branches/HDFS-265/CHANGES.txt Sun Sep 20 23:02:16 2009
@@ -61,6 +61,22 @@
INCOMPATIBLE CHANGES
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HDFS-629. Remove ReplicationTargetChooser.java along with fixing
+ import warnings generated by Eclipse. (dhruba)
+
+
+Release 0.21.0 - Unreleased
+
+ INCOMPATIBLE CHANGES
+
HDFS-538. Per the contract elucidated in HADOOP-6201, throw
FileNotFoundException from FileSystem::listStatus rather than returning
null. (Jakob Homan via cdouglas)
@@ -97,6 +113,15 @@
HDFS-385. Add support for an experimental API that allows a module external
to HDFS to specify how HDFS blocks should be placed. (dhruba)
+ HADOOP-4952. Update hadoop-core and test jars to propagate new FileContext
+ file system application interface. (Sanjay Radia via suresh).
+
+ HDFS-567. Add block forensics contrib tool to print history of corrupt and
+ missing blocks from the HDFS logs.
+ (Bill Zeller, Jithendra Pandey via suresh).
+
+ HDFS-610. Support o.a.h.fs.FileContext. (Sanjay Radia via szetszwo)
+
IMPROVEMENTS
HDFS-381. Remove blocks from DataNode maps when corresponding file
@@ -216,6 +241,11 @@
HDFS-618. Support non-recursive mkdir(). (Kan Zhang via szetszwo)
+ HDFS-574. Split the documentation between the subprojects.
+ (Corinne Chandel via omalley)
+
+ HDFS-598. Eclipse launch task for HDFS. (Eli Collins via tomwhite)
+
BUG FIXES
HDFS-76. Better error message to users when commands fail because of
@@ -306,7 +336,7 @@
HDFS-622. checkMinReplication should count live nodes only. (shv)
-Release 0.20.1 - Unreleased
+Release 0.20.1 - 2009-09-01
IMPROVEMENTS
Modified: hadoop/hdfs/branches/HDFS-265/build.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/build.xml?rev=817119&r1=817118&r2=817119&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/build.xml (original)
+++ hadoop/hdfs/branches/HDFS-265/build.xml Sun Sep 20 23:02:16 2009
@@ -27,9 +27,9 @@
<property name="Name" value="Hadoop-Hdfs"/>
<property name="name" value="hadoop-hdfs"/>
- <property name="version" value="0.21.0-dev"/>
- <property name="hadoop-core.version" value="0.21.0-dev"/>
- <property name="hadoop-mr.version" value="0.21.0-dev"/>
+ <property name="version" value="0.22.0-dev"/>
+ <property name="hadoop-core.version" value="0.22.0-dev"/>
+ <property name="hadoop-mr.version" value="0.22.0-dev"/>
<property name="final.name" value="${name}-${version}"/>
<property name="test.hdfs.final.name" value="${name}-test-${version}"/>
<property name="test.hdfswithmr.final.name" value="${name}-hdfswithmr-test-${version}"/>
@@ -43,7 +43,6 @@
<property name="conf.dir" value="${basedir}/conf"/>
<property name="contrib.dir" value="${basedir}/src/contrib"/>
<property name="docs.src" value="${basedir}/src/docs"/>
- <property name="src.docs.cn" value="${basedir}/src/docs/cn"/>
<property name="changes.src" value="${docs.src}/changes"/>
<property name="build.dir" value="${basedir}/build"/>
@@ -63,7 +62,6 @@
value="${sun.arch.data.model}"/>
<property name="build.docs" value="${build.dir}/docs"/>
- <property name="build.docs.cn" value="${build.dir}/docs/cn"/>
<property name="build.javadoc" value="${build.docs}/api"/>
<property name="build.javadoc.timestamp" value="${build.javadoc}/index.html" />
<property name="build.javadoc.dev" value="${build.docs}/dev-api"/>
@@ -821,22 +819,6 @@
<style basedir="${hdfs.src.dir}" destdir="${build.docs}"
includes="hdfs-default.xml" style="conf/configuration.xsl"/>
<antcall target="changes-to-html"/>
- <antcall target="cn-docs"/>
- </target>
-
- <target name="cn-docs" depends="forrest.check, init"
- description="Generate forrest-based Chinese documentation. To use, specify -Dforrest.home=<base of Apache Forrest installation> on the command line."
- if="forrest.home">
- <exec dir="${src.docs.cn}" executable="${forrest.home}/bin/forrest" failonerror="true">
- <env key="LANG" value="en_US.utf8"/>
- <env key="JAVA_HOME" value="${java5.home}"/>
- </exec>
- <copy todir="${build.docs.cn}">
- <fileset dir="${src.docs.cn}/build/site/" />
- </copy>
- <style basedir="${hdfs.src.dir}" destdir="${build.docs.cn}"
- includes="hdfs-default.xml" style="conf/configuration.xsl"/>
- <antcall target="changes-to-html"/>
</target>
<target name="forrest.check" unless="forrest.home" depends="java5.check">
@@ -1150,7 +1132,6 @@
<delete dir="${build.dir}"/>
<delete dir="${build-fi.dir}"/>
<delete dir="${docs.src}/build"/>
- <delete dir="${src.docs.cn}/build"/>
</target>
<!-- ================================================================== -->
Added: hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-0.22.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-0.22.0-dev.jar?rev=817119&view=auto
==============================================================================
Binary file - no diff available.
Propchange: hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-0.22.0-dev.jar
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Added: hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-test-0.22.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-test-0.22.0-dev.jar?rev=817119&view=auto
==============================================================================
Binary file - no diff available.
Propchange: hadoop/hdfs/branches/HDFS-265/lib/hadoop-core-test-0.22.0-dev.jar
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Added: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.22.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.22.0-dev.jar?rev=817119&view=auto
==============================================================================
Binary file - no diff available.
Propchange: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-0.22.0-dev.jar
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Added: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.22.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.22.0-dev.jar?rev=817119&view=auto
==============================================================================
Binary file - no diff available.
Propchange: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-examples-0.22.0-dev.jar
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Added: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.22.0-dev.jar
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.22.0-dev.jar?rev=817119&view=auto
==============================================================================
Binary file - no diff available.
Propchange: hadoop/hdfs/branches/HDFS-265/lib/hadoop-mapred-test-0.22.0-dev.jar
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/README
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/README?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/README (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/README Sun Sep 20 23:02:16 2009
@@ -0,0 +1,25 @@
+This contribution consists of two components designed to make it easier to find information about lost or corrupt blocks.
+
+The first is a MapReduce job designed to search for one or more block ids in a set of log files. It exists in org.apache.hadoop.block_forensics.BlockSearch. Building this contribution generates a jar file that can be executed using:
+
+ bin/hadoop jar [jar location] [hdfs input path] [hdfs output dir] [comma delimited list of block ids]
+
+ For example, the command:
+ bin/hadoop jar /foo/bar/hadoop-0.1-block_forensics.jar /input/* /output 2343,45245,75823
+ ... searches for any of blocks 2343, 45245, or 75823 in any of the files
+ contained in the /input/ directory.
+
+
+ The output will be any line containing one of the provided block ids. While this tool is designed to be used with block ids, it can also be used for general text searching.
+
+The second component is a standalone Java program that repeatedly queries the namenode at a given interval, looking for corrupt replicas. If it finds any, it launches the above MapReduce job. The syntax is:
+
+ java BlockForensics http://[namenode]:[port]/corrupt_replicas_xml.jsp [sleep time between namenode query for corrupt blocks (in milliseconds)] [mapred jar location] [hdfs input path]
+
+ For example, the command:
+ java BlockForensics http://localhost:50070/corrupt_replicas_xml.jsp 30000
+ /foo/bar/hadoop-0.1-block_forensics.jar /input/*
+ ... queries the namenode at localhost:50070 for corrupt replicas every 30
+ seconds and runs /foo/bar/hadoop-0.1-block_forensics.jar if any are found.
+
+The map reduce job jar and the BlockForensics class can be found in your build/contrib/block_forensics and build/contrib/block_forensics/classes directories, respectively.
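The search the README describes is, at its core, a line filter: a map task emits any log line that mentions one of the target block ids. As a rough illustration only — this is not the committed BlockSearch code, and the class and method names below are invented — the matching logic might be sketched as:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the line-matching logic behind a block search.
// The real org.apache.hadoop.block_forensics.BlockSearch wraps logic like
// this in a MapReduce job; here it is reduced to a plain static method.
public class BlockIdMatcher {

    /** Returns true if the line contains any of the given block ids as a numeric token. */
    public static boolean matchesAny(String line, Set<String> blockIds) {
        // Split on runs of non-digit characters, so a token like
        // "blk_2343_1001" yields the candidate ids "2343" and "1001".
        for (String token : line.split("[^0-9]+")) {
            if (blockIds.contains(token)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> ids = new HashSet<>(Arrays.asList("2343", "45245", "75823"));
        System.out.println(matchesAny("INFO: received block blk_2343 of size 67108864", ids)); // prints: true
        System.out.println(matchesAny("INFO: received block blk_9999 of size 67108864", ids)); // prints: false
    }
}
```

Because the filter only looks at tokens in each line, the same machinery doubles as a general text search, as the README notes.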
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml Sun Sep 20 23:02:16 2009
@@ -0,0 +1,66 @@
+<?xml version="1.0"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--
+Before you can run these subtargets directly, you need
+to call at top-level: ant deploy-contrib compile-core-test
+-->
+<project name="block_forensics" default="jar">
+ <property name="version" value="0.1"/>
+ <import file="../build-contrib.xml"/>
+
+ <!-- create the list of files to add to the classpath -->
+ <fileset dir="${hadoop.root}/lib" id="class.path">
+ <include name="**/*.jar" />
+ <exclude name="**/excluded/" />
+ </fileset>
+
+ <!-- Override jar target to specify main class -->
+ <target name="jar" depends="compile">
+ <jar
+ jarfile="${build.dir}/hadoop-${version}-${name}.jar"
+ basedir="${build.classes}"
+ >
+ <manifest>
+ <attribute name="Main-Class" value="org.apache.hadoop.blockforensics.BlockSearch"/>
+ </manifest>
+ </jar>
+
+ <javac srcdir="client" destdir="${build.classes}"/>
+
+ </target>
+
+ <!-- Run only pure-Java unit tests. superdottest -->
+ <target name="test">
+ <antcall target="hadoopbuildcontrib.test">
+ </antcall>
+ </target>
+
+ <!-- Run all unit tests
+ This is not called as part of the nightly build
+ because it will only run on platforms that have standard
+ Unix utilities available.
+ -->
+ <target name="test-unix">
+ <antcall target="hadoopbuildcontrib.test">
+ </antcall>
+ </target>
+
+
+</project>
Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/build.xml
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java Sun Sep 20 23:02:16 2009
@@ -0,0 +1,186 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.lang.Runtime;
+import java.net.URL;
+import java.net.URLConnection;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Random;
+import java.util.Set;
+import java.util.StringTokenizer;
+import java.util.TreeSet;
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+import javax.xml.parsers.ParserConfigurationException;
+import org.w3c.dom.Document;
+import org.w3c.dom.NodeList;
+import org.xml.sax.SAXException;
+
+/**
+ * This class repeatedly queries a namenode looking for corrupt replicas. If
+ * any are found, a provided Hadoop job is launched and its output printed
+ * to stdout.
+ *
+ * The syntax is:
+ *
+ * java BlockForensics http://[namenode]:[port]/corrupt_replicas_xml.jsp
+ * [sleep time between namenode queries for corrupt blocks
+ * (in milliseconds)] [mapred jar location] [hdfs input path]
+ *
+ * All arguments are required.
+ */
+public class BlockForensics {
+
+ public static String join(List<?> l, String sep) {
+ StringBuilder sb = new StringBuilder();
+ Iterator<?> it = l.iterator();
+
+ while(it.hasNext()){
+ sb.append(it.next());
+ if (it.hasNext()) {
+ sb.append(sep);
+ }
+ }
+
+ return sb.toString();
+ }
+
+
+ // runs hadoop command and prints output to stdout
+ public static void runHadoopCmd(String ... args)
+ throws IOException {
+ String hadoop_home = System.getenv("HADOOP_HOME");
+
+ List<String> l = new LinkedList<String>();
+ l.add("bin/hadoop");
+ l.addAll(Arrays.asList(args));
+
+ ProcessBuilder pb = new ProcessBuilder(l);
+
+ if (hadoop_home != null) {
+ pb.directory(new File(hadoop_home));
+ }
+
+ pb.redirectErrorStream(true);
+
+ Process p = pb.start();
+
+ BufferedReader br = new BufferedReader(
+ new InputStreamReader(p.getInputStream()));
+ String line;
+
+ while ((line = br.readLine()) != null) {
+ System.out.println(line);
+ }
+
+
+ }
+
+ public static void main(String[] args)
+ throws SAXException, ParserConfigurationException,
+ InterruptedException, IOException {
+
+ if (System.getenv("HADOOP_HOME") == null) {
+ System.err.println("The environment variable HADOOP_HOME is undefined");
+ System.exit(1);
+ }
+
+
+ if (args.length < 4) {
+ System.out.println("Usage: java BlockForensics [http://namenode:port/"
+ + "corrupt_replicas_xml.jsp] [sleep time between "
+ + "requests (in milliseconds)] [mapred jar location] "
+ + "[hdfs input path]");
+ return;
+ }
+
+ int sleepTime = 30000;
+
+ try {
+ sleepTime = Integer.parseInt(args[1]);
+ } catch (NumberFormatException e) {
+ System.out.println("The sleep time entered is invalid, "
+ + "using default value: "+sleepTime+"ms");
+ }
+
+ Set<Long> blockIds = new TreeSet<Long>();
+
+ while (true) {
+ InputStream xml = new URL(args[0]).openConnection().getInputStream();
+
+ DocumentBuilderFactory fact = DocumentBuilderFactory.newInstance();
+ DocumentBuilder builder = fact.newDocumentBuilder();
+ Document doc = builder.parse(xml);
+
+ NodeList corruptReplicaNodes = doc.getElementsByTagName("block_id");
+
+ List<Long> searchBlockIds = new LinkedList<Long>();
+ for(int i=0; i<corruptReplicaNodes.getLength(); i++) {
+ Long blockId = Long.valueOf(corruptReplicaNodes.item(i)
+ .getFirstChild()
+ .getNodeValue());
+ if (!blockIds.contains(blockId)) {
+ blockIds.add(blockId);
+ searchBlockIds.add(blockId);
+ }
+ }
+
+ if (searchBlockIds.size() > 0) {
+ String blockIdsStr = BlockForensics.join(searchBlockIds, ",");
+ System.out.println("\nSearching for: " + blockIdsStr);
+ String tmpDir = "/tmp-block-forensics-" +
+ Integer.toString(new Random().nextInt(Integer.MAX_VALUE));
+
+ System.out.println("Using temporary dir: "+tmpDir);
+
+ // delete tmp dir
+ BlockForensics.runHadoopCmd("fs", "-rmr", tmpDir);
+
+ // launch mapred job
+ BlockForensics.runHadoopCmd("jar",
+ args[2], // jar location
+ args[3], // input dir
+ tmpDir, // output dir
+ blockIdsStr// comma delimited list of blocks
+ );
+ // cat output
+ BlockForensics.runHadoopCmd("fs", "-cat", tmpDir+"/part*");
+
+ // delete temp dir
+ BlockForensics.runHadoopCmd("fs", "-rmr", tmpDir);
+
+ int sleepSecs = (int)(sleepTime/1000.);
+ System.out.print("Sleeping for "+sleepSecs
+ + " second"+(sleepSecs == 1?"":"s")+".");
+ }
+
+ System.out.print(".");
+ Thread.sleep(sleepTime);
+
+ }
+ }
+}
Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/client/BlockForensics.java
------------------------------------------------------------------------------
svn:mime-type = text/plain
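An aside on the client above: the polling loop boils down to fetching the servlet's XML and pulling out `block_id` elements. That parsing step can be exercised standalone. This is a sketch, not the committed code; the sample document's root element name and the class/method names here are invented for illustration, and only the `block_id` tag name comes from the code above.

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class CorruptReplicaParseSketch {

    // Pull every <block_id> value out of an XML document, the same way
    // BlockForensics walks getElementsByTagName("block_id").
    static List<Long> parseBlockIds(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc =
            builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList nodes = doc.getElementsByTagName("block_id");
        List<Long> ids = new ArrayList<Long>();
        for (int i = 0; i < nodes.getLength(); i++) {
            ids.add(Long.valueOf(nodes.item(i).getFirstChild().getNodeValue()));
        }
        return ids;
    }

    public static void main(String[] args) throws Exception {
        // The root element name below is made up; only <block_id> is real.
        String sample = "<corrupt_replicas>"
                      + "<block_id>42</block_id><block_id>7</block_id>"
                      + "</corrupt_replicas>";
        System.out.println(parseBlockIds(sample)); // prints [42, 7]
    }
}
```

The real client additionally deduplicates ids across polls with a `TreeSet<Long>`, so only newly reported blocks trigger a job.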
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml Sun Sep 20 23:02:16 2009
@@ -0,0 +1,44 @@
+<?xml version="1.0" ?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<ivy-module version="1.0">
+ <info organisation="org.apache.hadoop" module="${ant.project.name}">
+ <license name="Apache 2.0"/>
+ <ivyauthor name="Apache Hadoop Team" url="http://hadoop.apache.org"/>
+ <description>
+ Apache Hadoop
+ </description>
+ </info>
+ <configurations defaultconfmapping="default">
+ <!--these match the Maven configurations-->
+ <conf name="default" extends="master,runtime"/>
+ <conf name="master" description="contains the artifact but no dependencies"/>
+ <conf name="runtime" description="runtime but not the artifact" />
+
+ <conf name="common" visibility="private"
+ extends="runtime"
+ description="artifacts needed to compile/test the application"/>
+ <conf name="test" visibility="private" extends="runtime"/>
+ </configurations>
+
+ <publications>
+ <!--get the artifact from our module name-->
+ <artifact conf="master"/>
+ </publications>
+</ivy-module>
Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy.xml
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties Sun Sep 20 23:02:16 2009
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#This properties file lists the versions of the various artifacts used by thrifts.
+#It drives ivy and the generation of a maven POM
+
+#Please list the dependencies name with version if they are different from the ones
+#listed in the global libraries.properties file (in alphabetical order)
Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/ivy/libraries.properties
------------------------------------------------------------------------------
svn:mime-type = text/plain
Added: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java?rev=817119&view=auto
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java (added)
+++ hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java Sun Sep 20 23:02:16 2009
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.blockforensics;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.StringTokenizer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+
+/**
+ * BlockSearch is a mapred job that's designed to search input for appearances
+ * of strings.
+ *
+ * The syntax is:
+ *
+ * bin/hadoop jar [jar location] [hdfs input path] [hdfs output dir]
+ * [comma delimited list of block ids]
+ *
+ * All arguments are required.
+ *
+ * This tool is designed to search for one or more block ids in log
+ * files, but it can be used for general text search, assuming the search
+ * strings don't contain tokens. It assumes only one search string will
+ * appear per line.
+ */
+public class BlockSearch extends Configured implements Tool {
+ public static class Map extends Mapper<LongWritable, Text, Text, Text> {
+ private Text blockIdText = new Text();
+ private Text valText = new Text();
+ private List<String> blockIds = null;
+
+ protected void setup(Context context)
+ throws IOException, InterruptedException {
+ Configuration conf = context.getConfiguration();
+ StringTokenizer st = new StringTokenizer(conf.get("blockIds"), ",");
+ blockIds = new LinkedList<String>();
+ while (st.hasMoreTokens()) {
+ String blockId = st.nextToken();
+ blockIds.add(blockId);
+ }
+ }
+
+
+ public void map(LongWritable key, Text value, Context context)
+ throws IOException, InterruptedException {
+ if (blockIds == null) {
+ System.err.println("Error: No block ids specified");
+ } else {
+ String valStr = value.toString();
+
+ for(String blockId: blockIds) {
+ if (valStr.indexOf(blockId) != -1) {
+ blockIdText.set(blockId);
+ valText.set(valStr);
+ context.write(blockIdText, valText);
+ break; // assume only one block id appears per line
+ }
+ }
+ }
+
+ }
+
+ }
+
+
+ public static class Reduce extends Reducer<Text, Text, Text, Text> {
+ // Iterable (not Iterator) is required here so that this method actually
+ // overrides Reducer#reduce rather than being silently ignored.
+ public void reduce(Text key, Iterable<Text> values, Context context)
+ throws IOException, InterruptedException {
+ for (Text value : values) {
+ context.write(key, value);
+ }
+ }
+ }
+
+ public int run(String[] args) throws Exception {
+ if (args.length < 3) {
+ System.out.println("BlockSearch <inLogs> <outDir> <comma delimited list of blocks>");
+ ToolRunner.printGenericCommandUsage(System.out);
+ return 2;
+ }
+
+ Configuration conf = getConf();
+ conf.set("blockIds", args[2]);
+
+ Job job = new Job(conf);
+
+ job.setCombinerClass(Reduce.class);
+ job.setJarByClass(BlockSearch.class);
+ job.setJobName("BlockSearch");
+ job.setMapperClass(Map.class);
+ job.setOutputKeyClass(Text.class);
+ job.setOutputValueClass(Text.class);
+ job.setReducerClass(Reduce.class);
+
+ FileInputFormat.setInputPaths(job, new Path(args[0]));
+ FileOutputFormat.setOutputPath(job, new Path(args[1]));
+
+ return job.waitForCompletion(true) ? 0 : 1;
+ }
+
+ public static void main(String[] args) throws Exception {
+ int res = ToolRunner.run(new Configuration(), new BlockSearch(), args);
+ System.exit(res);
+ }
+}
Propchange: hadoop/hdfs/branches/HDFS-265/src/contrib/block_forensics/src/java/org/apache/hadoop/block_forensics/BlockSearch.java
------------------------------------------------------------------------------
svn:mime-type = text/plain
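The mapper's matching rule above (first block id found in a line wins, one id per line) can be tried without a cluster or the Hadoop types. A minimal sketch; the class and method names below are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class BlockMatchSketch {

    // Mirrors the matching in BlockSearch.Map: return {blockId, line} for the
    // first id that appears in the line, or null when no id matches.
    static String[] matchLine(String line, List<String> blockIds) {
        for (String id : blockIds) {
            if (line.indexOf(id) != -1) {
                return new String[] { id, line }; // one id per line is assumed
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("blk_123", "blk_456");
        String[] hit = matchLine("datanode received blk_456 of size 67108864", ids);
        System.out.println(hit[0]); // prints blk_456
    }
}
```

As in the real job, a substring test (`indexOf`) is enough because block ids are unlikely to appear as fragments of other tokens in datanode logs.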
Modified: hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/SLG_user_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/SLG_user_guide.xml?rev=817119&r1=817118&r2=817119&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/SLG_user_guide.xml (original)
+++ hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/SLG_user_guide.xml Sun Sep 20 23:02:16 2009
@@ -18,12 +18,12 @@
<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
<document>
<header>
- <title> HDFS Synthetic Load Generator Guide </title>
+ <title>Synthetic Load Generator Guide </title>
</header>
<body>
- <section>
- <title> Description </title>
- <p>
+ <section>
+ <title>Overview</title>
+ <p>
The synthetic load generator (SLG) is a tool for testing NameNode behavior
under different client loads. The user can generate different mixes
of read, write, and list requests by specifying the probabilities of
@@ -33,91 +33,121 @@
monitor the running of the NameNode. When a load generator exits, it
prints some NameNode statistics like the average execution time of each
kind of operation and the NameNode throughput.
- </p>
- </section>
- <section>
- <title> Synopsis </title>
- <p>
- <code>java LoadGenerator [options]</code><br/>
- </p>
- <p>
- Options include:<br/>
- <code> -readProbability <read probability></code><br/>
- <code> the probability of the read operation;
- default is 0.3333. </code><br/>
- <code> -writeProbability <write probability></code><br/>
- <code> the probability of the write
- operations; default is 0.3333.</code><br/>
- <code> -root <test space root></code><br/>
- <code> the root of the test space;
- default is /testLoadSpace.</code><br/>
- <code> -maxDelayBetweenOps
- <maxDelayBetweenOpsInMillis></code><br/>
- <code> the maximum delay between two consecutive
- operations in a thread; default is 0 indicating no delay.
- </code><br/>
- <code> -numOfThreads <numOfThreads></code><br/>
- <code> the number of threads to spawn;
- default is 200.</code><br/>
- <code> -elapsedTime <elapsedTimeInSecs></code><br/>
- <code> the number of seconds that the program
- will run; A value of zero indicates that the program runs
- forever. The default value is 0.</code><br/>
- <code> -startTime <startTimeInMillis></code><br/>
- <code> the time that all worker threads
+ </p>
+ </section>
+
+ <section>
+ <title> Synopsis </title>
+ <p>
+ The synopsis of the command is:
+ </p>
+ <source>java LoadGenerator [options]</source>
+ <p> Options include:</p>
+
+ <ul>
+ <li>
+ <code>-readProbability <read probability></code><br/>
+ The probability of the read operation; default is 0.3333.
+ </li>
+
+ <li>
+ <code>-writeProbability <write probability></code><br/>
+ The probability of the write operations; default is 0.3333.
+ </li>
+
+ <li>
+ <code>-root <test space root></code><br/>
+ The root of the test space; default is /testLoadSpace.
+ </li>
+
+ <li>
+ <code>-maxDelayBetweenOps <maxDelayBetweenOpsInMillis></code><br/>
+ The maximum delay between two consecutive operations in a thread; default is 0 indicating no delay.
+ </li>
+
+ <li>
+ <code>-numOfThreads <numOfThreads></code><br/>
+ The number of threads to spawn; default is 200.
+ </li>
+
+ <li>
+ <code>-elapsedTime <elapsedTimeInSecs></code><br/>
+ The number of seconds that the program
+ will run; a value of zero indicates that the program runs
+ forever. The default value is 0.
+ </li>
+
+ <li>
+ <code>-startTime <startTimeInMillis></code><br/>
+ The time that all worker threads
start to run. By default it is 10 seconds after the main
program starts running. This creates a barrier if more than
one load generator is running.
- </code><br/>
- <code> -seed <seed></code><br/>
- <code> the random generator seed for repeating
+ </li>
+
+ <li>
+ <code>-seed <seed></code><br/>
+ The random generator seed for repeating
requests to NameNode when running with a single thread;
- default is the current time.</code><br/>
- </p>
- <p>
+ default is the current time.
+ </li>
+
+ </ul>
+
+ <p>
After command line argument parsing, the load generator traverses
the test space and builds a table of all directories and another table
of all files in the test space. It then waits until the start time to
- spawn the number of worker threads as specified by the user. Each
- thread sends a stream of requests to NameNode. At each iteration,
+ spawn the number of worker threads as specified by the user.
+
+ Each thread sends a stream of requests to NameNode. At each iteration,
it first decides if it is going to read a file, create a file, or
list a directory following the read and write probabilities specified
by the user. The listing probability is equal to
<em>1-read probability-write probability</em>. When reading,
it randomly picks a file in the test space and reads the entire file.
When writing, it randomly picks a directory in the test space and
- creates a file there. To avoid two threads with the same load
- generator or from two different load generators create the same
+ creates a file there.
+ </p>
+ <p>
+ To avoid two threads with the same load
+ generator or from two different load generators creating the same
file, the file name consists of the current machine's host name
and the thread id. The length of the file follows Gaussian
distribution with an average size of 2 blocks and the standard
- deviation of 1. The new file is filled with byte 'a'. To avoid
- the test space to grow indefinitely, the file is deleted immediately
- after the file creation completes. While listing, it randomly
- picks a directory in the test space and lists its content.
+ deviation of 1. The new file is filled with byte 'a'. To avoid the test
+ space growing indefinitely, the file is deleted immediately
+ after the file creation completes. While listing, it randomly picks
+ a directory in the test space and lists its content.
+ </p>
+ <p>
After an operation completes, the thread pauses for a random
amount of time in the range of [0, maxDelayBetweenOps] if the
specified maximum delay is not zero. All threads are stopped when
the specified elapsed time is passed. Before exiting, the program
prints the average execution time for each kind of NameNode operation,
and the number of requests served by the NameNode per second.
- </p>
- </section>
- <section>
- <title> Test Space Population </title>
- <p>
- The user needs to populate a test space before she runs a
+ </p>
+
+ </section>
+
+ <section>
+ <title> Test Space Population </title>
+ <p>
+ The user needs to populate a test space before running a
load generator. The structure generator generates a random
test space structure and the data generator creates the files
and directories of the test space in Hadoop distributed file system.
- </p>
- <section>
- <title> Structure Generator </title>
- <p>
+ </p>
+
+ <section>
+ <title> Structure Generator </title>
+ <p>
This tool generates a random namespace structure with the
following constraints:
- </p>
- <ol>
+ </p>
+
+ <ol>
<li>The number of subdirectories that a directory can have is
a random number in [minWidth, maxWidth].</li>
<li>The maximum depth of each subdirectory is a random number
@@ -125,69 +155,83 @@
<li>Files are randomly placed in leaf directories. The size of
each file follows Gaussian distribution with an average size
of 1 block and a standard deviation of 1.</li>
- </ol>
- <p>
+ </ol>
+ <p>
The generated namespace structure is described by two files in
the output directory. Each line of the first file contains the
full name of a leaf directory. Each line of the second file
contains the full name of a file and its size, separated by a blank.
- </p>
- <p>
- The synopsis of the command is
- </p>
- <p>
- <code>java StructureGenerator [options]</code>
- </p>
- <p>
- Options include:<br/>
- <code> -maxDepth <maxDepth></code><br/>
- <code> maximum depth of the directory tree;
- default is 5.</code><br/>
- <code> -minWidth <minWidth></code><br/>
- <code> minimum number of subdirectories per
- directories; default is 1.</code><br/>
- <code> -maxWidth <maxWidth></code><br/>
- <code> maximum number of subdirectories per
- directories; default is 5.</code><br/>
- <code> -numOfFiles <#OfFiles></code><br/>
- <code> the total number of files in the test
- space; default is 10.</code><br/>
- <code> -avgFileSize <avgFileSizeInBlocks></code><br/>
- <code> average size of blocks; default is 1.
- </code><br/>
- <code> -outDir <outDir></code><br/>
- <code> output directory; default is the
- current directory. </code><br/>
- <code> -seed <seed></code><br/>
- <code> random number generator seed;
- default is the current time.</code><br/>
- </p>
- </section>
- <section>
- <title> Test Space Generator </title>
- <p>
+ </p>
+ <p>
+ The synopsis of the command is:
+ </p>
+ <source>java StructureGenerator [options]</source>
+
+ <p>Options include:</p>
+ <ul>
+ <li>
+ <code>-maxDepth <maxDepth></code><br/>
+ Maximum depth of the directory tree; default is 5.
+ </li>
+
+ <li>
+ <code>-minWidth <minWidth></code><br/>
+ Minimum number of subdirectories per directory; default is 1.
+ </li>
+
+ <li>
+ <code>-maxWidth <maxWidth></code><br/>
+ Maximum number of subdirectories per directory; default is 5.
+ </li>
+
+ <li>
+ <code>-numOfFiles <#OfFiles></code><br/>
+ The total number of files in the test space; default is 10.
+ </li>
+
+ <li>
+ <code>-avgFileSize <avgFileSizeInBlocks></code><br/>
+ Average size of blocks; default is 1.
+ </li>
+
+ <li>
+ <code>-outDir <outDir></code><br/>
+ Output directory; default is the current directory.
+ </li>
+
+ <li>
+ <code>-seed <seed></code><br/>
+ Random number generator seed; default is the current time.
+ </li>
+ </ul>
+ </section>
+
+ <section>
+ <title>Data Generator </title>
+ <p>
This tool reads the directory structure and file structure from
the input directory and creates the namespace in Hadoop distributed
file system. All files are filled with byte 'a'.
- </p>
- <p>
- The synopsis of the command is
- </p>
- <p>
- <code>java DataGenerator [options]</code>
- </p>
- <p>
- Options include:<br/>
- <code> -inDir <inDir></code><br/>
- <code> input directory name where directory/file
- structures are stored; default is the current directory.
- </code><br/>
- <code> -root <test space root></code><br/>
- <code> the name of the root directory which the
- new namespace is going to be placed under;
- default is "/testLoadSpace".</code><br/>
- </p>
- </section>
- </section>
+ </p>
+ <p>
+ The synopsis of the command is:
+ </p>
+ <source>java DataGenerator [options]</source>
+ <p>Options include:</p>
+ <ul>
+ <li>
+ <code>-inDir <inDir></code><br/>
+ Input directory name where directory/file
+ structures are stored; default is the current directory.
+ </li>
+ <li>
+ <code>-root <test space root></code><br/>
+ The name of the root directory which the
+ new namespace is going to be placed under;
+ default is "/testLoadSpace".
+ </li>
+ </ul>
+ </section>
+ </section>
</body>
</document>
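The guide above says new file lengths follow a Gaussian distribution (mean 2 blocks, standard deviation 1). That sampling step can be sketched as below; clamping negative draws to zero is an assumption here, since the document does not say how such samples are handled, and the class name is invented:

```java
import java.util.Random;

public class FileLengthSketch {

    // One file length in blocks, drawn from N(mean, sd) and rounded.
    // Clamping at zero is an assumption (a negative length is meaningless).
    static long sampleBlocks(Random rng, double mean, double sd) {
        return Math.max(0L, Math.round(rng.nextGaussian() * sd + mean));
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed, in the spirit of -seed
        long total = 0;
        int n = 100000;
        for (int i = 0; i < n; i++) {
            total += sampleBlocks(rng, 2.0, 1.0);
        }
        // The empirical mean lands close to 2 blocks.
        System.out.println("empirical mean in blocks: " + (double) total / n);
    }
}
```

A fixed seed makes runs repeatable, which matches why both generators expose a `-seed` option.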
Modified: hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/faultinject_framework.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/faultinject_framework.xml?rev=817119&r1=817118&r2=817119&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/faultinject_framework.xml (original)
+++ hadoop/hdfs/branches/HDFS-265/src/docs/src/documentation/content/xdocs/faultinject_framework.xml Sun Sep 20 23:02:16 2009
@@ -21,41 +21,40 @@
<document>
<header>
- <title>Fault injection Framework and Development Guide</title>
+ <title>Fault Injection Framework and Development Guide</title>
</header>
<body>
<section>
<title>Introduction</title>
- <p>The following is a brief help for Hadoops' Fault Injection (FI)
- Framework and Developer's Guide for those who will be developing
- their own faults (aspects).
+ <p>This guide provides an overview of the Hadoop Fault Injection (FI) framework for those
+ who will be developing their own faults (aspects).
</p>
- <p>An idea of Fault Injection (FI) is fairly simple: it is an
+ <p>The idea of fault injection is fairly simple: it is an
infusion of errors and exceptions into an application's logic to
achieve a higher coverage and fault tolerance of the system.
- Different implementations of this idea are available at this day.
+ Different implementations of this idea are available today.
Hadoop's FI framework is built on top of Aspect-Oriented Programming
(AOP), implemented by the AspectJ toolkit.
</p>
</section>
<section>
<title>Assumptions</title>
- <p>The current implementation of the framework assumes that the faults it
- will be emulating are of non-deterministic nature. i.e. the moment
- of a fault's happening isn't known in advance and is a coin-flip
- based.
+ <p>The current implementation of the FI framework assumes that the faults it
+ will be emulating are of non-deterministic nature. That is, the moment
+ of a fault's happening isn't known in advance and is coin-flip based.
</p>
</section>
+
<section>
<title>Architecture of the Fault Injection Framework</title>
<figure src="images/FI-framework.gif" alt="Components layout" />
+
<section>
- <title>Configuration management</title>
- <p>This piece of the framework allow to
- set expectations for faults to happen. The settings could be applied
- either statically (in advance) or in a runtime. There's two ways to
- configure desired level of faults in the framework:
+ <title>Configuration Management</title>
+ <p>This piece of the FI framework allows you to set expectations for faults to happen.
+ The settings can be applied either statically (in advance) or in runtime.
+ The desired level of faults in the framework can be configured two ways:
</p>
<ul>
<li>
@@ -71,31 +70,31 @@
</li>
</ul>
</section>
+
<section>
- <title>Probability model</title>
- <p>This fundamentally is a coin flipper. The methods of this class are
+ <title>Probability Model</title>
+ <p>This is fundamentally a coin flipper. The methods of this class are
getting a random number between 0.0
- and 1.0 and then checking if new number has happened to be in the
- range of
- 0.0 and a configured level for the fault in question. If that
- condition
- is true then the fault will occur.
+ and 1.0 and then checking whether that number falls between 0.0
+ and the configured level for the fault in question. If that
+ condition is true then the fault will occur.
</p>
- <p>Thus, to guarantee a happening of a fault one needs to set an
+ <p>Thus, to guarantee that a fault happens, one needs to set an
appropriate level to 1.0.
To completely prevent a fault from happening its probability level
- has to be set to 0.0
+ has to be set to 0.0.
</p>
- <p><strong>Nota bene</strong>: default probability level is set to 0
+ <p><strong>Note</strong>: The default probability level is set to 0
(zero) unless the level is changed explicitly through the
configuration file or in the runtime. The name of the default
level's configuration parameter is
<code>fi.*</code>
</p>
</section>
+
<section>
- <title>Fault injection mechanism: AOP and AspectJ</title>
- <p>In the foundation of Hadoop's fault injection framework lays
+ <title>Fault Injection Mechanism: AOP and AspectJ</title>
+ <p>The foundation of Hadoop's FI framework includes a
cross-cutting concept implemented by AspectJ. The following basic
terms are important to remember:
</p>
@@ -122,8 +121,9 @@
</li>
</ul>
</section>
+
<section>
- <title>Existing join points</title>
+ <title>Existing Join Points</title>
<p>
The following readily available join points are provided by AspectJ:
</p>
@@ -154,7 +154,7 @@
</section>
</section>
<section>
- <title>Aspects examples</title>
+ <title>Aspect Example</title>
<source>
package org.apache.hadoop.hdfs.server.datanode;
@@ -191,17 +191,22 @@
}
}
}
- </source>
- <p>
- The aspect has two main parts: the join point
+</source>
+
+ <p>The aspect has two main parts: </p>
+ <ul>
+ <li>The join point
<code>pointcut callReceivepacket()</code>
which serves as an identification mark of a specific point (in control
- and/or data flow) in the life of an application. A call to the advice -
+ and/or data flow) in the life of an application. </li>
+
+ <li> A call to the advice -
<code>before () throws IOException : callReceivepacket()</code>
- - will be
- <a href="#Putting+it+all+together">injected</a>
- before that specific spot of the application's code.
- </p>
+ - will be injected (see
+ <a href="#Putting+it+all+together">Putting It All Together</a>)
+ before that specific spot of the application's code.</li>
+ </ul>
+
<p>The pointcut identifies an invocation of the
<code>java.io.OutputStream write()</code>
@@ -210,8 +215,8 @@
take place within the body of method
<code>receivepacket()</code>
from the class <code>BlockReceiver</code>.
- The method can have any parameters and any return type. possible
- invocations of
+ The method can have any parameters and any return type.
+ Possible invocations of
<code>write()</code>
method happening anywhere within the aspect
<code>BlockReceiverAspects</code>
@@ -222,24 +227,22 @@
class. In such a case the names of the faults have to be different
if a developer wants to trigger them separately.
</p>
- <p><strong>Note 2</strong>: After
- <a href="#Putting+it+all+together">injection step</a>
+ <p><strong>Note 2</strong>: After the injection step (see
+ <a href="#Putting+it+all+together">Putting It All Together</a>)
you can verify that the faults were properly injected by
- searching for
- <code>ajc</code>
- keywords in a disassembled class file.
+ searching for <code>ajc</code> keywords in a disassembled class file.
</p>
</section>
<section>
- <title>Fault naming convention & namespaces</title>
- <p>For the sake of unified naming
+ <title>Fault Naming Convention and Namespaces</title>
+ <p>For the sake of a unified naming
convention, the following two types of names are recommended when
developing new aspects:</p>
<ul>
- <li>Activity specific notation (as
- when we don't care about a particular location of a fault's
+ <li>Activity specific notation
+ (when we don't care about the particular location where a fault
happens). In this case the name of the fault is rather abstract:
<code>fi.hdfs.DiskError</code>
</li>
@@ -251,14 +254,11 @@
</section>
<section>
- <title>Development tools</title>
+ <title>Development Tools</title>
<ul>
- <li>Eclipse
- <a href="http://www.eclipse.org/ajdt/">AspectJ
- Development Toolkit
- </a>
- might help you in the aspects' development
- process.
+ <li>The Eclipse
+ <a href="http://www.eclipse.org/ajdt/">AspectJ Development Toolkit</a>
+ may help you when developing aspects.
</li>
<li>IntelliJ IDEA provides AspectJ weaver and Spring-AOP plugins
</li>
@@ -266,60 +266,67 @@
</section>
<section>
- <title>Putting it all together</title>
- <p>Faults (or aspects) have to injected (or woven) together before
- they can be used. Here's a step-by-step instruction how this can be
- done.</p>
- <p>Weaving aspects in place:</p>
- <source>
+ <title>Putting It All Together</title>
+ <p>Faults (aspects) have to be injected (or woven) in before
+ they can be used. Follow these instructions:</p>
+
+ <ul>
+ <li>To weave aspects in place use:
+<source>
% ant injectfaults
- </source>
- <p>If you
- misidentified the join point of your aspect then you'll see a
- warning similar to this one below when 'injectfaults' target is
- completed:</p>
- <source>
+</source>
+ </li>
+
+ <li>If you
+ misidentified the join point of your aspect you will see a
+ warning (similar to the one shown here) when the 'injectfaults' target is
+ completed:
+<source>
[iajc] warning at
src/test/aop/org/apache/hadoop/hdfs/server/datanode/ \
BlockReceiverAspects.aj:44::0
advice defined in org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects
has not been applied [Xlint:adviceDidNotMatch]
- </source>
- <p>It isn't an error, so the build will report the successful result.
-
- To prepare dev.jar file with all your faults weaved in
- place run (HDFS-475 pending)</p>
- <source>
+</source>
+ </li>
+
+ <li>This is not an error, so the build will still report success. <br />
+ To prepare a dev.jar file with all your faults woven in place (HDFS-475 pending) use:
+<source>
% ant jar-fault-inject
- </source>
+</source>
+ </li>
- <p>Test jars can be created by</p>
- <source>
+ <li>To create test jars use:
+<source>
% ant jar-test-fault-inject
- </source>
+</source>
+ </li>
- <p>To run HDFS tests with faults injected:</p>
- <source>
+ <li>To run HDFS tests with faults injected use:
+<source>
% ant run-test-hdfs-fault-inject
- </source>
+</source>
+ </li>
+ </ul>
+
<section>
- <title>How to use fault injection framework</title>
- <p>Faults could be triggered by the following two meanings:
+ <title>How to Use the Fault Injection Framework</title>
+ <p>Faults can be triggered as follows:
</p>
<ul>
- <li>In the runtime as:
- <source>
+ <li>During runtime:
+<source>
% ant run-test-hdfs -Dfi.hdfs.datanode.BlockReceiver=0.12
- </source>
- To set a certain level, e.g. 25%, of all injected faults one can run
+</source>
+ To set a certain level, for example 25%, of all injected faults use:
<br/>
- <source>
+<source>
% ant run-test-hdfs-fault-inject -Dfi.*=0.25
- </source>
+</source>
</li>
- <li>or from a program as follows:
- </li>
- </ul>
+ <li>From a program:
+
<source>
package org.apache.hadoop.fs;
@@ -354,23 +361,23 @@
//Cleaning up the test environment
}
}
- </source>
+</source>
+ </li>
+ </ul>
+
<p>
- as you can see above these two methods do the same thing. They are
- setting the probability level of
- <code>hdfs.datanode.BlockReceiver</code>
- at 12%.
- The difference, however, is that the program provides more
- flexibility and allows to turn a fault off when a test doesn't need
- it anymore.
+ As you can see above, these two methods do the same thing: they set
+ the probability level of <code>hdfs.datanode.BlockReceiver</code>
+ to 12%. The difference, however, is that the program provides more
+ flexibility and allows you to turn a fault off when a test no longer needs it.
</p>
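Under the <code>fi.*</code> naming convention described earlier, the programmatic route amounts to setting and reading a JVM system property. A minimal sketch of that idea follows; the class name is hypothetical, and this stands in for whatever helper the test framework actually uses:

```java
public class FaultLevelExample {
    public static void main(String[] args) {
        // Programmatic equivalent of passing
        // -Dfi.hdfs.datanode.BlockReceiver=0.12 on the ant command line:
        System.setProperty("fi.hdfs.datanode.BlockReceiver", "0.12");

        // Read the level back, defaulting to 0.0 (fault disabled):
        double level = Double.parseDouble(
            System.getProperty("fi.hdfs.datanode.BlockReceiver", "0.0"));
        System.out.println(level);  // prints 0.12

        // Turn the fault off again once the test no longer needs it:
        System.setProperty("fi.hdfs.datanode.BlockReceiver", "0.0");
    }
}
```

Because the level is an ordinary system property, a test can raise it just before the code path under test and reset it to 0.0 afterwards, which is the flexibility the paragraph above refers to.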
</section>
</section>
<section>
- <title>Additional information and contacts</title>
- <p>This two sources of information seem to be particularly
- interesting and worth further reading:
+ <title>Additional Information and Contacts</title>
+ <p>These two sources of information are particularly
+ interesting and worth reading:
</p>
<ul>
<li>
@@ -381,9 +388,8 @@
<li>AspectJ Cookbook (ISBN-13: 978-0-596-00654-9)
</li>
</ul>
- <p>Should you have any farther comments or questions to the author
- check
- <a href="http://issues.apache.org/jira/browse/HDFS-435">HDFS-435</a>
+ <p>If you have additional comments or questions for the author, check
+ <a href="http://issues.apache.org/jira/browse/HDFS-435">HDFS-435</a>.
</p>
</section>
</body>