Posted to hdfs-user@hadoop.apache.org by Thanh Do <th...@cs.wisc.edu> on 2009/10/08 06:25:19 UTC

How to run Fault injection in HDFS

Hi everyone,

Could anybody show me how to run the fault injection framework mentioned in
the following links?

http://issues.apache.org/jira/browse/HDFS-435

and

https://issues.apache.org/jira/browse/HDFS-436

Thanks,
Thanh

Re: How to run Fault injection in HDFS

Posted by Thanh Do <th...@cs.wisc.edu>.
Thanks for your very useful advice. I am able to play with fault injection
now.

On Thu, Oct 8, 2009 at 11:41 AM, Konstantin Boudnik <co...@yahoo-inc.com>wrote:

> Thanks for looking into fault injection - it's a very interesting and useful
> technique based on AspectJ.
>
> Currently, it is fully integrated into HDFS only. There's a JIRA
> (HADOOP-6204) which tracks the same effort for Common; once that is in, all
> of Hadoop's components will have injection (as well as fault injection) in
> place. This JIRA should be committed within a couple of weeks.
>
> For the immediate purpose you don't need to patch anything or do any
> tweaking of the code: the fault injection framework is already in and ready
> to work.
>
> For your current needs: to be able to run HDFS with instrumented code you
> need to run a special build. To do so:
>  - % ant injectfaults - similar to a 'normal' build, but does instrument
> the code with aspects located under src/test/aop/**
>  - % ant jar-fault-inject - similar to a 'normal' jar creation but
> instrumented
>  - % ant jar-test-fault-inject - similar to a 'normal' jar-test creation
> but instrumented
>
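> For a feel of what gets woven, here is a minimal, made-up sketch of such an
> aspect (it is not one of the aspects actually shipped under src/test/aop;
> the advised method and the probability are arbitrary):
>
>   package org.example.fi;
>
>   import java.io.IOException;
>
>   /** Illustrative fault injection: occasionally fail block reception. */
>   public aspect DataNodeFaultAspect {
>
>     // Join points for an HDFS datanode method; point this at whatever
>     // method you actually want to instrument.
>     pointcut receiveBlockExec() :
>       execution(* org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(..));
>
>     // With 1% probability, throw an IOException before the real method runs.
>     before() throws IOException : receiveBlockExec() {
>       if (Math.random() < 0.01) {
>         throw new IOException("FI: injected failure before receiveBlock()");
>       }
>     }
>   }
>
> The injectfaults build weaves advice like this into the HDFS classes at
> compile time; a regular build leaves the classes untouched.
>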
> Now, if you have the rest of the sub-projects built, you need to move the
> instrumented jar files on top of the 'normal' files in your installation
> directory. Please note that some renaming has to be done: injected jar files
> have a '-fi' suffix in their names and normal jar files don't. Thus, for now,
> you'll have to rename the injected jars so they look like the normal ones and
> get picked up by the configured classpath.
>
> At this point you're all set: you have a production-quality Hadoop with
> injected HDFS. As soon as the aforementioned JIRA is ready and committed
> we'll be able to provide an injected Hadoop version by build means alone,
> rather than through renaming and manual intervention.
>
> Also, if you need to read more about fault injection (FI) in HDFS, you can
> find the FI-framework documentation in the current HDFS trunk (it isn't on
> the web yet because version 0.21 hasn't been released). Because building the
> documentation requires some extra effort and additional software to be
> installed, you can simply download and read the PDF attached to this
> FI-framework JIRA:
>
>
> https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>
> Hope it helps,
>  Cos
>
>
> On 10/8/09 8:10 AM, Thanh Do wrote:
>
>> Thank you so much, Jakob.
>>
>> Could you please explain the fault injection running procedure in detail?
>>
>> My goal is to run HDFS in a cluster (with a namenode and several
>> datanodes) and see how fault injection techniques affect HDFS
>> behavior. Also, I would like to define some new aspects/faults to test
>> the system.
>>
>> What I did was:
>> 1) I checked out the hadoop-common-trunk, but this package doesn't
>> contain HDFS classes. I finally noticed that the FI framework is currently
>> integrated with HDFS only.
>>
>> 2) So, I checked out the hdfs-trunk. The build.xml contains an injectfaults
>> target and several other related things. I was able to build those
>> targets (injectfaults, run-test-hdfs-fault-inject, etc). At that
>> point I got stuck, because I found no scripts to help me start-dfs,
>> stop-dfs, etc.
>> I copied the bin folder from common/core to the HDFS project folder and ran
>> the script:
>>
>> bin/start-dfs.sh
>>
>> but there is an exception:
>>
>> Exception in thread "main" java.lang.NoClassDefFoundError:
>> org/apache/commons/logging/LogFactory
>>
>> I guess the reason is that I ran HDFS without any Common classes. How do I
>> get around this?
>>
>> 3) I also tried a third way: I downloaded the hadoop release (which contains
>> everything: core, hdfs, mapred) and used Eclipse to create a project from
>> existing code. I was able to build this project. The bin scripts worked
>> well, but I found no FI-related classes. What I did was apply the patch
>> (HADOOP-6003.patch) using the Eclipse patch command (Team | Apply Patch),
>> but the patching procedure failed.
>>
>> In summary, I would like to run a real HDFS with fault injection. I am
>> not very familiar with ant. Could you please show me some more details,
>> so that I could get around this?
>>
>> On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan <jhoman@yahoo-inc.com
>> <ma...@yahoo-inc.com>> wrote:
>>
>>    Thanh-
>>    If you would like to run the tests that have been
>>    instrumented to use the fault injection framework, the ant target is
>>    run-test-hdfs-fault-inject.  These were used extensively in the
>>    recent append work and there are quite a few append-related tests.
>>      Was there something more specific you were looking for?
>>
>>    Thanks,
>>    Jakob
>>    Hadoop at Yahoo!
>>
>>
>>    Thanh Do wrote:
>>
>>        Hi everyone,
>>
>>        Could anybody show me how to run the fault injection framework
>>        mentioned in the following links?
>>
>>        http://issues.apache.org/jira/browse/HDFS-435
>>
>>        and
>>
>>        https://issues.apache.org/jira/browse/HDFS-436
>>
>>        Thanks,
>>        Thanh
>>
>>
>>
>>
>>
>>
>> --
>> T
>>
>
> --
> With best regards,
>        Konstantin Boudnik (aka Cos)
>
>        Yahoo! Grid Computing
>        +1 (408) 349-4049
>
> 2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
> Attention! Streams of consciousness are disallowed
>
>


-- 
thanh

Re: Hadoop name origin?

Posted by Tim Robertson <ti...@gmail.com>.
I genuinely find it funny that this is the kind of question my bank
asks of the HDFS list.

I don't know the answer, but I would think a mail to cutting [at) apache
dott org would be the best place to find out.




On Mon, Nov 23, 2009 at 4:39 PM,  <yi...@barclayscapital.com> wrote:
> Right, I got that. The question is that is this just a random name the
> son happened to have given the toy, or like Dumbo, which is an elephant
> character from a well known kids' story.
>
>
> Regards,
> Clayton
> --
> Clayton (Yiqi) Tang, Equity RTB, Global Head of App Mgmt Engineering,
> Barclays Capital
> 212-526-7493; yiqi.tang@barcap.com; 1301-6th Ave, New York, NY 10019
>
>
> -----Original Message-----
> From: David Rosenstrauch [mailto:darose@darose.net]
> Sent: Monday, November 23, 2009 10:11 AM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: Hadoop name origin?
>
> On 11/22/2009 09:58 PM, yiqi.tang@barclayscapital.com wrote:
>> I read that Hadoop is named after a toy elephant. Is Hadoop just what
>> the kid called the stuffed animal, or is Hadoop a well-known story
>> character, like Dumbo?
>
> It's named after the project founder's son's favorite stuffed animal.
>
> DR

Re: Hadoop name origin?

Posted by David Rosenstrauch <da...@darose.net>.
I think it's a name the kid made up.

DR

On 11/23/2009 10:39 AM, yiqi.tang@barclayscapital.com wrote:
> Right, I got that. The question is that is this just a random name the
> son happened to have given the toy, or like Dumbo, which is an elephant
> character from a well known kids' story.
> 
> 
> Regards,
> Clayton
> --
> Clayton (Yiqi) Tang, Equity RTB, Global Head of App Mgmt Engineering,
> Barclays Capital
> 212-526-7493; yiqi.tang@barcap.com; 1301-6th Ave, New York, NY 10019
> 
> 
> -----Original Message-----
> From: David Rosenstrauch [mailto:darose@darose.net] 
> Sent: Monday, November 23, 2009 10:11 AM
> To: hdfs-user@hadoop.apache.org
> Subject: Re: Hadoop name origin?
> 
> On 11/22/2009 09:58 PM, yiqi.tang@barclayscapital.com wrote:
>> I read that Hadoop is named after a toy elephant. Is Hadoop just what 
>> the kid called the stuffed animal, or is Hadoop a well-known story 
>> character, like Dumbo?
> 
> It's named after the project founder's son's favorite stuffed animal.
> 
> DR


RE: Hadoop name origin?

Posted by yi...@barclayscapital.com.
Right, I got that. The question is whether this is just a random name the
son happened to give the toy, or whether, like Dumbo, it is an elephant
character from a well-known kids' story.


Regards,
Clayton
--
Clayton (Yiqi) Tang, Equity RTB, Global Head of App Mgmt Engineering,
Barclays Capital
212-526-7493; yiqi.tang@barcap.com; 1301-6th Ave, New York, NY 10019


-----Original Message-----
From: David Rosenstrauch [mailto:darose@darose.net] 
Sent: Monday, November 23, 2009 10:11 AM
To: hdfs-user@hadoop.apache.org
Subject: Re: Hadoop name origin?

On 11/22/2009 09:58 PM, yiqi.tang@barclayscapital.com wrote:
> I read that Hadoop is named after a toy elephant. Is Hadoop just what 
> the kid called the stuffed animal, or is Hadoop a well-known story 
> character, like Dumbo?

It's named after the project founder's son's favorite stuffed animal.

DR

Re: Hadoop name origin?

Posted by David Rosenstrauch <da...@darose.net>.
On 11/22/2009 09:58 PM, yiqi.tang@barclayscapital.com wrote:
> I read that Hadoop is named after a toy elephant. Is Hadoop just what
> the kid called the stuffed animal, or is Hadoop a well-known story
> character, like Dumbo? 

It's named after the project founder's son's favorite stuffed animal.

DR

Hadoop name origin?

Posted by yi...@barclayscapital.com.
I read that Hadoop is named after a toy elephant. Is Hadoop just what
the kid called the stuffed animal, or is Hadoop a well-known story
character, like Dumbo? 

Re: How to run Fault injection in HDFS

Posted by Konstantin Boudnik <co...@yahoo-inc.com>.
Oh, I see. Right, the injection framework was introduced in early 0.21 and
has never been backported to 0.20.

On 11/23/09 19:38 , Thanh Do wrote:
> The reason I changed the /build.xml/ is that /build.xml/ in the
> hadoop-common trunk release (0.20.1) does not contain /injectfaults/
> target ( I wanna use AspectJ in the hadoop release that contains both
> hdfs and mapred). I just add following two targets.
>
> <target name="compile-fault-inject" depends="compile-hdfs-classes">
> <!-- AspectJ task definition -->
> <taskdef
>
> resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties">
> <classpath>
> <pathelement location="${common.ivy.lib.dir}/aspectjtools-1.6.4.jar"/>
> </classpath>
> </taskdef>
> <echo message="Start weaving aspects in place"/>
> <iajc
>            encoding="${build.encoding}"
>            srcdir="${hdfs.src.dir};${build.src}"
>            includes="org/apache/hadoop/**/*.java,
> */org/apache/hadoop/myaspect/**/*.aj/*"
>            destDir="${build.classes}"
>            debug="${javac.debug}"
>            target="${javac.version}"
>            source="${javac.version}"
>            deprecation="${javac.deprecation}">
> <classpath refid="test.classpath"/>
> </iajc>
> <echo message="Weaving of aspects is finished"/>
> </target>
>
> <target name="injectfaults" description="Instrument HDFS classes with
> faults and other AOP advices">
> <subant buildpath="${basedir}" target="compile-fault-inject">
> <property name="build.dir" value="${build.dir}"/>
> </subant>
> </target>
>
> So that, when I want to weave my aspect, I only type:
>
> /ant injectfaults/
>
>
> On Fri, Nov 20, 2009 at 3:21 PM, Konstantin Boudnik <cos@yahoo-inc.com
> <ma...@yahoo-inc.com>> wrote:
>
>     Generally the idea was to provide everything needed for injection by
>     what current build.xml is having in Common and Hdfs. Would you mind
>     to share what extra changes you've needed and why?
>
>     Cos
>
>
>     On 11/20/09 12:32 , Thanh Do wrote:
>
>         Thank you folks!
>
>         Finally, I am able (really) to run FI with HADOOP. I added some
>         aspects
>         into the source code, changed the build.xml, and that's it.
>
>         AspectJ is awesome!
>
>         Have a nice weekend!
>
>         On Fri, Nov 20, 2009 at 1:08 PM, Konstantin Boudnik
>         <cos@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>>> wrote:
>
>             Hi Thanh.
>
>             hmm, it sounds like you have some issue with compilation of
>         your code.
>
>             addDeprecation() has been added to Configuration in 0.21, I
>             believe. And it is there no matter how you compile your code
>             (with FI or without).
>
>             Cos
>
>
>             On 11/19/09 10:12 , Thanh Do wrote:
>
>                 Sorry to dig this thread again!
>
>                 I am expecting the release of 0.21 so that I don't have to
>                 manually play
>                 around with AspectJ FI any more.
>
>                 I still have problem with running HDFS with instrumented
>         code
>                 (with aspect).
>
>                 Here is what I did:
>
>                 In the root directory of HDFS:
>                 /$ ant injectfaults
>
>                 $ ant jar-fault-inject
>                 /At this point, i have a jar file containing hdfs
>         classed, namely,
>                 /hadoop-hdfs-0.22.0-dev-fi.jar/, located in /build-fi/
>         folder.
>
>                 Now I go to the HADOOP folder (which contains running
>         script in bin
>                 directory), and do the following
>                 /$ ant compile-core-classes/
>                 ( now I need additional hdfs classes to be able to run
>                 /start-dfs.sh/ <http://start-dfs.sh/>
>         <http://start-dfs.sh/>,
>
>                 right)
>                 What I did is copying
>                 /$HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar /to
>                 /$HADOOP/hadoop-hdfs-fi-core.jar/ (I need to add suffix
>         "core"
>                 since the
>                 script will include all hadoop-*-core.jar in classpath)
>
>                 /$ bin/start-dfs.sh/ <http://start-dfs.sh/>
>         <http://start-dfs.sh/>
>
>                 and got error message:
>
>                 2009-11-19 11:52:57,479 ERROR
>                 org.apache.hadoop.hdfs.server.namenode.NameNode:
>                 java.lang.NoSuchMethodError:
>
>           org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>                          at
>
>           org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>                          at
>
>           org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>                          at
>
>           org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>                          at
>
>           org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>                          at
>
>           org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>
>                 2009-11-19 11:52:57,480 INFO
>                 org.apache.hadoop.hdfs.server.namenode.NameNode:
>         SHUTDOWN_MSG:
>
>                 Could any one tell me how to solve this problem?
>
>                 Thank you so much.
>
>
>                 On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik
>         <cos@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>>
>         <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>>>> wrote:
>
>                     Thanks for looking into fault injection - it's very
>                 interesting and
>                     useful technique based on AspectJ.
>
>                     Currently, it is fully integrated into HDFS only.
>         There's a JIRA
>                     (HADOOP-6204) which tracks the same effort for
>         Common and
>                 then all
>                     Hadoop's components will have injection (as well as
>         fault
>                 injection)
>                     in place. This JIRA should be committed in the
>         matter of a
>                 couple of
>                     weeks.
>
>                     For the immediate purpose you don't need to patch
>         anything
>                 or do any
>                     tweaking of the code: the fault injection framework
>         is in
>                 already
>                     and ready to work.
>
>                     For your current needs: to be able to run HDFS with
>         instrumented
>                     code you need to run a special build. To do so:
>                       - % ant injectfaults - similar to a 'normal'
>         build, but does
>                     instrument the code with aspects located under
>         src/test/aop/**
>                       - % ant jar-fault-inject - similar to a 'normal' jar
>                 creation but
>                     instrumented
>                       - % ant jar-test-fault-inject - similar to a
>         'normal' jar-test
>                     creation but instrumented
>
>                     Now, if you have the rest of sub-projects built you
>         need to
>                 move the
>                     instrumented jar files on top of the 'normal' files
>         in your
>                     installation directory. Please note that some
>         renaming has to be
>                     done: injected jar files have '-fi' suffix in their
>         names
>                 and normal
>                     jar files don't have such. Thus currently you'll
>         have to rename
>                     those injected jars to pretend like they are normal,
>         used by
>                     configured's classpath.
>
>                     At this point you all set: you have a production quality
>                 Hadoop with
>                     injected HDFS. As soon as the aforementioned JIRA is
>         ready and
>                     committed we'd be able to provide Hadoop-injected
>         version by the
>                     build's means rather than doing any renaming and manual
>                 intervention.
>
>                     Also, if you need to read more about fault injection
>         (FI) in
>                 HDFS
>                     you can find FI-framework documentation in the
>         current HDFS
>                 trunk
>                     (it isn't on the web yet for version 0.21 hasn't been
>                 released yet).
>                     Because building documentation requires some extra
>         effort and
>                     additional software to be installed, you can simply
>         download and
>                     read the PDF from this FI-framework JIRA
>
>         https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>
>                     Hope it helps,
>                       Cos
>
>
>                     On 10/8/09 8:10 AM, Thanh Do wrote:
>
>                         Thank you so much, Jakob.
>
>                         Could you please explain the fault injection running
>                 procedure
>                         in details?
>
>                         My goal is running HDFS in a cluster (with a
>         namenode
>                 and several
>                         datanode), and see how fault injection
>         techniques affect
>                 HDFS
>                         behavior's. Also, I would like to define some new
>                 aspects/fault
>                         to test
>                         the system.
>
>                         What I did was:
>                         1) I checked out the hadoop-common-trunk, but this
>                 package doesn't
>                         contain HDFS classes. I finally noticed that FI
>         framework is
>                         currently
>                         integrated with HDFS only.
>
>                         2) So, I checked out the hdfs-trunk. The
>         build.xml contain
>                         injectfaults
>                         target and several other related things. I was
>         able to
>                 build those
>                         targets (injectfaults,
>         run-test-hdfs-fault-inject, etc).
>                 Up to this
>                         point, I stucked because I found no scripted
>         that help me to
>                         start-dfs,
>                         stop-dfs...
>                         I copied the bin folder from common/core to HDFS
>         project
>                 folder
>                         and ran
>                         the  script:
>
>                         /bin/start-dfs.sh/ <http://start-dfs.sh/>
>         <http://start-dfs.sh/>
>         <http://start-dfs.sh/>
>
>
>                         but there is exception:
>
>                         /Exception in thread
>                         main"Java.lang.NoClassDefFoundError
>                         : org/apache/commons/logging/LogFactory
>                         /
>                         I guess the reason is I ran HDFS without any common
>                 class. How I get
>                         around this?
>
>                         3) I also tried the third way, by download the
>         hadoop
>                 release
>                         (contain
>                         everything: core, hdfs, mapred), and used
>         Eclipse to create
>                         project from
>                         existing code. I was able to build this project.
>         The bin
>                 scripts
>                         worked
>                         well but I found know FI related classes. What I
>         did was
>                 apply
>                         the patch
>                         (HADOOP-6003.patch) using Eclipse patch command
>         (Team |
>                 apply
>                         patch),
>                         but I failed the patching procedure.
>
>                         In summary, I would like to run a real HDFS with
>         fault
>                         injection. I am
>                         not very familiar with ant. Could you please show me
>                 some more
>                         details,
>                         so that I could get around this?
>
>                         On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan
>         <jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>>>>
>         wrote:
>
>                             Thanh-
>                             If you would like the run execute the tests that
>                 have been
>                             instrumented to use the fault injection
>         framework
>                 the ant
>                         target is
>                             run-test-hdfs-fault-inject.  These were used
>                 extensively in the
>                             recent append work and there are quite a few
>                 append-related
>                         tests.
>                               Was there something more specific you were
>         looking
>                 for?
>
>                             Thanks,
>                             Jakob
>                             Hadoop at Yahoo!
>
>
>                             Thanh Do wrote:
>
>                                 Hi everyone,
>
>                                 Could any body so me how to run the
>         fault injection
>                         framework
>                                 mentioned in the following links?:
>
>         http://issues.apache.org/jira/browse/HDFS-435
>
>                                 and
>
>         https://issues.apache.org/jira/browse/HDFS-436
>
>                                 Thanks,
>                                 Thanh
>
>
>
>
>
>
>                         --
>                         T
>
>
>                     --
>                     With best regards,
>                             Konstantin Boudnik (aka Cos)
>
>                             Yahoo! Grid Computing
>                             +1 (408) 349-4049
>
>                     2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
>                     Attention! Streams of consciousness are disallowed
>
>
>
>
>                 --
>                 thanh
>
>
>
>
>         --
>         thanh
>
>
>
>
> --
> thanh

Re: How to run Fault injection in HDFS

Posted by Thanh Do <th...@cs.wisc.edu>.
The reason I changed the *build.xml* is that the *build.xml* in the
hadoop-common trunk release (0.20.1) does not contain an *injectfaults* target
(I want to use AspectJ in the hadoop release that contains both hdfs and
mapred). I just added the following two targets.

      <target name="compile-fault-inject" depends="compile-hdfs-classes">
        <!-- AspectJ task definition -->
        <taskdef

resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties">
          <classpath>
            <pathelement
location="${common.ivy.lib.dir}/aspectjtools-1.6.4.jar"/>
          </classpath>
        </taskdef>
        <echo message="Start weaving aspects in place"/>
        <iajc
          encoding="${build.encoding}"
          srcdir="${hdfs.src.dir};${build.src}"
          includes="org/apache/hadoop/**/*.java,
                    org/apache/hadoop/myaspect/**/*.aj"
          destDir="${build.classes}"
          debug="${javac.debug}"
          target="${javac.version}"
          source="${javac.version}"
          deprecation="${javac.deprecation}">
          <classpath refid="test.classpath"/>
        </iajc>
        <echo message="Weaving of aspects is finished"/>
      </target>

      <target name="injectfaults" description="Instrument HDFS classes with
faults and other AOP advices">
        <subant buildpath="${basedir}" target="compile-fault-inject">
          <property name="build.dir" value="${build.dir}"/>
        </subant>
      </target>

That way, when I want to weave my aspects, I only type:

*ant injectfaults*
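
For reference, a minimal example of the kind of .aj file such a target would
pick up under org/apache/hadoop/myaspect (everything below is illustrative:
it is not one of the aspects I actually use, and the advised class is just an
example):

    package org.apache.hadoop.myaspect;

    /** Illustrative tracing advice: log entry into DataNode methods. */
    public aspect TraceAspect {

      // Match execution of any public DataNode method; narrow this down to
      // the operations you care about.
      pointcut dataNodeOps() :
        execution(public * org.apache.hadoop.hdfs.server.datanode.DataNode.*(..));

      before() : dataNodeOps() {
        System.err.println("AOP trace: entering "
            + thisJoinPointStaticPart.getSignature().toLongString());
      }
    }

Dropping a file like this under the includes path above and running ant
injectfaults weaves it into the built classes.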


On Fri, Nov 20, 2009 at 3:21 PM, Konstantin Boudnik <co...@yahoo-inc.com>wrote:

> Generally the idea was to provide everything needed for injection by what
> current build.xml is having in Common and Hdfs. Would you mind to share what
> extra changes you've needed and why?
>
> Cos
>
>
> On 11/20/09 12:32 , Thanh Do wrote:
>
>> Thank you folks!
>>
>> Finally, I am able (really) to run FI with HADOOP. I added some aspects
>> into the source code, changed the build.xml, and that's it.
>>
>> AspectJ is awesome!
>>
>> Have a nice weekend!
>>
>> On Fri, Nov 20, 2009 at 1:08 PM, Konstantin Boudnik <cos@yahoo-inc.com
>> <ma...@yahoo-inc.com>> wrote:
>>
>>    Hi Thanh.
>>
>>    hmm, it sounds like you have some issue with compilation of your code.
>>
>>    addDeprecation() has been added to Configuration in 0.21, I believe.
>>    And it is there no matter how you compile your code (with FI or
>>    without).
>>
>>    Cos
>>
>>
>>    On 11/19/09 10:12 , Thanh Do wrote:
>>
>>        Sorry to dig this thread again!
>>
>>        I am expecting the release of 0.21 so that I don't have to
>>        manually play
>>        around with AspectJ FI any more.
>>
>>        I still have problem with running HDFS with instrumented code
>>        (with aspect).
>>
>>        Here is what I did:
>>
>>        In the root directory of HDFS:
>>        /$ ant injectfaults
>>
>>        $ ant jar-fault-inject
>>        /At this point, i have a jar file containing hdfs classed, namely,
>>        /hadoop-hdfs-0.22.0-dev-fi.jar/, located in /build-fi/ folder.
>>
>>        Now I go to the HADOOP folder (which contains running script in bin
>>        directory), and do the following
>>        /$ ant compile-core-classes/
>>        ( now I need additional hdfs classes to be able to run
>>        /start-dfs.sh/ <http://start-dfs.sh/>,
>>
>>        right)
>>        What I did is copying
>>        /$HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar /to
>>        /$HADOOP/hadoop-hdfs-fi-core.jar/ (I need to add suffix "core"
>>        since the
>>        script will include all hadoop-*-core.jar in classpath)
>>
>>        /$ bin/start-dfs.sh/ <http://start-dfs.sh/>
>>
>>        and got error message:
>>
>>        2009-11-19 11:52:57,479 ERROR
>>        org.apache.hadoop.hdfs.server.namenode.NameNode:
>>        java.lang.NoSuchMethodError:
>>
>>  org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>>                 at
>>
>>  org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>>                 at
>>
>>  org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>>                 at
>>
>>  org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>>                 at
>>
>>  org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>>                 at
>>
>>  org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>>
>>        2009-11-19 11:52:57,480 INFO
>>        org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>
>>        Could any one tell me how to solve this problem?
>>
>>        Thank you so much.
>>
>>
>>        On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik
>>        <cos@yahoo-inc.com <ma...@yahoo-inc.com>
>>        <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>>> wrote:
>>
>>            Thanks for looking into fault injection - it's very
>>        interesting and
>>            useful technique based on AspectJ.
>>
>>            Currently, it is fully integrated into HDFS only. There's a
>> JIRA
>>            (HADOOP-6204) which tracks the same effort for Common and
>>        then all
>>            Hadoop's components will have injection (as well as fault
>>        injection)
>>            in place. This JIRA should be committed in the matter of a
>>        couple of
>>            weeks.
>>
>>            For the immediate purpose you don't need to patch anything
>>        or do any
>>            tweaking of the code: the fault injection framework is in
>>        already
>>            and ready to work.
>>
>>            For your current needs: to be able to run HDFS with
>> instrumented
>>            code you need to run a special build. To do so:
>>              - % ant injectfaults - similar to a 'normal' build, but does
>>            instrument the code with aspects located under src/test/aop/**
>>              - % ant jar-fault-inject - similar to a 'normal' jar
>>        creation but
>>            instrumented
>>              - % ant jar-test-fault-inject - similar to a 'normal'
>> jar-test
>>            creation but instrumented
>>
>>            Now, if you have the rest of sub-projects built you need to
>>        move the
>>            instrumented jar files on top of the 'normal' files in your
>>            installation directory. Please note that some renaming has to
>> be
>>            done: injected jar files have '-fi' suffix in their names
>>        and normal
>>            jar files don't have such. Thus currently you'll have to rename
>>            those injected jars to pretend like they are normal, used by
>>            configured's classpath.
>>
>>            At this point you all set: you have a production quality
>>        Hadoop with
>>            injected HDFS. As soon as the aforementioned JIRA is ready and
>>            committed we'd be able to provide Hadoop-injected version by
>> the
>>            build's means rather than doing any renaming and manual
>>        intervention.
>>
>>            Also, if you need to read more about fault injection (FI) in
>>        HDFS
>>            you can find FI-framework documentation in the current HDFS
>>        trunk
>>            (it isn't on the web yet for version 0.21 hasn't been
>>        released yet).
>>            Because building documentation requires some extra effort and
>>            additional software to be installed, you can simply download
>> and
>>            read the PDF from this FI-framework JIRA
>>
>>
>> https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>>
>>            Hope it helps,
>>              Cos
>>
>>
>>            On 10/8/09 8:10 AM, Thanh Do wrote:
>>
>>                Thank you so much, Jakob.
>>
>>                Could you please explain the fault injection running
>>        procedure
>>                in details?
>>
>>                My goal is running HDFS in a cluster (with a namenode
>>        and several
>>                datanode), and see how fault injection techniques affect
>>        HDFS
>>                behavior's. Also, I would like to define some new
>>        aspects/fault
>>                to test
>>                the system.
>>
>>                What I did was:
>>                1) I checked out the hadoop-common-trunk, but this
>>        package doesn't
>>                contain HDFS classes. I finally noticed that FI framework
>> is
>>                currently
>>                integrated with HDFS only.
>>
>>                2) So, I checked out the hdfs-trunk. The build.xml contain
>>                injectfaults
>>                target and several other related things. I was able to
>>        build those
>>                targets (injectfaults, run-test-hdfs-fault-inject, etc).
>>        Up to this
>>                point, I stucked because I found no scripted that help me
>> to
>>                start-dfs,
>>                stop-dfs...
>>                I copied the bin folder from common/core to HDFS project
>>        folder
>>                and ran
>>                the  script:
>>
>>                /bin/start-dfs.sh/ <http://start-dfs.sh/>
>>        <http://start-dfs.sh/>
>>
>>
>>                but there is exception:
>>
>>                /Exception in thread
>>                main"Java.lang.NoClassDefFoundError
>>                : org/apache/commons/logging/LogFactory
>>                /
>>                I guess the reason is I ran HDFS without any common
>>        class. How I get
>>                around this?
>>
>>                3) I also tried the third way, by download the hadoop
>>        release
>>                (contain
>>                everything: core, hdfs, mapred), and used Eclipse to create
>>                project from
>>                existing code. I was able to build this project. The bin
>>        scripts
>>                worked
>>                well but I found know FI related classes. What I did was
>>        apply
>>                the patch
>>                (HADOOP-6003.patch) using Eclipse patch command (Team |
>>        apply
>>                patch),
>>                but I failed the patching procedure.
>>
>>                In summary, I would like to run a real HDFS with fault
>>                injection. I am
>>                not very familiar with ant. Could you please show me
>>        some more
>>                details,
>>                so that I could get around this?
>>
>>                On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan
>>        <jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>>        <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>
>>        <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>>        <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>>>
>> wrote:
>>
>>                    Thanh-
>>                    If you would like the run execute the tests that
>>        have been
>>                    instrumented to use the fault injection framework
>>        the ant
>>                target is
>>                    run-test-hdfs-fault-inject.  These were used
>>        extensively in the
>>                    recent append work and there are quite a few
>>        append-related
>>                tests.
>>                      Was there something more specific you were looking
>>        for?
>>
>>                    Thanks,
>>                    Jakob
>>                    Hadoop at Yahoo!
>>
>>
>>                    Thanh Do wrote:
>>
>>                        Hi everyone,
>>
>>                        Could any body so me how to run the fault injection
>>                framework
>>                        mentioned in the following links?:
>>
>>        http://issues.apache.org/jira/browse/HDFS-435
>>
>>                        and
>>
>>        https://issues.apache.org/jira/browse/HDFS-436
>>
>>                        Thanks,
>>                        Thanh
>>
>>
>>
>>
>>
>>
>>                --
>>                T
>>
>>
>>            --
>>            With best regards,
>>                    Konstantin Boudnik (aka Cos)
>>
>>                    Yahoo! Grid Computing
>>                    +1 (408) 349-4049
>>
>>            2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
>>            Attention! Streams of consciousness are disallowed
>>
>>
>>
>>
>>        --
>>        thanh
>>
>>
>>
>>
>> --
>> thanh
>>
>


-- 
thanh

Re: How to run Fault injection in HDFS

Posted by Konstantin Boudnik <co...@yahoo-inc.com>.
Generally the idea was to provide everything needed for injection through what
the current build.xml has in Common and Hdfs. Would you mind sharing what
extra changes you've needed and why?

Cos

On 11/20/09 12:32 , Thanh Do wrote:
> Thank you folks!
>
> Finally, I am able (really) to run FI with HADOOP. I added some aspects
> into the source code, changed the build.xml, and that's it.
>
> AspectJ is awesome!
>
> Have a nice weekend!
>
> On Fri, Nov 20, 2009 at 1:08 PM, Konstantin Boudnik <cos@yahoo-inc.com
> <ma...@yahoo-inc.com>> wrote:
>
>     Hi Thanh.
>
>     hmm, it sounds like you have some issue with compilation of your code.
>
>     addDeprecation() has been added to Configuration in 0.21, I believe.
>     And it is there no matter how you compile your code (with FI or
>     without).
>
>     Cos
>
>
>     On 11/19/09 10:12 , Thanh Do wrote:
>
>         Sorry to dig this thread again!
>
>         I am expecting the release of 0.21 so that I don't have to
>         manually play
>         around with AspectJ FI any more.
>
>         I still have problem with running HDFS with instrumented code
>         (with aspect).
>
>         Here is what I did:
>
>         In the root directory of HDFS:
>         /$ ant injectfaults
>
>         $ ant jar-fault-inject
>         /At this point, i have a jar file containing hdfs classed, namely,
>         /hadoop-hdfs-0.22.0-dev-fi.jar/, located in /build-fi/ folder.
>
>         Now I go to the HADOOP folder (which contains running script in bin
>         directory), and do the following
>         /$ ant compile-core-classes/
>         ( now I need additional hdfs classes to be able to run
>         /start-dfs.sh/ <http://start-dfs.sh/>,
>         right)
>         What I did is copying
>         /$HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar /to
>         /$HADOOP/hadoop-hdfs-fi-core.jar/ (I need to add suffix "core"
>         since the
>         script will include all hadoop-*-core.jar in classpath)
>
>         /$ bin/start-dfs.sh/ <http://start-dfs.sh/>
>         and got error message:
>
>         2009-11-19 11:52:57,479 ERROR
>         org.apache.hadoop.hdfs.server.namenode.NameNode:
>         java.lang.NoSuchMethodError:
>         org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>                  at
>         org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>                  at
>         org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>                  at
>         org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>                  at
>         org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>                  at
>         org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>
>         2009-11-19 11:52:57,480 INFO
>         org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>
>         Could any one tell me how to solve this problem?
>
>         Thank you so much.
>
>
>         On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik
>         <cos@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:cos@yahoo-inc.com <ma...@yahoo-inc.com>>> wrote:
>
>             Thanks for looking into fault injection - it's very
>         interesting and
>             useful technique based on AspectJ.
>
>             Currently, it is fully integrated into HDFS only. There's a JIRA
>             (HADOOP-6204) which tracks the same effort for Common and
>         then all
>             Hadoop's components will have injection (as well as fault
>         injection)
>             in place. This JIRA should be committed in the matter of a
>         couple of
>             weeks.
>
>             For the immediate purpose you don't need to patch anything
>         or do any
>             tweaking of the code: the fault injection framework is in
>         already
>             and ready to work.
>
>             For your current needs: to be able to run HDFS with instrumented
>             code you need to run a special build. To do so:
>               - % ant injectfaults - similar to a 'normal' build, but does
>             instrument the code with aspects located under src/test/aop/**
>               - % ant jar-fault-inject - similar to a 'normal' jar
>         creation but
>             instrumented
>               - % ant jar-test-fault-inject - similar to a 'normal' jar-test
>             creation but instrumented
>
>             Now, if you have the rest of sub-projects built you need to
>         move the
>             instrumented jar files on top of the 'normal' files in your
>             installation directory. Please note that some renaming has to be
>             done: injected jar files have '-fi' suffix in their names
>         and normal
>             jar files don't have such. Thus currently you'll have to rename
>             those injected jars to pretend like they are normal, used by
>             configured's classpath.
>
>             At this point you all set: you have a production quality
>         Hadoop with
>             injected HDFS. As soon as the aforementioned JIRA is ready and
>             committed we'd be able to provide Hadoop-injected version by the
>             build's means rather than doing any renaming and manual
>         intervention.
>
>             Also, if you need to read more about fault injection (FI) in
>         HDFS
>             you can find FI-framework documentation in the current HDFS
>         trunk
>             (it isn't on the web yet for version 0.21 hasn't been
>         released yet).
>             Because building documentation requires some extra effort and
>             additional software to be installed, you can simply download and
>             read the PDF from this FI-framework JIRA
>
>         https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>
>             Hope it helps,
>               Cos
>
>
>             On 10/8/09 8:10 AM, Thanh Do wrote:
>
>                 Thank you so much, Jakob.
>
>                 Could you please explain the fault injection running
>         procedure
>                 in details?
>
>                 My goal is running HDFS in a cluster (with a namenode
>         and several
>                 datanode), and see how fault injection techniques affect
>         HDFS
>                 behavior's. Also, I would like to define some new
>         aspects/fault
>                 to test
>                 the system.
>
>                 What I did was:
>                 1) I checked out the hadoop-common-trunk, but this
>         package doesn't
>                 contain HDFS classes. I finally noticed that FI framework is
>                 currently
>                 integrated with HDFS only.
>
>                 2) So, I checked out the hdfs-trunk. The build.xml contain
>                 injectfaults
>                 target and several other related things. I was able to
>         build those
>                 targets (injectfaults, run-test-hdfs-fault-inject, etc).
>         Up to this
>                 point, I stucked because I found no scripted that help me to
>                 start-dfs,
>                 stop-dfs...
>                 I copied the bin folder from common/core to HDFS project
>         folder
>                 and ran
>                 the  script:
>
>                 /bin/start-dfs.sh/ <http://start-dfs.sh/>
>         <http://start-dfs.sh/>
>
>
>                 but there is exception:
>
>                 /Exception in thread
>                 main"Java.lang.NoClassDefFoundError
>                 : org/apache/commons/logging/LogFactory
>                 /
>                 I guess the reason is I ran HDFS without any common
>         class. How I get
>                 around this?
>
>                 3) I also tried the third way, by download the hadoop
>         release
>                 (contain
>                 everything: core, hdfs, mapred), and used Eclipse to create
>                 project from
>                 existing code. I was able to build this project. The bin
>         scripts
>                 worked
>                 well but I found know FI related classes. What I did was
>         apply
>                 the patch
>                 (HADOOP-6003.patch) using Eclipse patch command (Team |
>         apply
>                 patch),
>                 but I failed the patching procedure.
>
>                 In summary, I would like to run a real HDFS with fault
>                 injection. I am
>                 not very familiar with ant. Could you please show me
>         some more
>                 details,
>                 so that I could get around this?
>
>                 On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan
>         <jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>>> wrote:
>
>                     Thanh-
>                     If you would like the run execute the tests that
>         have been
>                     instrumented to use the fault injection framework
>         the ant
>                 target is
>                     run-test-hdfs-fault-inject.  These were used
>         extensively in the
>                     recent append work and there are quite a few
>         append-related
>                 tests.
>                       Was there something more specific you were looking
>         for?
>
>                     Thanks,
>                     Jakob
>                     Hadoop at Yahoo!
>
>
>                     Thanh Do wrote:
>
>                         Hi everyone,
>
>                         Could any body so me how to run the fault injection
>                 framework
>                         mentioned in the following links?:
>
>         http://issues.apache.org/jira/browse/HDFS-435
>
>                         and
>
>         https://issues.apache.org/jira/browse/HDFS-436
>
>                         Thanks,
>                         Thanh
>
>
>
>
>
>
>                 --
>                 T
>
>
>             --
>             With best regards,
>                     Konstantin Boudnik (aka Cos)
>
>                     Yahoo! Grid Computing
>                     +1 (408) 349-4049
>
>             2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
>             Attention! Streams of consciousness are disallowed
>
>
>
>
>         --
>         thanh
>
>
>
>
> --
> thanh

Re: How to run Fault injection in HDFS

Posted by Thanh Do <th...@cs.wisc.edu>.
Thank you folks!

Finally, I am able (really) to run FI with HADOOP. I added some aspects into
the source code, changed the build.xml, and that's it.

AspectJ is awesome!

Have a nice weekend!

On Fri, Nov 20, 2009 at 1:08 PM, Konstantin Boudnik <co...@yahoo-inc.com>wrote:

> Hi Thanh.
>
> hmm, it sounds like you have some issue with compilation of your code.
>
> addDeprecation() has been added to Configuration in 0.21, I believe. And it
> is there no matter how you compile your code (with FI or without).
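>
> To make the mismatch concrete: the stack trace shows HdfsConfiguration's
> static initializer ending up in Configuration.addDeprecation(String,
> String[]), a method a 0.20 common/core jar simply doesn't have, so class
> initialization dies with NoSuchMethodError. A rough sketch of the pattern
> (the key names below are made up, not the real deprecation table):
>
>   import org.apache.hadoop.conf.Configuration;
>
>   public class MyHdfsConfiguration extends Configuration {
>     static {
>       // Compiles against 0.21+, but fails at class-load time with
>       // NoSuchMethodError when an older Configuration is on the classpath,
>       // because addDeprecation(String, String[]) was only added in 0.21.
>       Configuration.addDeprecation("dfs.example.old.key",
>           new String[] { "dfs.example.new.key" });
>     }
>   }
>
> Which suggests the instrumented hdfs jar needs a matching (0.21+) common jar
> on the classpath rather than the 0.20 one.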
>
> Cos
>
>
> On 11/19/09 10:12 , Thanh Do wrote:
>
>> Sorry to dig this thread again!
>>
>> I am expecting the release of 0.21 so that I don't have to manually play
>> around with AspectJ FI any more.
>>
>> I still have a problem with running HDFS with instrumented code (with
>> aspects).
>>
>> Here is what I did:
>>
>> In the root directory of HDFS:
>> $ ant injectfaults
>>
>> $ ant jar-fault-inject
>>
>> At this point, I have a jar file containing the hdfs classes, namely
>> hadoop-hdfs-0.22.0-dev-fi.jar, located in the build-fi folder.
>>
>> Now I go to the HADOOP folder (which contains the run scripts in the bin
>> directory), and do the following:
>> $ ant compile-core-classes
>> (now I need additional hdfs classes to be able to run start-dfs.sh,
>> right?)
>> What I did was copy $HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar to
>> $HADOOP/hadoop-hdfs-fi-core.jar (I need to add the suffix "core" since the
>> script will include all hadoop-*-core.jar files in the classpath).
>>
>> $ bin/start-dfs.sh
>>
>> and got this error message:
>>
>> 2009-11-19 11:52:57,479 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> java.lang.NoSuchMethodError:
>>
>> org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>>         at
>>
>> org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>>         at
>>
>> org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>>         at
>>
>> org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>>         at
>>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>>
>> 2009-11-19 11:52:57,480 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>
>> Could anyone tell me how to solve this problem?
>>
>> Thank you so much.
>>
>>
>> On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik <cos@yahoo-inc.com
>> <ma...@yahoo-inc.com>> wrote:
>>
>>    Thanks for looking into fault injection - it's very interesting and
>>    useful technique based on AspectJ.
>>
>>    Currently, it is fully integrated into HDFS only. There's a JIRA
>>    (HADOOP-6204) which tracks the same effort for Common and then all
>>    Hadoop's components will have injection (as well as fault injection)
>>    in place. This JIRA should be committed in the matter of a couple of
>>    weeks.
>>
>>    For the immediate purpose you don't need to patch anything or do any
>>    tweaking of the code: the fault injection framework is in already
>>    and ready to work.
>>
>>    For your current needs: to be able to run HDFS with instrumented
>>    code you need to run a special build. To do so:
>>      - % ant injectfaults - similar to a 'normal' build, but does
>>    instrument the code with aspects located under src/test/aop/**
>>      - % ant jar-fault-inject - similar to a 'normal' jar creation but
>>    instrumented
>>      - % ant jar-test-fault-inject - similar to a 'normal' jar-test
>>    creation but instrumented
>>
>>    Now, if you have the rest of sub-projects built you need to move the
>>    instrumented jar files on top of the 'normal' files in your
>>    installation directory. Please note that some renaming has to be
>>    done: injected jar files have '-fi' suffix in their names and normal
>>    jar files don't have such. Thus currently you'll have to rename
>>    those injected jars to pretend like they are normal, used by
>>    configured's classpath.
>>
>>    At this point you all set: you have a production quality Hadoop with
>>    injected HDFS. As soon as the aforementioned JIRA is ready and
>>    committed we'd be able to provide Hadoop-injected version by the
>>    build's means rather than doing any renaming and manual intervention.
>>
>>    Also, if you need to read more about fault injection (FI) in HDFS
>>    you can find FI-framework documentation in the current HDFS trunk
>>    (it isn't on the web yet for version 0.21 hasn't been released yet).
>>    Because building documentation requires some extra effort and
>>    additional software to be installed, you can simply download and
>>    read the PDF from this FI-framework JIRA
>>
>>
>> https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>>
>>    Hope it helps,
>>      Cos
>>
>>
>>    On 10/8/09 8:10 AM, Thanh Do wrote:
>>
>>        Thank you so much, Jakob.
>>
>>        Could you please explain the fault injection running procedure
>>        in details?
>>
>>        My goal is running HDFS in a cluster (with a namenode and several
>>        datanode), and see how fault injection techniques affect HDFS
>>        behavior's. Also, I would like to define some new aspects/fault
>>        to test
>>        the system.
>>
>>        What I did was:
>>        1) I checked out the hadoop-common-trunk, but this package doesn't
>>        contain HDFS classes. I finally noticed that FI framework is
>>        currently
>>        integrated with HDFS only.
>>
>>        2) So, I checked out the hdfs-trunk. The build.xml contain
>>        injectfaults
>>        target and several other related things. I was able to build those
>>        targets (injectfaults, run-test-hdfs-fault-inject, etc). Up to this
>>        point, I stucked because I found no scripted that help me to
>>        start-dfs,
>>        stop-dfs...
>>        I copied the bin folder from common/core to HDFS project folder
>>        and ran
>>        the  script:
>>
>>        /bin/start-dfs.sh/ <http://start-dfs.sh/>
>>
>>
>>        but there is exception:
>>
>>        /Exception in thread
>>        main"Java.lang.NoClassDefFoundError
>>        : org/apache/commons/logging/LogFactory
>>        /
>>        I guess the reason is I ran HDFS without any common class. How I
>> get
>>        around this?
>>
>>        3) I also tried the third way, by download the hadoop release
>>        (contain
>>        everything: core, hdfs, mapred), and used Eclipse to create
>>        project from
>>        existing code. I was able to build this project. The bin scripts
>>        worked
>>        well but I found know FI related classes. What I did was apply
>>        the patch
>>        (HADOOP-6003.patch) using Eclipse patch command (Team | apply
>>        patch),
>>        but I failed the patching procedure.
>>
>>        In summary, I would like to run a real HDFS with fault
>>        injection. I am
>>        not very familiar with ant. Could you please show me some more
>>        details,
>>        so that I could get around this?
>>
>>        On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan
>>        <jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>>        <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>>
>> wrote:
>>
>>            Thanh-
>>            If you would like the run execute the tests that have been
>>            instrumented to use the fault injection framework the ant
>>        target is
>>            run-test-hdfs-fault-inject.  These were used extensively in the
>>            recent append work and there are quite a few append-related
>>        tests.
>>              Was there something more specific you were looking for?
>>
>>            Thanks,
>>            Jakob
>>            Hadoop at Yahoo!
>>
>>
>>            Thanh Do wrote:
>>
>>                Hi everyone,
>>
>>                Could any body so me how to run the fault injection
>>        framework
>>                mentioned in the following links?:
>>
>>        http://issues.apache.org/jira/browse/HDFS-435
>>
>>                and
>>
>>        https://issues.apache.org/jira/browse/HDFS-436
>>
>>                Thanks,
>>                Thanh
>>
>>
>>
>>
>>
>>
>>        --
>>        T
>>
>>
>>    --
>>    With best regards,
>>            Konstantin Boudnik (aka Cos)
>>
>>            Yahoo! Grid Computing
>>            +1 (408) 349-4049
>>
>>    2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
>>    Attention! Streams of consciousness are disallowed
>>
>>
>>
>>
>> --
>> thanh
>>
>


-- 
thanh

Re: How to run Fault injection in HDFS

Posted by Konstantin Boudnik <co...@yahoo-inc.com>.
Hi Thanh.

Hmm, it sounds like you have some issue with the compilation of your code.

addDeprecation() has been added to Configuration in 0.21, I believe. And it is
there no matter how you compile your code (with FI or without).
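
A quick way to tell which jar is the stale one is to ask javap whether the
Common jar your scripts put on the classpath actually has that method (the jar
name below is only an example - use whatever hadoop-*-core.jar sits in your
installation directory):

  $ javap -classpath hadoop-core-0.20.1.jar org.apache.hadoop.conf.Configuration \
      | grep addDeprecation

If grep prints nothing, that jar predates 0.21, and you'd need a Common jar
built from the same trunk as your instrumented HDFS jar (the usual 'ant jar' in
the Common checkout) sitting next to it.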

Cos

On 11/19/09 10:12 , Thanh Do wrote:
> Sorry to dig this thread again!
>
> I am expecting the release of 0.21 so that I don't have to manually play
> around with AspectJ FI any more.
>
> I still have problem with running HDFS with instrumented code (with aspect).
>
> Here is what I did:
>
> In the root directory of HDFS:
> /$ ant injectfaults
>
> $ ant jar-fault-inject
> /At this point, i have a jar file containing hdfs classed, namely,
> /hadoop-hdfs-0.22.0-dev-fi.jar/, located in /build-fi/ folder.
>
> Now I go to the HADOOP folder (which contains running script in bin
> directory), and do the following
> /$ ant compile-core-classes/
> ( now I need additional hdfs classes to be able to run /start-dfs.sh/,
> right)
> What I did is copying /$HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar /to
> /$HADOOP/hadoop-hdfs-fi-core.jar/ (I need to add suffix "core" since the
> script will include all hadoop-*-core.jar in classpath)
>
> /$ bin/start-dfs.sh/
> and got error message:
>
> 2009-11-19 11:52:57,479 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.lang.NoSuchMethodError:
> org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
>          at
> org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
>          at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
>          at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)
>
> 2009-11-19 11:52:57,480 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>
> Could any one tell me how to solve this problem?
>
> Thank you so much.
>
>
> On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik <cos@yahoo-inc.com
> <ma...@yahoo-inc.com>> wrote:
>
>     Thanks for looking into fault injection - it's very interesting and
>     useful technique based on AspectJ.
>
>     Currently, it is fully integrated into HDFS only. There's a JIRA
>     (HADOOP-6204) which tracks the same effort for Common and then all
>     Hadoop's components will have injection (as well as fault injection)
>     in place. This JIRA should be committed in the matter of a couple of
>     weeks.
>
>     For the immediate purpose you don't need to patch anything or do any
>     tweaking of the code: the fault injection framework is in already
>     and ready to work.
>
>     For your current needs: to be able to run HDFS with instrumented
>     code you need to run a special build. To do so:
>       - % ant injectfaults - similar to a 'normal' build, but does
>     instrument the code with aspects located under src/test/aop/**
>       - % ant jar-fault-inject - similar to a 'normal' jar creation but
>     instrumented
>       - % ant jar-test-fault-inject - similar to a 'normal' jar-test
>     creation but instrumented
>
>     Now, if you have the rest of sub-projects built you need to move the
>     instrumented jar files on top of the 'normal' files in your
>     installation directory. Please note that some renaming has to be
>     done: injected jar files have '-fi' suffix in their names and normal
>     jar files don't have such. Thus currently you'll have to rename
>     those injected jars to pretend like they are normal, used by
>     configured's classpath.
>
>     At this point you all set: you have a production quality Hadoop with
>     injected HDFS. As soon as the aforementioned JIRA is ready and
>     committed we'd be able to provide Hadoop-injected version by the
>     build's means rather than doing any renaming and manual intervention.
>
>     Also, if you need to read more about fault injection (FI) in HDFS
>     you can find FI-framework documentation in the current HDFS trunk
>     (it isn't on the web yet for version 0.21 hasn't been released yet).
>     Because building documentation requires some extra effort and
>     additional software to be installed, you can simply download and
>     read the PDF from this FI-framework JIRA
>
>     https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>
>     Hope it helps,
>       Cos
>
>
>     On 10/8/09 8:10 AM, Thanh Do wrote:
>
>         Thank you so much, Jakob.
>
>         Could you please explain the fault injection running procedure
>         in details?
>
>         My goal is running HDFS in a cluster (with a namenode and several
>         datanode), and see how fault injection techniques affect HDFS
>         behavior's. Also, I would like to define some new aspects/fault
>         to test
>         the system.
>
>         What I did was:
>         1) I checked out the hadoop-common-trunk, but this package doesn't
>         contain HDFS classes. I finally noticed that FI framework is
>         currently
>         integrated with HDFS only.
>
>         2) So, I checked out the hdfs-trunk. The build.xml contain
>         injectfaults
>         target and several other related things. I was able to build those
>         targets (injectfaults, run-test-hdfs-fault-inject, etc). Up to this
>         point, I stucked because I found no scripted that help me to
>         start-dfs,
>         stop-dfs...
>         I copied the bin folder from common/core to HDFS project folder
>         and ran
>         the  script:
>
>         /bin/start-dfs.sh/ <http://start-dfs.sh/>
>
>         but there is exception:
>
>         /Exception in thread
>         main"Java.lang.NoClassDefFoundError
>         : org/apache/commons/logging/LogFactory
>         /
>         I guess the reason is I ran HDFS without any common class. How I get
>         around this?
>
>         3) I also tried the third way, by download the hadoop release
>         (contain
>         everything: core, hdfs, mapred), and used Eclipse to create
>         project from
>         existing code. I was able to build this project. The bin scripts
>         worked
>         well but I found know FI related classes. What I did was apply
>         the patch
>         (HADOOP-6003.patch) using Eclipse patch command (Team | apply
>         patch),
>         but I failed the patching procedure.
>
>         In summary, I would like to run a real HDFS with fault
>         injection. I am
>         not very familiar with ant. Could you please show me some more
>         details,
>         so that I could get around this?
>
>         On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan
>         <jhoman@yahoo-inc.com <ma...@yahoo-inc.com>
>         <mailto:jhoman@yahoo-inc.com <ma...@yahoo-inc.com>>> wrote:
>
>             Thanh-
>             If you would like the run execute the tests that have been
>             instrumented to use the fault injection framework the ant
>         target is
>             run-test-hdfs-fault-inject.  These were used extensively in the
>             recent append work and there are quite a few append-related
>         tests.
>               Was there something more specific you were looking for?
>
>             Thanks,
>             Jakob
>             Hadoop at Yahoo!
>
>
>             Thanh Do wrote:
>
>                 Hi everyone,
>
>                 Could any body so me how to run the fault injection
>         framework
>                 mentioned in the following links?:
>
>         http://issues.apache.org/jira/browse/HDFS-435
>
>                 and
>
>         https://issues.apache.org/jira/browse/HDFS-436
>
>                 Thanks,
>                 Thanh
>
>
>
>
>
>
>         --
>         T
>
>
>     --
>     With best regards,
>             Konstantin Boudnik (aka Cos)
>
>             Yahoo! Grid Computing
>             +1 (408) 349-4049
>
>     2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
>     Attention! Streams of consciousness are disallowed
>
>
>
>
> --
> thanh

Re: How to run Fault injection in HDFS

Posted by Thanh Do <th...@cs.wisc.edu>.
Sorry to dig up this thread again!

I am expecting the release of 0.21 so that I don't have to manually play
around with the AspectJ FI framework any more.

I still have a problem running HDFS with instrumented code (with aspects).

Here is what I did:

In the root directory of HDFS:
*$ ant injectfaults

$ ant jar-fault-inject*
At this point, I have a jar file containing the HDFS classes, namely
*hadoop-hdfs-0.22.0-dev-fi.jar*, located in the *build-fi* folder.

Now I go to the HADOOP folder (which contains the run scripts in the bin
directory) and do the following:
*$ ant compile-core-classes*
(Now I need the additional HDFS classes to be able to run *start-dfs.sh*,
right?)
What I did is copy *$HDFS/build-fi/hadoop-hdfs-0.22.0-dev-fi.jar* to
*$HADOOP/hadoop-hdfs-fi-core.jar* (I need to add the suffix "core" since the
script will include all hadoop-*-core.jar files in the classpath).

*$ bin/start-dfs.sh*
and got this error message:

2009-11-19 11:52:57,479 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.NoSuchMethodError:
org.apache.hadoop.conf.Configuration.addDeprecation(Ljava/lang/String;[Ljava/lang/String;)V
        at
org.apache.hadoop.hdfs.HdfsConfiguration.deprecate(HdfsConfiguration.java:44)
        at
org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:48)
        at
org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:28)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1199)

2009-11-19 11:52:57,480 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Could anyone tell me how to solve this problem?

Thank you so much.


On Thu, Oct 8, 2009 at 10:41 AM, Konstantin Boudnik <co...@yahoo-inc.com>wrote:

> Thanks for looking into fault injection - it's very interesting and useful
> technique based on AspectJ.
>
> Currently, it is fully integrated into HDFS only. There's a JIRA
> (HADOOP-6204) which tracks the same effort for Common and then all Hadoop's
> components will have injection (as well as fault injection) in place. This
> JIRA should be committed in the matter of a couple of weeks.
>
> For the immediate purpose you don't need to patch anything or do any
> tweaking of the code: the fault injection framework is in already and ready
> to work.
>
> For your current needs: to be able to run HDFS with instrumented code you
> need to run a special build. To do so:
>  - % ant injectfaults - similar to a 'normal' build, but does instrument
> the code with aspects located under src/test/aop/**
>  - % ant jar-fault-inject - similar to a 'normal' jar creation but
> instrumented
>  - % ant jar-test-fault-inject - similar to a 'normal' jar-test creation
> but instrumented
>
> Now, if you have the rest of sub-projects built you need to move the
> instrumented jar files on top of the 'normal' files in your installation
> directory. Please note that some renaming has to be done: injected jar files
> have '-fi' suffix in their names and normal jar files don't have such. Thus
> currently you'll have to rename those injected jars to pretend like they are
> normal, used by configured's classpath.
>
> At this point you all set: you have a production quality Hadoop with
> injected HDFS. As soon as the aforementioned JIRA is ready and committed
> we'd be able to provide Hadoop-injected version by the build's means rather
> than doing any renaming and manual intervention.
>
> Also, if you need to read more about fault injection (FI) in HDFS you can
> find FI-framework documentation in the current HDFS trunk (it isn't on the
> web yet for version 0.21 hasn't been released yet). Because building
> documentation requires some extra effort and additional software to be
> installed, you can simply download and read the PDF from this FI-framework
> JIRA
>
>
> https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf
>
> Hope it helps,
>  Cos
>
>
> On 10/8/09 8:10 AM, Thanh Do wrote:
>
>> Thank you so much, Jakob.
>>
>> Could you please explain the fault injection running procedure in details?
>>
>> My goal is running HDFS in a cluster (with a namenode and several
>> datanode), and see how fault injection techniques affect HDFS
>> behavior's. Also, I would like to define some new aspects/fault to test
>> the system.
>>
>> What I did was:
>> 1) I checked out the hadoop-common-trunk, but this package doesn't
>> contain HDFS classes. I finally noticed that FI framework is currently
>> integrated with HDFS only.
>>
>> 2) So, I checked out the hdfs-trunk. The build.xml contain injectfaults
>> target and several other related things. I was able to build those
>> targets (injectfaults, run-test-hdfs-fault-inject, etc). Up to this
>> point, I stucked because I found no scripted that help me to start-dfs,
>> stop-dfs...
>> I copied the bin folder from common/core to HDFS project folder and ran
>> the  script:
>>
>> /bin/start-dfs.sh/
>>
>> but there is exception:
>>
>> /Exception in thread
>> main"Java.lang.NoClassDefFoundError
>> : org/apache/commons/logging/LogFactory
>> /
>> I guess the reason is I ran HDFS without any common class. How I get
>> around this?
>>
>> 3) I also tried the third way, by download the hadoop release (contain
>> everything: core, hdfs, mapred), and used Eclipse to create project from
>> existing code. I was able to build this project. The bin scripts worked
>> well but I found know FI related classes. What I did was apply the patch
>> (HADOOP-6003.patch) using Eclipse patch command (Team | apply patch),
>> but I failed the patching procedure.
>>
>> In summary, I would like to run a real HDFS with fault injection. I am
>> not very familiar with ant. Could you please show me some more details,
>> so that I could get around this?
>>
>> On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan <jhoman@yahoo-inc.com
>> <ma...@yahoo-inc.com>> wrote:
>>
>>    Thanh-
>>    If you would like the run execute the tests that have been
>>    instrumented to use the fault injection framework the ant target is
>>    run-test-hdfs-fault-inject.  These were used extensively in the
>>    recent append work and there are quite a few append-related tests.
>>      Was there something more specific you were looking for?
>>
>>    Thanks,
>>    Jakob
>>    Hadoop at Yahoo!
>>
>>
>>    Thanh Do wrote:
>>
>>        Hi everyone,
>>
>>        Could any body so me how to run the fault injection framework
>>        mentioned in the following links?:
>>
>>        http://issues.apache.org/jira/browse/HDFS-435
>>
>>        and
>>
>>        https://issues.apache.org/jira/browse/HDFS-436
>>
>>        Thanks,
>>        Thanh
>>
>>
>>
>>
>>
>>
>> --
>> T
>>
>
> --
> With best regards,
>        Konstantin Boudnik (aka Cos)
>
>        Yahoo! Grid Computing
>        +1 (408) 349-4049
>
> 2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
> Attention! Streams of consciousness are disallowed
>
>


-- 
thanh

Re: How to run Fault injection in HDFS

Posted by Konstantin Boudnik <co...@yahoo-inc.com>.
Thanks for looking into fault injection - it's a very interesting and useful
technique based on AspectJ.

Currently, it is fully integrated into HDFS only. There's a JIRA (HADOOP-6204)
which tracks the same effort for Common; after that, all of Hadoop's components
will have injection (as well as fault injection) in place. This JIRA should be
committed in a matter of a couple of weeks.

For the immediate purpose you don't need to patch anything or do any tweaking of
the code: the fault injection framework is already in place and ready to work.

For your current needs: to be able to run HDFS with instrumented code you need 
to run a special build. To do so:
  - % ant injectfaults - similar to a 'normal' build, but instruments the
code with aspects located under src/test/aop/**
  - % ant jar-fault-inject - similar to a 'normal' jar creation but instrumented
  - % ant jar-test-fault-inject - similar to a 'normal' jar-test creation but 
instrumented

Now, if you have the rest of the sub-projects built, you need to move the
instrumented jar files on top of the 'normal' files in your installation
directory. Please note that some renaming has to be done: injected jar files
have a '-fi' suffix in their names and normal jar files don't. Thus, currently,
you'll have to rename those injected jars so that they look like the normal
ones and are picked up by the configured classpath.
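
For illustration only, the whole sequence might look roughly like this, with
$HDFS_SRC and $HADOOP_HOME as placeholder paths and the jar names standing in
for whatever your build actually produces:

  $ cd $HDFS_SRC                      # HDFS source checkout
  $ ant injectfaults                  # instrument the code with the aspects under src/test/aop
  $ ant jar-fault-inject              # package the instrumented classes
  $ ls build-fi/                      # the *-fi.jar is produced here
  $ cp build-fi/hadoop-hdfs-*-fi.jar \
       $HADOOP_HOME/hadoop-hdfs-core.jar
  # renamed so the start scripts, which pick up hadoop-*-core.jar,
  # use it in place of the normal HDFS jar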

At this point you're all set: you have a production-quality Hadoop with injected
HDFS. As soon as the aforementioned JIRA is ready and committed, we'll be able to
provide an injected Hadoop version through the build itself rather than doing any
renaming and manual intervention.

Also, if you need to read more about fault injection (FI) in HDFS, you can find
the FI-framework documentation in the current HDFS trunk (it isn't on the web yet
because version 0.21 hasn't been released yet). Because building the documentation
requires some extra effort and additional software to be installed, you can
simply download and read the PDF from this FI-framework JIRA:

https://issues.apache.org/jira/secure/attachment/12414225/Fault+injection+development+guide+and+Framework+HowTo.pdf

Hope it helps,
   Cos

On 10/8/09 8:10 AM, Thanh Do wrote:
> Thank you so much, Jakob.
>
> Could you please explain the fault injection running procedure in details?
>
> My goal is running HDFS in a cluster (with a namenode and several
> datanode), and see how fault injection techniques affect HDFS
> behavior's. Also, I would like to define some new aspects/fault to test
> the system.
>
> What I did was:
> 1) I checked out the hadoop-common-trunk, but this package doesn't
> contain HDFS classes. I finally noticed that FI framework is currently
> integrated with HDFS only.
>
> 2) So, I checked out the hdfs-trunk. The build.xml contain injectfaults
> target and several other related things. I was able to build those
> targets (injectfaults, run-test-hdfs-fault-inject, etc). Up to this
> point, I stucked because I found no scripted that help me to start-dfs,
> stop-dfs...
> I copied the bin folder from common/core to HDFS project folder and ran
> the  script:
>
> /bin/start-dfs.sh/
>
> but there is exception:
>
> /Exception in thread
> main"Java.lang.NoClassDefFoundError
> : org/apache/commons/logging/LogFactory
> /
> I guess the reason is I ran HDFS without any common class. How I get
> around this?
>
> 3) I also tried the third way, by download the hadoop release (contain
> everything: core, hdfs, mapred), and used Eclipse to create project from
> existing code. I was able to build this project. The bin scripts worked
> well but I found know FI related classes. What I did was apply the patch
> (HADOOP-6003.patch) using Eclipse patch command (Team | apply patch),
> but I failed the patching procedure.
>
> In summary, I would like to run a real HDFS with fault injection. I am
> not very familiar with ant. Could you please show me some more details,
> so that I could get around this?
>
> On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan <jhoman@yahoo-inc.com
> <ma...@yahoo-inc.com>> wrote:
>
>     Thanh-
>     If you would like the run execute the tests that have been
>     instrumented to use the fault injection framework the ant target is
>     run-test-hdfs-fault-inject.  These were used extensively in the
>     recent append work and there are quite a few append-related tests.
>       Was there something more specific you were looking for?
>
>     Thanks,
>     Jakob
>     Hadoop at Yahoo!
>
>
>     Thanh Do wrote:
>
>         Hi everyone,
>
>         Could any body so me how to run the fault injection framework
>         mentioned in the following links?:
>
>         http://issues.apache.org/jira/browse/HDFS-435
>
>         and
>
>         https://issues.apache.org/jira/browse/HDFS-436
>
>         Thanks,
>         Thanh
>
>
>
>
>
>
> --
> T

-- 
With best regards,
	Konstantin Boudnik (aka Cos)

         Yahoo! Grid Computing
         +1 (408) 349-4049

2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
Attention! Streams of consciousness are disallowed


Re: How to run Fault injection in HDFS

Posted by Thanh Do <th...@cs.wisc.edu>.
Thank you so much, Jakob.

Could you please explain the fault injection running procedure in detail?

My goal is to run HDFS in a cluster (with a namenode and several datanodes)
and see how fault injection techniques affect HDFS's behavior. Also, I would
like to define some new aspects/faults to test the system.

What I did was:
1) I checked out the hadoop-common-trunk, but this package doesn't contain
the HDFS classes. I finally noticed that the FI framework is currently
integrated with HDFS only.

2) So, I checked out the hdfs-trunk. The build.xml contains the injectfaults
target and several other related things. I was able to build those targets
(injectfaults, run-test-hdfs-fault-inject, etc.). At this point, I got stuck
because I found no scripts that would help me start-dfs, stop-dfs...
I copied the bin folder from common/core to the HDFS project folder and ran the
script:

*bin/start-dfs.sh*

but there is an exception:

*Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/commons/logging/LogFactory*

I guess the reason is that I ran HDFS without any Common classes. How do I get
around this?

3) I also tried a third way: downloading the Hadoop release (which contains
everything: core, hdfs, mapred) and using Eclipse to create a project from the
existing code. I was able to build this project. The bin scripts worked well,
but I found no FI-related classes. What I did was apply the patch
(HADOOP-6003.patch) using the Eclipse patch command (Team | Apply Patch), but
the patching procedure failed.

In summary, I would like to run a real HDFS with fault injection. I am not
very familiar with Ant. Could you please show me some more details so that
I can get around this?

On Thu, Oct 8, 2009 at 12:19 AM, Jakob Homan <jh...@yahoo-inc.com> wrote:

> Thanh-
> If you would like the run execute the tests that have been instrumented to
> use the fault injection framework the ant target is
> run-test-hdfs-fault-inject.  These were used extensively in the recent
> append work and there are quite a few append-related tests.  Was there
> something more specific you were looking for?
>
> Thanks,
> Jakob
> Hadoop at Yahoo!
>
>
> Thanh Do wrote:
>
>> Hi everyone,
>>
>> Could any body so me how to run the fault injection framework mentioned in
>> the following links?:
>>
>> http://issues.apache.org/jira/browse/HDFS-435
>>
>> and
>>
>> https://issues.apache.org/jira/browse/HDFS-436
>>
>> Thanks,
>> Thanh
>>
>>
>>
>


-- 
T

Re: How to run Fault injection in HDFS

Posted by Jakob Homan <jh...@yahoo-inc.com>.
Thanh-
If you would like to run the tests that have been instrumented
to use the fault injection framework, the ant target is
run-test-hdfs-fault-inject.  These were used extensively in the recent
append work and there are quite a few append-related tests.  Was there
something more specific you were looking for?
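
For example, a run limited to a single instrumented test, with a non-default
fault probability, could look roughly like this from the HDFS source root (the
test name and the fi.* property are illustrative - the real names come from the
aspects themselves and are listed in the FI framework guide):

  $ ant run-test-hdfs-fault-inject -Dtestcase=TestFiDataTransferProtocol \
        -Dfi.hdfs.datanode.BlockReceiver=0.12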

Thanks,
Jakob
Hadoop at Yahoo!

Thanh Do wrote:
> Hi everyone,
> 
> Could any body so me how to run the fault injection framework mentioned 
> in the following links?:
> 
> http://issues.apache.org/jira/browse/HDFS-435
> 
> and
> 
> https://issues.apache.org/jira/browse/HDFS-436
> 
> Thanks,
> Thanh
> 
>