Posted to dev@giraph.apache.org by Roman Shaposhnik <ro...@shaposhnik.org> on 2014/06/30 02:12:31 UTC
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <to...@toshiba.co.jp> wrote:
> Hi Roman.
>
> Thanks for the reply.
>
> OK, I'll try hadoop_1 and hadoop_2 with the latest
> release-1.1.0-RC0 and report the result.
That would be extremely helpful!
And speaking of which -- I'd like to remind folks
that taking RC0 for a spin would really help
at this point. If we ever want to have 1.1.0 out
we need the required PMC votes.
Thanks,
Roman.
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
Posted by Toshio ITO <to...@toshiba.co.jp>.
Hi Akila,
Thank you for the pointer.
I think this is it. I got the following error messages.
14/07/08 13:42:36 INFO http.HttpServer: Jetty bound to port 60030
14/07/08 13:42:36 INFO mortbay.log: jetty-6.1.26
14/07/08 13:42:36 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
14/07/08 13:42:36 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 60030
14/07/08 13:42:36 INFO regionserver.HRegionServer: STOPPED: Failed initialization
14/07/08 13:42:36 ERROR regionserver.HRegionServer: Failed init
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:562)
at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1292)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:890)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:187)
at org.apache.hadoop.hbase.regionserver.HRegionServer.tryReportForDuty(HRegionServer.java:1509)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:568)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:213)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:163)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1042)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
at org.apache.hadoop.hbase.security.User.call(User.java:457)
at org.apache.hadoop.hbase.security.User.access$500(User.java:49)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:346)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:195)
at java.lang.Thread.run(Thread.java:744)
(snip)
14/07/08 13:42:36 FATAL regionserver.HRegionServer: ABORTING region server serverName=localhost,34825,1404794553051, load=(requests=0, regions=0, usedHeap=110, maxHeap=21465): Unhandled exception: Address already in use
java.net.BindException: Address already in use
(snip)
14/07/08 13:42:39 WARN master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to serverName=localhost,34825,1404794553051, load=(requests=0, regions=0, usedHeap=0, maxHeap=0), trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to /127.0.0.1:34825 after attempts=1
The last message repeated 10 times.
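For anyone who wants to see the failure mode above outside the test suite: the root cause is just two listeners contending for one fixed port (60030 here, the region server's info port). A minimal JDK-only sketch of the same BindException follows; the class name BindDemo is my own, not anything from the Giraph or HBase tests:

```java
import java.net.BindException;
import java.net.ServerSocket;

public class BindDemo {
    // Returns true if a second bind to an already-listening port fails,
    // reproducing the "Address already in use" the HBase log shows when
    // a stale process still holds the fixed info port 60030.
    static boolean secondBindFails() throws Exception {
        try (ServerSocket first = new ServerSocket(0)) { // 0 = any free port
            int port = first.getLocalPort();
            try (ServerSocket second = new ServerSocket(port)) {
                return false; // unexpected: OS allowed a duplicate listener
            } catch (BindException expected) {
                return true;  // same failure Jetty reported in the test log
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("second bind failed: " + secondBindFails());
    }
}
```

In the mini-cluster case the practical fix is usually to kill whatever stale process still holds 60030 (or, where the test harness allows it, let the info server pick a free port instead of a fixed one).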
>
> Hi Toshio, Roman,
>
> The HBase I/O test failure (no. 3) may be due to this issue
>
> https://issues.apache.org/jira/browse/GIRAPH-926
>
> Toshio, can you check whether you get an error similar to this?
>
> 14/07/07 15:53:58 INFO hbase.metrics: new MBeanInfo
> 14/07/07 15:53:58 INFO metrics.RegionServerMetrics: Initialized
> 14/07/07 15:53:58 INFO http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 14/07/07 15:53:58 INFO http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening
> the listener on 60030
> 14/07/07 15:53:58 WARN regionserver.HRegionServer: Exception in region
> server :
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:444)
> at sun.nio.ch.Net.bind(Net.java:436)
> at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
> at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1760)
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1715)
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1108)
> at
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:122)
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:752)
> at
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:148)
> at
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:101)
> at
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:132)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1172)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:624)
> at org.apache.hadoop.hbase.security.User.access$600(User.java:52)
> at
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:464)
> at
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:130)
> at java.lang.Thread.run(Thread.java:724)
> 14/07/07 15:53:58 INFO regionserver.HRegionServer: STOPPED: Failed
> initialization
> 14/07/07 15:53:58 ERROR regionserver.HRegionServer: Failed init
> java.net.BindException: Address already in use
>
> Thanks,
>
> Akila
>
>
> On Wed, Jul 2, 2014 at 12:10 AM, Roman Shaposhnik <ro...@shaposhnik.org>
> wrote:
>
> > Yes, the failures around Accumulo in hadoop_2 profile are expected and
> > nothing
> > to worry about. I should've probably mentioned it in my RC announcement
> > email.
> > Sorry about that.
> >
> > Any failures in hadoop_1 profile would be a reason to reconsider RC0.
> >
> > Thanks,
> > Roman.
> >
> > P.S. This is one of the reasons we're still running with hadoop_1 as a
> > default
> > profile.
> >
> > On Mon, Jun 30, 2014 at 3:09 AM, Akila Wajirasena
> > <ak...@gmail.com> wrote:
> > > Hi Roman,
> > >
> > > I got the same error when running hadoop_2 profile.
> > > According to this [1], the Accumulo version we use in Giraph (1.4) is not
> > > compatible with Hadoop 2.
> > > I think this is the issue.
> > >
> > > [1]
> > >
> > http://apache-accumulo.1065345.n5.nabble.com/Accumulo-Hadoop-version-compatibility-matrix-tp3893p3894.html
> > >
> > > Thanks
> > >
> > > Akila
> > >
> > >
> > > On Mon, Jun 30, 2014 at 2:21 PM, Toshio ITO <to...@toshiba.co.jp>
> > > wrote:
> > >>
> > >> Hi Roman.
> > >>
> > >> I checked out release-1.1.0-RC0 and succeeded in building it.
> > >>
> > >> $ git checkout release-1.1.0-RC0
> > >> $ mvn clean
> > >> $ mvn package -Phadoop_2 -DskipTests
> > >> ## SUCCESS
> > >>
> > >> However, when I ran the tests with LocalJobRunner, it failed.
> > >>
> > >> $ mvn clean
> > >> $ mvn package -Phadoop_2
> > >>
> > >> It passed tests from "Core" and "Examples", but it failed at
> > >> "Accumulo I/O".
> > >>
> > >>
> > >>
> > testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
> > >>
> > >> The error log contained the following exception
> > >>
> > >> java.lang.IncompatibleClassChangeError: Found interface
> > >> org.apache.hadoop.mapreduce.JobContext, but class was expected
> > >>
> > >>
> > >> Next I wanted to run the tests with a running Hadoop2 instance, but
> > >> I'm having trouble setting it up (I'm quite new to Hadoop).
> > >>
> > >> Could you show me some example configuration (etc/hadoop/* files) of
> > >> Hadoop 2.2.0 single-node cluster? That would be very helpful.
> > >>
> > >>
> > >>
> > >>
> > >> >
> > >> > On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <
> > toshio9.ito@toshiba.co.jp>
> > >> > wrote:
> > >> > > Hi Roman.
> > >> > >
> > >> > > Thanks for the reply.
> > >> > >
> > >> > > OK, I'll try hadoop_1 and hadoop_2 with the latest
> > >> > > release-1.1.0-RC0 and report the result.
> > >> >
> > >> > That would be extremely helpful!
> > >> >
> > >> > And speaking of which -- I'd like to remind folks
> > >> > that taking RC0 for a spin would really help
> > >> > at this point. If we ever want to have 1.1.0 out
> > >> > we need the required PMC votes.
> > >> >
> > >> > Thanks,
> > >> > Roman.
> > >> ------------------------------------
> > >> Toshio Ito
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
>
>
>
> --
> Regards
> Akila Wajirasena
>
------------------------------------
Toshio Ito
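For context on the IncompatibleClassChangeError quoted above: org.apache.hadoop.mapreduce.JobContext was a concrete class in Hadoop 1.x and became an interface in Hadoop 2.x, so bytecode compiled against one major version fails exactly this way against the other. A small diagnostic sketch (my own, not part of the Giraph build) that reports which Hadoop line is on the classpath:

```java
public class JobContextCheck {
    // Reports whether the Hadoop on the classpath is the 1.x line
    // (JobContext is a class) or the 2.x line (JobContext became an
    // interface) -- the mismatch behind IncompatibleClassChangeError.
    static String hadoopLine() {
        try {
            Class<?> jc = Class.forName("org.apache.hadoop.mapreduce.JobContext");
            return jc.isInterface() ? "hadoop 2.x (interface)"
                                    : "hadoop 1.x (class)";
        } catch (ClassNotFoundException e) {
            return "hadoop not on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println(hadoopLine());
    }
}
```

Run with the same classpath the failing test uses; if it prints the 2.x line while a module was built against 1.x dependencies (as with Accumulo 1.4 here), the error is expected.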
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
Posted by Akila Wajirasena <ak...@gmail.com>.
Hi Toshio, Roman,
The HBase I/O test failure (no. 3) may be due to this issue
https://issues.apache.org/jira/browse/GIRAPH-926
Toshio, can you check whether you get an error similar to this?
14/07/07 15:53:58 INFO hbase.metrics: new MBeanInfo
14/07/07 15:53:58 INFO metrics.RegionServerMetrics: Initialized
14/07/07 15:53:58 INFO http.HttpServer: Added global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
14/07/07 15:53:58 INFO http.HttpServer: Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening
the listener on 60030
14/07/07 15:53:58 WARN regionserver.HRegionServer: Exception in region
server :
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1760)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1715)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1108)
at
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:122)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:752)
at
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:148)
at
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:101)
at
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:132)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
at org.apache.hadoop.hbase.security.User.call(User.java:624)
at org.apache.hadoop.hbase.security.User.access$600(User.java:52)
at
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:464)
at
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:130)
at java.lang.Thread.run(Thread.java:724)
14/07/07 15:53:58 INFO regionserver.HRegionServer: STOPPED: Failed
initialization
14/07/07 15:53:58 ERROR regionserver.HRegionServer: Failed init
java.net.BindException: Address already in use
Thanks,
Akila
On Wed, Jul 2, 2014 at 12:10 AM, Roman Shaposhnik <ro...@shaposhnik.org>
wrote:
> Yes, the failures around Accumulo in hadoop_2 profile are expected and
> nothing
> to worry about. I should've probably mentioned it in my RC announcement
> email.
> Sorry about that.
>
> Any failures in hadoop_1 profile would be a reason to reconsider RC0.
>
> Thanks,
> Roman.
>
> P.S. This is one of the reasons we're still running with hadoop_1 as a
> default
> profile.
>
> On Mon, Jun 30, 2014 at 3:09 AM, Akila Wajirasena
> <ak...@gmail.com> wrote:
> > Hi Roman,
> >
> > I got the same error when running hadoop_2 profile.
> > According to this [1], the Accumulo version we use in Giraph (1.4) is not
> > compatible with Hadoop 2.
> > I think this is the issue.
> >
> > [1]
> >
> http://apache-accumulo.1065345.n5.nabble.com/Accumulo-Hadoop-version-compatibility-matrix-tp3893p3894.html
> >
> > Thanks
> >
> > Akila
> >
> >
> > On Mon, Jun 30, 2014 at 2:21 PM, Toshio ITO <to...@toshiba.co.jp>
> > wrote:
> >>
> >> Hi Roman.
> >>
> >> I checked out release-1.1.0-RC0 and succeeded in building it.
> >>
> >> $ git checkout release-1.1.0-RC0
> >> $ mvn clean
> >> $ mvn package -Phadoop_2 -DskipTests
> >> ## SUCCESS
> >>
> >> However, when I ran the tests with LocalJobRunner, it failed.
> >>
> >> $ mvn clean
> >> $ mvn package -Phadoop_2
> >>
> >> It passed tests from "Core" and "Examples", but it failed at
> >> "Accumulo I/O".
> >>
> >>
> >>
> testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
> >>
> >> The error log contained the following exception
> >>
> >> java.lang.IncompatibleClassChangeError: Found interface
> >> org.apache.hadoop.mapreduce.JobContext, but class was expected
> >>
> >>
> >> Next I wanted to run the tests with a running Hadoop2 instance, but
> >> I'm having trouble setting it up (I'm quite new to Hadoop).
> >>
> >> Could you show me some example configuration (etc/hadoop/* files) of
> >> Hadoop 2.2.0 single-node cluster? That would be very helpful.
> >>
> >>
> >>
> >>
> >> >
> >> > On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <
> toshio9.ito@toshiba.co.jp>
> >> > wrote:
> >> > > Hi Roman.
> >> > >
> >> > > Thanks for the reply.
> >> > >
> >> > > OK, I'll try hadoop_1 and hadoop_2 with the latest
> >> > > release-1.1.0-RC0 and report the result.
> >> >
> >> > That would be extremely helpful!
> >> >
> >> > And speaking of which -- I'd like to remind folks
> >> > that taking RC0 for a spin would really help
> >> > at this point. If we ever want to have 1.1.0 out
> >> > we need the required PMC votes.
> >> >
> >> > Thanks,
> >> > Roman.
> >> ------------------------------------
> >> Toshio Ito
> >
> >
> >
> >
> >
> >
> >
>
--
Regards
Akila Wajirasena
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
Posted by Roman Shaposhnik <ro...@shaposhnik.org>.
Yes, the failures around Accumulo in hadoop_2 profile are expected and nothing
to worry about. I should've probably mentioned it in my RC announcement email.
Sorry about that.
Any failures in hadoop_1 profile would be a reason to reconsider RC0.
Thanks,
Roman.
P.S. This is one of the reasons we're still running with hadoop_1 as a default
profile.
On Mon, Jun 30, 2014 at 3:09 AM, Akila Wajirasena
<ak...@gmail.com> wrote:
> Hi Roman,
>
> I got the same error when running hadoop_2 profile.
> According to this [1] the Accumulo version we use in giraph (1.4) is not
> compatible with Hadoop 2.
> I think this is the issue.
>
> [1]
> http://apache-accumulo.1065345.n5.nabble.com/Accumulo-Hadoop-version-compatibility-matrix-tp3893p3894.html
>
> Thanks
>
> Akila
>
>
> On Mon, Jun 30, 2014 at 2:21 PM, Toshio ITO <to...@toshiba.co.jp>
> wrote:
>>
>> Hi Roman.
>>
>> I checked out release-1.1.0-RC0 and succeeded to build it.
>>
>> $ git checkout release-1.1.0-RC0
>> $ mvn clean
>> $ mvn package -Phadoop_2 -DskipTests
>> ## SUCCESS
>>
>> However, when I ran the tests with LocalJobRunner, it failed.
>>
>> $ mvn clean
>> $ mvn package -Phadoop_2
>>
>> It passed tests from "Core" and "Examples", but it failed at
>> "Accumulo I/O".
>>
>>
>> testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
>>
>> The error log contained the following exception
>>
>> java.lang.IncompatibleClassChangeError: Found interface
>> org.apache.hadoop.mapreduce.JobContext, but class was expected
>>
>>
>> Next I wanted to run the tests with a running Hadoop2 instance, but
>> I'm having trouble to set it up (I'm quite new to Hadoop).
>>
>> Could you show me some example configuration (etc/hadoop/* files) of
>> Hadoop 2.2.0 single-node cluster? That would be very helpful.
>>
>>
>>
>>
>> >
>> > On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <to...@toshiba.co.jp>
>> > wrote:
>> > > Hi Roman.
>> > >
>> > > Thanks for the reply.
>> > >
>> > > OK, I'll try hadoop_1 and hadoop_2 with the latest
>> > > release-1.1.0-RC0 and report the result.
>> >
>> > That would be extremely helpful!
>> >
>> > And speaking of which -- I'd like to remind folks
>> > that taking RC0 for a spin would really help
>> > at this point. If we ever want to have 1.1.0 out
>> > we need the required PMC votes.
>> >
>> > Thanks,
>> > Roman.
>> ------------------------------------
>> Toshio Ito
>
>
>
>
>
>
>
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
Posted by Akila Wajirasena <ak...@gmail.com>.
Hi Roman,
I got the same error when running hadoop_2 profile.
> According to this [1], the Accumulo version we use in Giraph (1.4) is not
> compatible with Hadoop 2.
I think this is the issue.
[1]
http://apache-accumulo.1065345.n5.nabble.com/Accumulo-Hadoop-version-compatibility-matrix-tp3893p3894.html
Thanks
Akila
On Mon, Jun 30, 2014 at 2:21 PM, Toshio ITO <to...@toshiba.co.jp>
wrote:
> Hi Roman.
>
> I checked out release-1.1.0-RC0 and succeeded to build it.
>
> $ git checkout release-1.1.0-RC0
> $ mvn clean
> $ mvn package -Phadoop_2 -DskipTests
> ## SUCCESS
>
> However, when I ran the tests with LocalJobRunner, it failed.
>
> $ mvn clean
> $ mvn package -Phadoop_2
>
> It passed tests from "Core" and "Examples", but it failed at
> "Accumulo I/O".
>
>
> testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
>
> The error log contained the following exception
>
> java.lang.IncompatibleClassChangeError: Found interface
> org.apache.hadoop.mapreduce.JobContext, but class was expected
>
>
> Next I wanted to run the tests with a running Hadoop2 instance, but
> I'm having trouble to set it up (I'm quite new to Hadoop).
>
> Could you show me some example configuration (etc/hadoop/* files) of
> Hadoop 2.2.0 single-node cluster? That would be very helpful.
>
>
>
>
> >
> > On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <to...@toshiba.co.jp>
> wrote:
> > > Hi Roman.
> > >
> > > Thanks for the reply.
> > >
> > > OK, I'll try hadoop_1 and hadoop_2 with the latest
> > > release-1.1.0-RC0 and report the result.
> >
> > That would be extremely helpful!
> >
> > And speaking of which -- I'd like to remind folks
> > that taking RC0 for a spin would really help
> > at this point. If we ever want to have 1.1.0 out
> > we need the required PMC votes.
> >
> > Thanks,
> > Roman.
> ------------------------------------
> Toshio Ito
>
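[Editorial note] The IncompatibleClassChangeError reported above is the classic symptom of this incompatibility: Hadoop 1 ships org.apache.hadoop.mapreduce.JobContext as a concrete class, while Hadoop 2 ships it as an interface, so bytecode compiled against one cannot link against the other. As a hedged sketch (the JDK types below are only stand-ins so the snippet runs without Hadoop jars; on a real cluster classpath you would probe JobContext itself), a reflection check can report which flavor is on the classpath:

```java
// Sketch: report whether a named type is loaded as a class or an interface.
// Hadoop 1's JobContext is a class; Hadoop 2's is an interface, which is
// exactly the mismatch IncompatibleClassChangeError complains about.
public class JobContextProbe {
    static String kindOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            return c.isInterface() ? "interface" : "class";
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // JDK stand-ins, since Hadoop jars may not be on this classpath:
        System.out.println(kindOf("java.lang.Runnable")); // interface
        System.out.println(kindOf("java.lang.Thread"));   // class
        // On a cluster classpath one would probe the real type:
        System.out.println(kindOf("org.apache.hadoop.mapreduce.JobContext"));
    }
}
```

Running this against the hadoop_1 and hadoop_2 classpaths in turn would print "class" and "interface" respectively, confirming which binary ended up on the test classpath.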
Re: Giraph (1.1.0-SNAPSHOT and 1.0.0-RC3) unit tests fail
Posted by Toshio ITO <to...@toshiba.co.jp>.
Hi Roman.
I checked out release-1.1.0-RC0 and succeeded in building it.
$ git checkout release-1.1.0-RC0
$ mvn clean
$ mvn package -Phadoop_2 -DskipTests
## SUCCESS
However, when I ran the tests with LocalJobRunner, it failed.
$ mvn clean
$ mvn package -Phadoop_2
It passed tests from "Core" and "Examples", but it failed at
"Accumulo I/O".
testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
The error log contained the following exception
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
Next I wanted to run the tests against a running Hadoop 2 instance, but
I'm having trouble setting it up (I'm quite new to Hadoop).
Could you show me some example configuration (etc/hadoop/* files) of
Hadoop 2.2.0 single-node cluster? That would be very helpful.
>
> On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <to...@toshiba.co.jp> wrote:
> > Hi Roman.
> >
> > Thanks for the reply.
> >
> > OK, I'll try hadoop_1 and hadoop_2 with the latest
> > release-1.1.0-RC0 and report the result.
>
> That would be extremely helpful!
>
> And speaking of which -- I'd like to remind folks
> that taking RC0 for a spin would really help
> at this point. If we ever want to have 1.1.0 out
> we need the required PMC votes.
>
> Thanks,
> Roman.
------------------------------------
Toshio Ito
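[Editorial note] A minimal pseudo-distributed (single-node) Hadoop 2.x setup usually needs only four small files under etc/hadoop/. The property names below are the standard Hadoop 2 ones; the localhost port and single-replica setting are assumptions for a one-machine sketch, not a vetted production configuration:

```xml
<!-- etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml : single node, so one replica -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- etc/hadoop/mapred-site.xml : run MapReduce on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- etc/hadoop/yarn-site.xml : enable the MapReduce shuffle service -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

With these in place, the usual sequence is to format the namenode (bin/hdfs namenode -format) and then start the daemons with sbin/start-dfs.sh and sbin/start-yarn.sh.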