Posted to user@giraph.apache.org by nishant gandhi <ni...@gmail.com> on 2014/03/31 18:39:34 UTC

Why these messages?

Why does this kind of error come up? What could be wrong? Is it related to
the Hadoop configuration or the Giraph code?


14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge input format specified. Ensure your InputFormat does not require one.
14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge output format specified. Ensure your OutputFormat does not require one.
14/03/31 15:47:30 INFO job.GiraphJob: run: Since checkpointing is disabled (default), do not allow any task retries (setting mapred.map.max.attempts = 0, old value = 4)
14/03/31 15:47:31 INFO job.GiraphJob: run: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201403310811_0012
14/03/31 15:47:56 INFO job.HaltApplicationUtils$DefaultHaltInstructionsWriter: writeHaltInstructions: To halt after next superstep execute: 'bin/halt-application --zkServer localhost:22181 --zkNode /_hadoopBsp/job_201403310811_0012/_haltComputation'
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_21
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop/bin/../conf:/usr/lib/jvm/java-7-openjdk-amd64/lib/tools.jar:/usr/local/hadoop/bin/..:/usr/local/hadoop/bin/../hadoop-core-0.20.203.0.jar:/usr/local/hadoop/bin/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop/bin/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop/bin/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop/bin/../lib/commons-codec-1.4.jar:/usr/local/hadoop/bin/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop/bin/../lib/commons-configuration-1.6.jar:/usr/local/hadoop/bin/../lib/commons-daemon-1.0.1.jar:/usr/local/hadoop/bin/../lib/commons-digester-1.8.jar:/usr/local/hadoop/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop/bin/../lib/commons-lang-2.4.jar:/usr/local/hadoop/bin/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop/bin/../lib/commons-math-2.1.jar:/usr/local/hadoop/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop/bin/../lib/core-3.1.1.jar:/usr/local/hadoop/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop/bin/../lib/jackson-core-asl-1.0.1.jar:/usr/local/hadoop/bin/../lib/jackson-mapper-asl-1.0.1.jar:/usr/local/hadoop/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop/bin/../lib/jetty-6.1.26.jar:/usr/local/hadoop/bin/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop/bin/../lib/jsch-0.1.42.jar:/usr/local/hadoop/bin/../lib/junit-4.5.jar:/usr/local/hadoop/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop/bin/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop/bin/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop/bin/../lib/jsp-2.1/jsp-api-2.1.jar
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/bin/../lib/native/Linux-amd64-64
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:os.version=3.8.0-23-generic
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hduser
14/03/31 15:47:56 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:22181 sessionTimeout=60000 watcher=org.apache.giraph.job.JobProgressTracker@599a2875
14/03/31 15:47:56 INFO mapred.JobClient: Running job: job_201403310811_0012
14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:22181. Will not attempt to authenticate using SASL (unknown error)
14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:22181, initiating session
14/03/31 15:47:56 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:22181, sessionid = 0x14518d346810002, negotiated timeout = 600000
14/03/31 15:47:56 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
14/03/31 15:47:57 INFO mapred.JobClient:  map 50% reduce 0%
14/03/31 15:48:00 INFO mapred.JobClient:  map 100% reduce 0%
14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
14/03/31 15:48:16 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.47MB, average 70.47MB
14/03/31 15:48:21 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.47MB, average 70.47MB
14/03/31 15:48:26 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.47MB, average 70.47MB
14/03/31 15:48:31 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.47MB, average 70.47MB
14/03/31 15:48:36 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.29MB, average 70.29MB
14/03/31 15:48:41 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.29MB, average 70.29MB
14/03/31 15:48:46 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.29MB, average 70.29MB
14/03/31 15:48:51 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 70.29MB, average 70.29MB
14/03/31 15:48:56 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.44MB, average 69.44MB
14/03/31 15:49:01 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.44MB, average 69.44MB
14/03/31 15:49:06 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.44MB, average 69.44MB
14/03/31 15:49:11 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:16 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:21 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:26 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:31 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:36 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.22MB, average 69.22MB
14/03/31 15:49:41 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.21MB, average 69.21MB
14/03/31 15:49:46 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 69.21MB, average 69.21MB
14/03/31 15:49:51 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 68.86MB, average 68.86MB
14/03/31 15:49:56 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 68.86MB, average 68.86MB
14/03/31 15:50:01 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 68.86MB, average 68.86MB
14/03/31 15:50:06 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 68.86MB, average 68.86MB
^Z

Re: Why these messages?

Posted by nishant gandhi <ni...@gmail.com>.
Hi Liannet,
I checked my input file and it is fine; there is no extra space or newline at
the end of the file.
I have to kill the job with the "hadoop job -kill <jobID>" command.
I could not find anything helpful in the log files.



On Tue, Apr 1, 2014 at 9:57 PM, Liannet Reyes <li...@gmail.com> wrote:

> Hi Nishant,
>
> Have you looked at the jobtracker logs? (localhost:50030/jobtracker.jsp)
> It is likely you will find the cause of the failure in the job task logs.
>
> Be sure the tiny_graph file has no empty lines at the end by mistake;
> that may cause this error.
>
> Also, I once ran into the same "Loading data ... min free memory on
> worker" message when I tried to use more workers than
> mapred.tasktracker.map.tasks.maximum.
> I guess this is the normal behaviour, as it is the user's responsibility to
> guarantee that the number of workers is less than
> mapred.tasktracker.map.tasks.maximum - 1 (master). Am I right?
> However, this is not your case, as you are setting w=1.
>
> Regards,
> Liannet
>
>
>
> 2014-04-01 13:57 GMT+02:00 nishant gandhi <ni...@gmail.com>:
>
> My code:
>> import java.io.IOException;
>> import java.util.Iterator;
>>
>> import org.apache.hadoop.io.LongWritable;
>> import org.apache.hadoop.io.DoubleWritable;
>> import org.apache.hadoop.io.FloatWritable;
>> import org.apache.giraph.edge.Edge;
>> import org.apache.giraph.graph.Vertex;
>> import org.apache.giraph.graph.BasicComputation;
>>
>> public class InDegree extends
>> BasicComputation<LongWritable,DoubleWritable,FloatWritable,DoubleWritable> {
>>
>>     @Override
>>     public void compute(
>>             Vertex<LongWritable, DoubleWritable, FloatWritable> v,
>>             Iterable<DoubleWritable> msg) throws IOException {
>>         // TODO Auto-generated method stub
>>
>>
>>         if(getSuperstep()==0)
>>         {
>>             Iterable< Edge<LongWritable,FloatWritable> > edge =
>> v.getEdges();
>>
>>             for(Edge<LongWritable,FloatWritable> i: edge)
>>             {
>>                 sendMessage(i.getTargetVertexId(),new DoubleWritable(1));
>>             }
>>         }
>>         else
>>         {
>>             long sum=0;
>>             for (Iterator<DoubleWritable> iterator = msg.iterator();
>> iterator.hasNext();)
>>             {
>>                 sum++;
>>             }
>>             v.setValue(new DoubleWritable(sum));
>>             v.voteToHalt();
>>         }
>>
>>     }
>>
>> }
>>
>>
>> How I am running it:
>>
>> hadoop jar
>> /usr/local/giraph/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-1.2.1-jar-with-dependencies.jar
>> org.apache.giraph.GiraphRunner InDegree  -vif
>> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
>> -vip /input/tiny_graph.txt -vof
>> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexOutputFormat
>> -op /output/InDegree -w 1
>>
>> I am using the same classic example tiny_graph file.
>>
>>
>> On Tue, Apr 1, 2014 at 3:17 PM, ghufran malik <gh...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>
>>> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
>>>
>>> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
>>> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers - Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB, average 50.18MB
>>>
>>> I may be wrong, but I have received this output before, and it had
>>> something to do with the format of my text file. Is your InputFormat class
>>> splitting the line by the separator pattern [\t ]? If so, are you
>>> separating the values in your .txt file with a space or with a tab?
>>>
>>> Ghufran
>>>
>>>
>>>
>>> On Tue, Apr 1, 2014 at 6:02 AM, Agrta Rawat <ag...@gmail.com> wrote:
>>>
>>>> Perhaps you have not specified an EdgeInputFormat and EdgeOutputFormat in
>>>> your jar run command. It is just an informational message, not an
>>>> exception, as you can see that your task runs.
>>>>
>>>> Regards,
>>>> Agrta Rawat
>>>>
>>>>
>>>> On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi <
>>>> nishantgandhi99@gmail.com> wrote:
>>>>
>>>>> Why does this kind of error come up? What could be wrong? Is it related
>>>>> to the Hadoop configuration or the Giraph code?
>>>>>
>>>>>
>>>>
>>>
>>
>
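
A likely culprit in the compute() code quoted above (an observation from reading the code; it is not confirmed anywhere in this thread): the message-counting loop tests iterator.hasNext() but never calls iterator.next(), so once a vertex receives a message in superstep 1 the loop can never finish. That would match the job sitting at "Compute superstep 1: 0 out of 5 vertices computed" until it is killed. A minimal, self-contained sketch of a corrected counting loop, with plain java.util types standing in for Giraph's message Iterable:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MessageCount {
    // The version in the thread only calls hasNext(), so hasNext() stays
    // true forever once a message exists. Advancing the iterator in the
    // update clause lets the loop terminate while still counting messages.
    static long count(Iterable<Double> messages) {
        long sum = 0;
        for (Iterator<Double> it = messages.iterator(); it.hasNext(); it.next()) {
            sum++;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Double> messages = Arrays.asList(1.0, 1.0, 1.0);
        System.out.println(count(messages)); // prints 3
    }
}
```

More idiomatically, an enhanced for loop such as `for (DoubleWritable m : msg) { sum++; }` advances the iterator implicitly and avoids this class of bug entirely.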

Re: Why these messages?

Posted by Liannet Reyes <li...@gmail.com>.
Hi Nishant,

Have you looked at the jobtracker logs? (localhost:50030/jobtracker.jsp)
It is likely you will find the cause of the failure in the job task logs.

Be sure the tiny_graph file has no empty lines at the end by mistake;
that may cause this error.

Also, I once ran into the same "Loading data ... min free memory on
worker" message when I tried to use more workers than
mapred.tasktracker.map.tasks.maximum.
I guess this is the normal behaviour, as it is the user's responsibility to
guarantee that the number of workers is less than
mapred.tasktracker.map.tasks.maximum - 1 (master). Am I right?
However, this is not your case, as you are setting w=1.

Regards,
Liannet
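
Liannet's suggestion about stray empty lines can be checked mechanically. A small stand-alone sketch of such a check (the sample strings mimic the tiny_graph JSON line format, and the commented-out file read and its path are assumptions for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class TrailingBlankCheck {
    // A line-oriented vertex input format may fail to parse a final empty
    // line, so flag files whose last line is empty or whitespace-only.
    static boolean hasTrailingBlank(List<String> lines) {
        return !lines.isEmpty() && lines.get(lines.size() - 1).trim().isEmpty();
    }

    public static void main(String[] args) {
        // In practice the lines would come from the local copy of the input
        // file, e.g.:
        // List<String> lines = java.nio.file.Files.readAllLines(
        //         java.nio.file.Paths.get("tiny_graph.txt"));
        List<String> ok = Arrays.asList("[0,0,[[1,1],[3,3]]]", "[1,0,[[0,1],[2,2]]]");
        List<String> bad = Arrays.asList("[0,0,[[1,1],[3,3]]]", "");
        System.out.println(hasTrailingBlank(ok));   // false
        System.out.println(hasTrailingBlank(bad));  // true
    }
}
```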



2014-04-01 13:57 GMT+02:00 nishant gandhi <ni...@gmail.com>:

> My code:
> import java.io.IOException;
> import java.util.Iterator;
>
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.DoubleWritable;
> import org.apache.hadoop.io.FloatWritable;
> import org.apache.giraph.edge.Edge;
> import org.apache.giraph.graph.Vertex;
> import org.apache.giraph.graph.BasicComputation;
>
> public class InDegree extends
> BasicComputation<LongWritable,DoubleWritable,FloatWritable,DoubleWritable> {
>
>     @Override
>     public void compute(
>             Vertex<LongWritable, DoubleWritable, FloatWritable> v,
>             Iterable<DoubleWritable> msg) throws IOException {
>         // TODO Auto-generated method stub
>
>
>         if(getSuperstep()==0)
>         {
>             Iterable< Edge<LongWritable,FloatWritable> > edge =
> v.getEdges();
>
>             for(Edge<LongWritable,FloatWritable> i: edge)
>             {
>                 sendMessage(i.getTargetVertexId(),new DoubleWritable(1));
>             }
>         }
>         else
>         {
>             long sum=0;
>             for (Iterator<DoubleWritable> iterator = msg.iterator();
> iterator.hasNext();)
>             {
>                 sum++;
>             }
>             v.setValue(new DoubleWritable(sum));
>             v.voteToHalt();
>         }
>
>     }
>
> }
>
>
> How i am running it:
>
> hadoop jar
> /usr/local/giraph/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-1.2.1-jar-with-dependencies.jar
> org.apache.giraph.GiraphRunner InDegree  -vif
> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
> -vip /input/tiny_graph.txt -vof
> org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexOutputFormat
> -op /output/InDegree -w 1
>
> I am using that same classic example tiny_graph file
>
>
> On Tue, Apr 1, 2014 at 3:17 PM, ghufran malik <gh...@gmail.com>wrote:
>
>> Hi,
>>
>>
>> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>>
>> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>>
>> I may be wrong but, I have received this output before, and it had
>> something to do with the format of my text file. Is your Input Format class
>> splitting the line by the separator pattern [\t ] ? if so are you
>> separating the values in your .txt file by a space or by a tab space?
>>
>> Ghufran
>>
>>
>>
>> On Tue, Apr 1, 2014 at 6:02 AM, Agrta Rawat <ag...@gmail.com>wrote:
>>
>>> Perhaps you have not specified EdgeInputFormat and EdgeOutFormat in your
>>> jar run command. And it is just a message not exception as you can see that
>>> your task runs.
>>>
>>> Regards,
>>> Agrta Rawat
>>>
>>>
>>> On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi <
>>> nishantgandhi99@gmail.com> wrote:
>>>
>>>> Why does this kind of error come up? What could be wrong? Is it related
>>>> to the Hadoop configuration or the Giraph code?
>>>>
>>>>
>>>> [log output snipped; identical to the log in the original post]
>>>>
>>>>
>>>
>>
>

Re: why this messages?

Posted by nishant gandhi <ni...@gmail.com>.
My code:
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.giraph.edge.Edge;
import org.apache.giraph.graph.Vertex;
import org.apache.giraph.graph.BasicComputation;

public class InDegree extends
BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

    @Override
    public void compute(
            Vertex<LongWritable, DoubleWritable, FloatWritable> v,
            Iterable<DoubleWritable> msg) throws IOException {

        if (getSuperstep() == 0) {
            // Superstep 0: send one message along every outgoing edge.
            for (Edge<LongWritable, FloatWritable> edge : v.getEdges()) {
                sendMessage(edge.getTargetVertexId(), new DoubleWritable(1));
            }
        } else {
            // Superstep 1: the in-degree equals the number of messages
            // received. Advance the iterator on every pass; checking only
            // hasNext() without calling next() loops forever.
            long sum = 0;
            for (DoubleWritable ignored : msg) {
                sum++;
            }
            v.setValue(new DoubleWritable(sum));
            v.voteToHalt();
        }
    }
}
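Stripped of the Giraph machinery, the compute() method above amounts to counting one incoming message per edge at each target vertex. A minimal plain-Java sketch of that aggregation (the edge list here is a made-up example, not the tiny_graph data):

```java
import java.util.HashMap;
import java.util.Map;

public class InDegreeSketch {
    // Tally in-degrees from a (source, target) edge list: one "message"
    // is counted for the target vertex of every edge, mirroring the
    // superstep-0 sends plus the superstep-1 count above.
    static Map<Long, Long> inDegrees(long[][] edges) {
        Map<Long, Long> counts = new HashMap<>();
        for (long[] e : edges) {
            counts.merge(e[1], 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical edge list: vertex 2 receives three edges.
        long[][] edges = { {0, 1}, {0, 2}, {1, 2}, {3, 2} };
        System.out.println(inDegrees(edges));
    }
}
```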


How I am running it:

hadoop jar
/usr/local/giraph/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-1.2.1-jar-with-dependencies.jar
org.apache.giraph.GiraphRunner InDegree  -vif
org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat
-vip /input/tiny_graph.txt -vof
org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexOutputFormat
-op /output/InDegree -w 1

I am using the same classic example tiny_graph file.


On Tue, Apr 1, 2014 at 3:17 PM, ghufran malik <gh...@gmail.com> wrote:

> Hi,
>
>
> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
> average 50.18MB
>
> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
> average 50.18MB
> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
> Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
> loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
> average 50.18MB
>
> I may be wrong, but I have received this output before, and it had
> something to do with the format of my text file. Is your InputFormat class
> splitting the line by the separator pattern [\t ]? If so, are you
> separating the values in your .txt file with a space or with a tab?
>
> Ghufran
>
>
>
> On Tue, Apr 1, 2014 at 6:02 AM, Agrta Rawat <ag...@gmail.com> wrote:
>
>> Perhaps you have not specified an EdgeInputFormat and EdgeOutputFormat in
>> your jar run command. It is just an informational message, not an
>> exception; as you can see, your task runs.
>>
>> Regards,
>> Agrta Rawat
>>
>>
>> On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi <
>> nishantgandhi99@gmail.com> wrote:
>>
>>> Why does this kind of error come up? What could be wrong? Is it related
>>> to the Hadoop configuration or the Giraph code?
>>>
>>>
>>> [log output snipped; identical to the log in the original post]
>>>
>>>
>>
>

Re: why this messages?

Posted by ghufran malik <gh...@gmail.com>.
Hi,

14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
average 50.18MB
14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
average 50.18MB
14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
Loading data: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
loaded, 0 edge input splits loaded; min free memory on worker 1 - 50.18MB,
average 50.18MB

I may be wrong, but I have received this output before, and it had
something to do with the format of my text file. Is your InputFormat class
splitting the line by the separator pattern [\t ]? If so, are you
separating the values in your .txt file with a space or with a tab?

Ghufran
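As a side note on the separator question above: Java's String.split with the pattern [\t ] treats a single space and a single tab identically, but doubled separators yield empty tokens that can break numeric parsing later. A small sketch (the sample lines are hypothetical, not from tiny_graph):

```java
public class SeparatorCheck {
    // Split a line the way a [\t ] separator pattern would.
    static String[] tokens(String line) {
        return line.split("[\t ]");
    }

    public static void main(String[] args) {
        // Tab-separated and space-separated lines tokenize identically...
        System.out.println(tokens("1\t2\t3").length);  // 3
        System.out.println(tokens("1 2 3").length);    // 3
        // ...but a doubled space introduces an empty middle token.
        System.out.println(tokens("1  2").length);     // 3, tokens[1] is ""
    }
}
```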



On Tue, Apr 1, 2014 at 6:02 AM, Agrta Rawat <ag...@gmail.com> wrote:

> Perhaps you have not specified an EdgeInputFormat and EdgeOutputFormat in
> your jar run command. It is just an informational message, not an
> exception; as you can see, your task runs.
>
> Regards,
> Agrta Rawat
>
>
> On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi <
> nishantgandhi99@gmail.com> wrote:
>
>> Why does this kind of error come up? What could be wrong? Is it related
>> to the Hadoop configuration or the Giraph code?
>>
>>
>> [log output snipped; identical to the log in the original post]
>> 14/03/31 15:48:00 INFO mapred.JobClient:  map 100% reduce 0%
>> 14/03/31 15:48:01 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>> 14/03/31 15:48:06 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>> 14/03/31 15:48:11 INFO job.JobProgressTracker: Data from 1 workers -
>> Loading dat a: 0 vertices loaded, 0 vertex input splits loaded; 0 edges
>> loaded, 0 edge input  splits loaded; min free memory on worker 1 - 50.18MB,
>> average 50.18MB
>> 14/03/31 15:48:16 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.47MB, average 70.47MB
>> 14/03/31 15:48:21 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.47MB, average 70.47MB
>> 14/03/31 15:48:26 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.47MB, average 70.47MB
>> 14/03/31 15:48:31 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.47MB, average 70.47MB
>> 14/03/31 15:48:36 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.29MB, average 70.29MB
>> 14/03/31 15:48:41 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.29MB, average 70.29MB
>> 14/03/31 15:48:46 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.29MB, average 70.29MB
>> 14/03/31 15:48:51 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 70.29MB, average 70.29MB
>> 14/03/31 15:48:56 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.44MB, average 69.44MB
>> 14/03/31 15:49:01 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.44MB, average 69.44MB
>> 14/03/31 15:49:06 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.44MB, average 69.44MB
>> 14/03/31 15:49:11 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:16 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:21 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:26 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:31 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:36 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.22MB, average 69.22MB
>> 14/03/31 15:49:41 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.21MB, average 69.21MB
>> 14/03/31 15:49:46 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 69.21MB, average 69.21MB
>> 14/03/31 15:49:51 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 68.86MB, average 68.86MB
>> 14/03/31 15:49:56 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 68.86MB, average 68.86MB
>> 14/03/31 15:50:01 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 68.86MB, average 68.86MB
>> 14/03/31 15:50:06 INFO job.JobProgressTracker: Data from 1 workers -
>> Compute sup erstep 1: 0 out of 5 vertices computed; 0 out of 1 partitions
>> computed; min free  memory on worker 1 - 68.86MB, average 68.86MB
>> ^Z
>>
>>
>

Re: why this messages?

Posted by Agrta Rawat <ag...@gmail.com>.
Perhaps you have not specified an EdgeInputFormat and EdgeOutputFormat in your
jar run command. Note that these are just INFO messages, not exceptions; as you
can see, your job is running.

Regards,
Agrta Rawat


On Mon, Mar 31, 2014 at 10:09 PM, nishant gandhi
<ni...@gmail.com>wrote:

> Why this kind of error comes? What could be wrong? Is it related with
> hadoop configuration or giraph code?
>
>
> 14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge input format
> specified.  Ensure your InputFormat does not require one.
> 14/03/31 15:47:29 INFO utils.ConfigurationUtils: No edge output format
> specified. Ensure your OutputFormat does not require one.
> [remainder of log snipped]
>