Posted to commits@giraph.apache.org by di...@apache.org on 2018/05/02 17:56:18 UTC

git commit: updated refs/heads/trunk to 94d60e2

Repository: giraph
Updated Branches:
  refs/heads/trunk 60752aab7 -> 94d60e2f8


GIRAPH-1191

closes #31


Project: http://git-wip-us.apache.org/repos/asf/giraph/repo
Commit: http://git-wip-us.apache.org/repos/asf/giraph/commit/94d60e2f
Tree: http://git-wip-us.apache.org/repos/asf/giraph/tree/94d60e2f
Diff: http://git-wip-us.apache.org/repos/asf/giraph/diff/94d60e2f

Branch: refs/heads/trunk
Commit: 94d60e2f800a47f06ca1e6a6fcae8c455670f10e
Parents: 60752aa
Author: Gabor Szarnyas <sz...@mit.bme.hu>
Authored: Wed May 2 10:55:19 2018 -0700
Committer: Dionysios Logothetis <di...@fb.com>
Committed: Wed May 2 10:55:19 2018 -0700

----------------------------------------------------------------------
 src/site/xdoc/quick_start.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/giraph/blob/94d60e2f/src/site/xdoc/quick_start.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/quick_start.xml b/src/site/xdoc/quick_start.xml
index 93aace3..65b5f68 100644
--- a/src/site/xdoc/quick_start.xml
+++ b/src/site/xdoc/quick_start.xml
@@ -82,13 +82,13 @@ export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true</source>
      <p>The second line will force Hadoop to use IPv4 instead of IPv6, even if IPv6 is configured on the machine. As Hadoop stores temporary files during its computation, you need to create a base temporary directory for local FS and HDFS files as follows:</p>
       <source>
-su – hdadmin
+su - hdadmin
 sudo mkdir -p /app/hadoop/tmp
 sudo chown hduser:hadoop /app/hadoop/tmp
 sudo chmod 750 /app/hadoop/tmp</source>
       <p>Make sure the <tt>/etc/hosts</tt> file has the following lines (if not, add/update them):</p>
       <source>
-172.0.0.1       localhost
+127.0.0.1       localhost
 192.168.56.10   hdnode01</source>
      <p>Even though we can use <tt>localhost</tt> for all communication within this single-node cluster, using the hostname is generally a better practice (e.g., you might add a new node and convert your single-node, pseudo-distributed cluster to a multi-node, distributed cluster).</p>
       <p>Now, edit Hadoop configuration files <tt>core-site.xml</tt>, <tt>mapred-site.xml</tt>, and <tt>hdfs-site.xml</tt> under <tt>$HADOOP_HOME/conf</tt> to reflect the current setup. Add the new lines between <tt>&lt;configuration&gt;...&lt;/configuration&gt;</tt>, as specified below:</p>
@@ -129,7 +129,7 @@ sudo chmod 750 /app/hadoop/tmp</source>
       </ul>
       <p>Next, set up SSH for user account <tt>hduser</tt> so that you do not have to enter a passcode every time an SSH connection is started:</p>
       <source>
-su – hduser
+su - hduser
 ssh-keygen -t rsa -P ""
 cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys</source>
       <p>And then SSH to <tt>hdnode01</tt> under user account <tt>hduser</tt> (this must be to <tt>hdnode01</tt>, as we used the node's hostname in Hadoop configuration). You will be asked for a password if this is the first time you SSH to the node under this user account. When prompted, do store the public RSA key into <tt>$HOME/.ssh/known_hosts</tt>. Once you make sure you can SSH without a passcode/password, edit <tt>$HADOOP_HOME/conf/masters</tt> with this line:</p>
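
A note on the character-level fix in the <tt>su</tt> hunks above (an illustrative sketch, not part of the commit): the docs previously rendered an en dash (U+2013) where an ASCII hyphen-minus belongs. <tt>su</tt> treats a lone ASCII <tt>-</tt> as shorthand for a login shell, but an en dash is just an unrecognized argument, so the command fails when copy-pasted.

```python
# Illustrative only: compare the two visually similar characters that the
# patch swaps in "su - hduser". Names here are made up for the sketch.
broken = "su \u2013 hduser"   # what the docs previously rendered (en dash)
fixed = "su - hduser"         # what the patch changes it to (ASCII hyphen)

print(hex(ord(broken.split()[1])))  # en dash code point: 0x2013
print(hex(ord(fixed.split()[1])))   # ASCII hyphen-minus: 0x2d
print(broken == fixed)              # False: visually similar, not equal
```

The same class of copy-paste hazard motivates the other hunk, which corrects the loopback address from 172.0.0.1 to 127.0.0.1.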