Posted to commits@joshua.apache.org by mj...@apache.org on 2016/04/09 05:10:41 UTC

[32/44] incubator-joshua-site git commit: First attempt

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/jacana.html
----------------------------------------------------------------------
diff --git a/6.0/jacana.html b/6.0/jacana.html
new file mode 100644
index 0000000..b8f5a79
--- /dev/null
+++ b/6.0/jacana.html
@@ -0,0 +1,331 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta name="description" content="">
+    <meta name="author" content="">
+    <link rel="icon" href="../../favicon.ico">
+
+    <title>Joshua Documentation | Alignment with Jacana</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/dist/css/bootstrap.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link href="/joshua6.css" rel="stylesheet">
+  </head>
+
+  <body>
+
+    <div class="blog-masthead">
+      <div class="container">
+        <nav class="blog-nav">
+          <!-- <a class="blog-nav-item active" href="#">Joshua</a> -->
+          <a class="blog-nav-item" href="/">Joshua</a>
+          <!-- <a class="blog-nav-item" href="/6.0/whats-new.html">New features</a> -->
+          <a class="blog-nav-item" href="/language-packs/">Language packs</a>
+          <a class="blog-nav-item" href="/data/">Datasets</a>
+          <a class="blog-nav-item" href="/support/">Support</a>
+          <a class="blog-nav-item" href="/contributors.html">Contributors</a>
+        </nav>
+      </div>
+    </div>
+
+    <div class="container">
+
+      <div class="row">
+
+        <div class="col-sm-2">
+          <div class="sidebar-module">
+            <!-- <h4>About</h4> -->
+            <center>
+            <img src="/images/joshua-logo-small.png" />
+            <p>Joshua machine translation toolkit</p>
+            </center>
+          </div>
+          <hr>
+          <center>
+            <a href="/releases/current/" target="_blank"><button class="button">Download Joshua 6.0.5</button></a>
+            <br />
+            <a href="/releases/runtime/" target="_blank"><button class="button">Runtime only version</button></a>
+            <p>Released November 5, 2015</p>
+          </center>
+          <hr>
+          <!-- <div class="sidebar-module"> -->
+          <!--   <span id="download"> -->
+          <!--     <a href="http://joshua-decoder.org/downloads/joshua-6.0.tgz">Download</a> -->
+          <!--   </span> -->
+          <!-- </div> -->
+          <div class="sidebar-module">
+            <h4>Using Joshua</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/install.html">Installation</a></li>
+              <li><a href="/6.0/quick-start.html">Quick Start</a></li>
+            </ol>
+          </div>
+          <hr>
+          <div class="sidebar-module">
+            <h4>Building new models</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/pipeline.html">Pipeline</a></li>
+              <li><a href="/6.0/tutorial.html">Tutorial</a></li>
+              <li><a href="/6.0/faq.html">FAQ</a></li>
+            </ol>
+          </div>
+<!--
+          <div class="sidebar-module">
+            <h4>Phrase-based</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/phrase.html">Training</a></li>
+            </ol>
+          </div>
+-->
+          <hr>
+          <div class="sidebar-module">
+            <h4>Advanced</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/bundle.html">Building language packs</a></li>
+              <li><a href="/6.0/decoder.html">Decoder options</a></li>
+              <li><a href="/6.0/file-formats.html">File formats</a></li>
+              <li><a href="/6.0/packing.html">Packing TMs</a></li>
+              <li><a href="/6.0/large-lms.html">Building large LMs</a></li>
+            </ol>
+          </div>
+
+          <hr> 
+          <div class="sidebar-module">
+            <h4>Developer</h4>
+            <ol class="list-unstyled">              
+		<li><a href="https://github.com/joshua-decoder/joshua">Github</a></li>
+		<li><a href="http://cs.jhu.edu/~post/joshua-docs">Javadoc</a></li>
+		<li><a href="https://groups.google.com/forum/?fromgroups#!forum/joshua_developers">Mailing list</a></li>              
+            </ol>
+          </div>
+
+        </div><!-- /.blog-sidebar -->
+
+        
+        <div class="col-sm-8 blog-main">
+        
+
+          <div class="blog-title">
+            <h2>Alignment with Jacana</h2>
+          </div>
+          
+          <div class="blog-post">
+
+            <h2 id="introduction">Introduction</h2>
+
+<p>jacana-xy is a token-based word aligner for machine translation, adapted from the original
+English-English word aligner jacana-align described in the following paper:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>A Lightweight and High Performance Monolingual Word Aligner. Xuchen Yao, Benjamin Van Durme,
+Chris Callison-Burch and Peter Clark. Proceedings of ACL 2013, short papers.
+</code></pre>
+</div>
+
+<p>It currently supports only aligning from French to English with a very limited feature set, from the
+one week hack at the <a href="http://statmt.org/mtm13">Eighth MT Marathon 2013</a>. Please feel free to check
+out the code, read to the bottom of this page, and
+<a href="http://www.cs.jhu.edu/~xuchen/">send the author an email</a> if you want to add more language pairs to
+it.</p>
+
+<h2 id="build">Build</h2>
+
+<p>jacana-xy is written in a mixture of Java and Scala. If you build with ant, you must set the
+environment variables <code class="highlighter-rouge">JAVA_HOME</code> and <code class="highlighter-rouge">SCALA_HOME</code>. In my system, I have:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.26
+export SCALA_HOME=/home/xuchen/Downloads/scala-2.10.2
+</code></pre>
+</div>
+
+<p>Then type:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>ant
+</code></pre>
+</div>
+
+<p><code class="highlighter-rouge">build/lib/jacana-xy.jar</code> will be built for you.</p>
+
+<p>If you build from Eclipse, first install scala-ide, then import the whole jacana folder as a Scala project. Eclipse should find the .project file and set up the project automatically for you.</p>
+
+<h2 id="demo">Demo</h2>
+
+<p><code class="highlighter-rouge">scripts-align/runDemoServer.sh</code> starts the web demo. Direct your browser to http://localhost:8080/ and you should be able to align some sentences.</p>
+
+<p>Note: to tell jacana-xy where to look for resource files, pass the property <code class="highlighter-rouge">JACANA_HOME</code> to Java when you run it:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -DJACANA_HOME=/path/to/jacana -cp jacana-xy.jar ...
+</code></pre>
+</div>
+
+<h2 id="browser">Browser</h2>
+
+<p>You can also browse one or two alignment files (*.json) by opening src/web/AlignmentBrowser.html in Firefox:</p>
+
+<p>Note 1: due to strict security settings for accessing local files, Chrome/IE won’t work.</p>
+
+<p>Note 2: the input *.json files have to be in the same folder as AlignmentBrowser.html.</p>
+
+<h2 id="align">Align</h2>
+
+<p><code class="highlighter-rouge">scripts-align/alignFile.sh</code> aligns tab-separated sentence files and writes the output to a .json file that’s accepted by the browser:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -src fr -tgt en -m fr-en.model -a s.txt -o s.json
+</code></pre>
+</div>
+
+<p><code class="highlighter-rouge">scripts-align/alignFile.sh</code> takes GIZA++-style input files (one file containing the source sentences, and the other the target sentences) and outputs one .align file with dashed alignment indices (e.g. “1-2 0-4”):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -m fr-en.model -src fr -tgt en -a s1.txt -b s2.txt -o s.align
+</code></pre>
+</div>
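As an aside, the dashed index format is easy to consume downstream. The following is an illustrative sketch (not part of the jacana distribution) of parsing one output line into (source, target) index pairs:

```python
# Illustrative sketch (not jacana code): parse one line of the dashed
# alignment output format, e.g. "1-2 0-4", into (source, target) index pairs.
def parse_alignment(line):
    return {tuple(int(i) for i in link.split("-")) for link in line.split()}

print(sorted(parse_alignment("1-2 0-4")))  # → [(0, 4), (1, 2)]
```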
+
+<h2 id="training">Training</h2>
+
+<p>To train a model, run:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -r train.json -d dev.json -t test.json -m /tmp/align.model
+</code></pre>
+</div>
+
+<p>The aligner then trains on train.json and reports F1 values on dev.json every 10 iterations; when the stopping criterion has been reached, it tests on test.json.</p>
+
+<p>Every 10 iterations, a model file is saved to (in this example) /tmp/align.model.iter_XX.F1_XX.X. Normally I select the one with the best F1 on dev.json, then run a final test on test.json:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -t test.json -m /tmp/align.model.iter_XX.F1_XX.X
+</code></pre>
+</div>
+
+<p>In this case, since no training data is given, the aligner assumes it is a test job, reads the model file from the -m option, and tests on test.json.</p>
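For selecting the checkpoint with the best dev F1, a hypothetical helper (not part of jacana, and assuming the align.model.iter_XX.F1_XX.X naming pattern shown above) might look like:

```python
import re

# Hypothetical helper (not part of jacana): pick the saved model with the
# best dev F1, assuming the "align.model.iter_XX.F1_XX.X" naming shown above.
def best_model(paths):
    def dev_f1(path):
        m = re.search(r"F1_([0-9.]+)$", path)
        return float(m.group(1)) if m else float("-inf")
    return max(paths, key=dev_f1)

checkpoints = [
    "/tmp/align.model.iter_10.F1_71.2",
    "/tmp/align.model.iter_20.F1_74.5",
    "/tmp/align.model.iter_30.F1_73.9",
]
print(best_model(checkpoints))  # → /tmp/align.model.iter_20.F1_74.5
```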
+
+<p>All the json files are in a format like the following (also accepted by the browser for display):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>[
+    {
+        "id": "0008",
+        "name": "Hansards.french-english.0008",
+        "possibleAlign": "0-0 0-1 0-2",
+        "source": "bravo !",
+        "sureAlign": "1-3",
+        "target": "hear , hear !"
+    },
+    {
+        "id": "0009",
+        "name": "Hansards.french-english.0009",
+        "possibleAlign": "1-1 6-5 7-5 6-6 7-6 13-10 13-11",
+        "source": "monsieur le Orateur , ma question se adresse à le ministre chargé de les transports .",
+        "sureAlign": "0-0 2-1 3-2 4-3 5-4 8-7 9-8 10-9 12-10 14-11 15-12",
+        "target": "Mr. Speaker , my question is directed to the Minister of Transport ."
+    }
+]
+</code></pre>
+</div>
+
+<p>The possibleAlign field is not used.</p>
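To make the F1 reporting concrete, here is an illustrative sketch (not jacana code) of scoring predicted links against the gold sureAlign links in this format:

```python
# Illustrative sketch (not jacana code): F1 of predicted alignment links
# against the gold "sureAlign" links from the JSON format above.
def links(s):
    return {tuple(map(int, pair.split("-"))) for pair in s.split()}

def alignment_f1(predicted, sure):
    pred, gold = links(predicted), links(sure)
    correct = len(pred & gold)
    if correct == 0:
        return 0.0
    precision = correct / len(pred)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

print(alignment_f1("0-0 1-3 2-2", "1-3"))  # → 0.5
```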
+
+<p>The stopping criterion is to run up to 300 iterations or until the objective difference between two iterations is less than 0.001, whichever happens first. Currently these values are hard-coded. If you need more flexibility here, send me an email!</p>
+
+<h2 id="support-more-languages">Support More Languages</h2>
+
+<p>To add support for more languages, you need to:</p>
+
+<ol>
+  <li>obtain labelled word alignment data (in the download there’s already French-English under alignment-data/fr-en; I also have Chinese-English and Arabic-English; let me know if you have more). Usually 100 labelled sentence pairs are enough.</li>
+  <li>implement some feature functions for this language pair.</li>
+</ol>
+
+<p>To add more features, you need to implement the following interface:</p>
+
+<p><code class="highlighter-rouge">edu.jhu.jacana.align.feature.AlignFeature</code></p>
+
+<p>and override the following function:</p>
+
+<p><code class="highlighter-rouge">addPhraseBasedFeature</code></p>
+
+<p>For instance, for the French-English alignment task, a simple feature that checks whether the two words are translations of each other in Wiktionary implements this function as follows:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>def addPhraseBasedFeature(pair: AlignPair, ins:AlignFeatureVector, i:Int, srcSpan:Int, j:Int, tgtSpan:Int,
+      currState:Int, featureAlphabet: Alphabet){
+  if (j == -1) {
+  } else {
+    val srcTokens = pair.srcTokens.slice(i, i+srcSpan).mkString(" ")
+    val tgtTokens = pair.tgtTokens.slice(j, j+tgtSpan).mkString(" ")
+
+    if (WiktionaryMultilingual.exists(srcTokens, tgtTokens)) {
+      ins.addFeature("InWiktionary", NONE_STATE, currState, 1.0, srcSpan, featureAlphabet)
+    }
+  }
+}
+</code></pre>
+</div>
+
+<p>This is a more general function that also deals with phrase alignment. However, it is suggested to implement it just for token alignment, as the phrase alignment part is currently very slow to train (60x slower than token alignment).</p>
+
+<p>Some other language-independent and English-only features are implemented under the package <code class="highlighter-rouge">edu.jhu.jacana.align.feature</code>, for instance:</p>
+
+<ul>
+  <li><code class="highlighter-rouge">StringSimilarityAlignFeature</code>: various string similarity measures</li>
+  <li><code class="highlighter-rouge">PositionalAlignFeature</code>: features based on relative sentence positions</li>
+  <li><code class="highlighter-rouge">DistortionAlignFeature</code>: Markovian (state transition) features</li>
+</ul>
+
+<p>When you add features for more languages, just create a new package like the one for French-English:</p>
+
+<p><code class="highlighter-rouge">edu.jhu.jacana.align.feature.fr_en</code></p>
+
+<p>and start coding!</p>
+
+
+
+          <!--   <h4 class="blog-post-title">Welcome to Joshua!</h4> -->
+
+          <!--   <p>This blog post shows a few different types of content that's supported and styled with Bootstrap. Basic typography, images, and code are all supported.</p> -->
+          <!--   <hr> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis <a href="#">dis parturient montes</a>, nascetur ridiculus mus. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. Sed posuere consectetur est at lobortis. Cras mattis consectetur purus sit amet fermentum.</p> -->
+          <!--   <blockquote> -->
+          <!--     <p>Curabitur blandit tempus porttitor. <strong>Nullam quis risus eget urna mollis</strong> ornare vel eu leo. Nullam id dolor id nibh ultricies vehicula ut id elit.</p> -->
+          <!--   </blockquote> -->
+          <!--   <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet fermentum. Aenean lacinia bibendum nulla sed consectetur.</p> -->
+          <!--   <h2>Heading</h2> -->
+          <!--   <p>Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. Duis mollis, est non commodo luctus, nisi erat porttitor ligula, eget lacinia odio sem nec elit. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</p> -->
+          <!--   <pre><code>Example code block</code></pre> -->
+          <!--   <p>Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> -->
+          <!--   <ul> -->
+          <!--     <li>Praesent commodo cursus magna, vel scelerisque nisl consectetur et.</li> -->
+          <!--     <li>Donec id elit non mi porta gravida at eget metus.</li> -->
+          <!--     <li>Nulla vitae elit libero, a pharetra augue.</li> -->
+          <!--   </ul> -->
+          <!--   <p>Donec ullamcorper nulla non metus auctor fringilla. Nulla vitae elit libero, a pharetra augue.</p> -->
+          <!--   <ol> -->
+          <!--     <li>Vestibulum id ligula porta felis euismod semper.</li> -->
+          <!--     <li>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</li> -->
+          <!--     <li>Maecenas sed diam eget risus varius blandit sit amet non magna.</li> -->
+          <!--   </ol> -->
+          <!--   <p>Cras mattis consectetur purus sit amet fermentum. Sed posuere consectetur est at lobortis.</p> -->
+          <!-- </div><\!-- /.blog-post -\-> -->
+
+        </div>
+
+      </div><!-- /.row -->
+
+      
+        
+    </div><!-- /.container -->
+
+    <!-- Bootstrap core JavaScript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
+    <script src="../../dist/js/bootstrap.min.js"></script>
+    <!-- <script src="../../assets/js/docs.min.js"></script> -->
+    <!-- IE10 viewport hack for Surface/desktop Windows 8 bug -->
+    <!-- <script src="../../assets/js/ie10-viewport-bug-workaround.js"></script>
+    -->
+
+    <!-- Start of StatCounter Code for Default Guide -->
+    <script type="text/javascript">
+      var sc_project=8264132; 
+      var sc_invisible=1; 
+      var sc_security="4b97fe2d"; 
+    </script>
+    <script type="text/javascript" src="http://www.statcounter.com/counter/counter.js"></script>
+    <noscript>
+      <div class="statcounter">
+        <a title="hit counter joomla" 
+           href="http://statcounter.com/joomla/"
+           target="_blank">
+          <img class="statcounter"
+               src="http://c.statcounter.com/8264132/0/4b97fe2d/1/"
+               alt="hit counter joomla" />
+        </a>
+      </div>
+    </noscript>
+    <!-- End of StatCounter Code for Default Guide -->
+  </body>
+</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/jacana.md
----------------------------------------------------------------------
diff --git a/6.0/jacana.md b/6.0/jacana.md
deleted file mode 100644
index 71c1753..0000000
--- a/6.0/jacana.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-layout: default6
-title: Alignment with Jacana
----
-
-## Introduction
-
-jacana-xy is a token-based word aligner for machine translation, adapted from the original
-English-English word aligner jacana-align described in the following paper:
-
-    A Lightweight and High Performance Monolingual Word Aligner. Xuchen Yao, Benjamin Van Durme,
-    Chris Callison-Burch and Peter Clark. Proceedings of ACL 2013, short papers.
-
-It currently supports only aligning from French to English with a very limited feature set, from the
-one week hack at the [Eighth MT Marathon 2013](http://statmt.org/mtm13). Please feel free to check
-out the code, read to the bottom of this page, and
-[send the author an email](http://www.cs.jhu.edu/~xuchen/) if you want to add more language pairs to
-it.
-
-## Build
-
-jacana-xy is written in a mixture of Java and Scala. If you build from ant, you have to set up the
-environmental variables `JAVA_HOME` and `SCALA_HOME`. In my system, I have:
-
-    export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.26
-    export SCALA_HOME=/home/xuchen/Downloads/scala-2.10.2
-
-Then type:
-
-    ant
-
-build/lib/jacana-xy.jar will be built for you.
-
-If you build from Eclipse, first install scala-ide, then import the whole jacana folder as a Scala project. Eclipse should find the .project file and set up the project automatically for you.
-
-Demo
-scripts-align/runDemoServer.sh shows up the web demo. Direct your browser to http://localhost:8080/ and you should be able to align some sentences.
-
-Note: To make jacana-xy know where to look for resource files, pass the property JACANA_HOME with Java when you run it:
-
-java -DJACANA_HOME=/path/to/jacana -cp jacana-xy.jar ......
-
-Browser
-You can also browse one or two alignment files (*.json) with firefox opening src/web/AlignmentBrowser.html:
-
-
-
-Note 1: due to strict security setting for accessing local files, Chrome/IE won't work.
-
-Note 2: the input *.json files have to be in the same folder with AlignmentBrowser.html.
-
-Align
-scripts-align/alignFile.sh aligns tab-separated sentence files and outputs the output to a .json file that's accepted by the browser:
-
-java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -src fr -tgt en -m fr-en.model -a s.txt -o s.json
-
-scripts-align/alignFile.sh takes GIZA++-style input files (one file containing the source sentences, and the other file the target sentences) and outputs to one .align file with dashed alignment indices (e.g. "1-2 0-4"):
-
-java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -m fr-en.model -src fr -tgt en -a s1.txt -b s2.txt -o s.align
-
-Training
-java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -r train.json -d dev.json -t test.json -m /tmp/align.model
-
-The aligner then would train on train.json, and report F1 values on dev.json for every 10 iterations, when the stopping criterion has reached, it will test on test.json.
-
-For every 10 iterations, a model file is saved to (in this example) /tmp/align.model.iter_XX.F1_XX.X. Normally what I do is to select the one with the best F1 on dev.json, then run a final test on test.json:
-
-java -DJACANA_HOME=../ -jar ../build/lib/jacana-xy.jar -t test.json -m /tmp/align.model.iter_XX.F1_XX.X
-
-In this case since the training data is missing, the aligner assumes it's a test job, then reads model file still from the -m option, and test on test.json.
-
-All the json files are in a format like the following (also accepted by the browser for display):
-
-[
-    {
-        "id": "0008",
-        "name": "Hansards.french-english.0008",
-        "possibleAlign": "0-0 0-1 0-2",
-        "source": "bravo !",
-        "sureAlign": "1-3",
-        "target": "hear , hear !"
-    },
-    {
-        "id": "0009",
-        "name": "Hansards.french-english.0009",
-        "possibleAlign": "1-1 6-5 7-5 6-6 7-6 13-10 13-11",
-        "source": "monsieur le Orateur , ma question se adresse à le ministre chargé de les transports .",
-        "sureAlign": "0-0 2-1 3-2 4-3 5-4 8-7 9-8 10-9 12-10 14-11 15-12",
-        "target": "Mr. Speaker , my question is directed to the Minister of Transport ."
-    }
-]
-Where possibleAlign is not used.
-
-The stopping criterion is to run up to 300 iterations or when the objective difference between two iterations is less than 0.001, whichever happens first. Currently they are hard-coded. If you need to be flexible on this, send me an email!
-
-Support More Languages
-To add support to more languages, you need:
-
-labelled word alignment (in the download there's already French-English under alignment-data/fr-en; I also have Chinese-English and Arabic-English; let me know if you have more). Usually 100 labelled sentence pairs would be enough
-implement some feature functions for this language pair
-To add more features, you need to implement the following interface:
-
-edu.jhu.jacana.align.feature.AlignFeature
-
-and override the following function:
-
-addPhraseBasedFeature
-
-For instance, a simple feature that checks whether the two words are translations in wiktionary for the French-English alignment task has the function implemented as:
-
-def addPhraseBasedFeature(pair: AlignPair, ins:AlignFeatureVector, i:Int, srcSpan:Int, j:Int, tgtSpan:Int,
-      currState:Int, featureAlphabet: Alphabet){
-  if (j == -1) {
-  } else {
-    val srcTokens = pair.srcTokens.slice(i, i+srcSpan).mkString(" ")
-    val tgtTokens = pair.tgtTokens.slice(j, j+tgtSpan).mkString(" ")
-                
-    if (WiktionaryMultilingual.exists(srcTokens, tgtTokens)) {
-      ins.addFeature("InWiktionary", NONE_STATE, currState, 1.0, srcSpan, featureAlphabet) 
-    }
-        
-  }       
-}
-This is a more general function that also deals with phrase alignment. But it is suggested to implement it just for token alignment as currently the phrase alignment part is very slow to train (60x slower than token alignment).
-
-Some other language-independent and English-only features are implemented under the package edu.jhu.jacana.align.feature, for instance:
-
-StringSimilarityAlignFeature: various string similarity measures
-
-PositionalAlignFeature: features based on relative sentence positions
-
-DistortionAlignFeature: Markovian (state transition) features
-
-When you add features for more languages, just create a new package like the one for French-English:
-
-edu.jhu.jacana.align.feature.fr_en
-
-and start coding!
-

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/large-lms.html
----------------------------------------------------------------------
diff --git a/6.0/large-lms.html b/6.0/large-lms.html
new file mode 100644
index 0000000..edf4878
--- /dev/null
+++ b/6.0/large-lms.html
@@ -0,0 +1,390 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta name="description" content="">
+    <meta name="author" content="">
+    <link rel="icon" href="../../favicon.ico">
+
+    <title>Joshua Documentation | Building large LMs with SRILM</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/dist/css/bootstrap.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link href="/joshua6.css" rel="stylesheet">
+  </head>
+
+  <body>
+
+    <div class="blog-masthead">
+      <div class="container">
+        <nav class="blog-nav">
+          <!-- <a class="blog-nav-item active" href="#">Joshua</a> -->
+          <a class="blog-nav-item" href="/">Joshua</a>
+          <!-- <a class="blog-nav-item" href="/6.0/whats-new.html">New features</a> -->
+          <a class="blog-nav-item" href="/language-packs/">Language packs</a>
+          <a class="blog-nav-item" href="/data/">Datasets</a>
+          <a class="blog-nav-item" href="/support/">Support</a>
+          <a class="blog-nav-item" href="/contributors.html">Contributors</a>
+        </nav>
+      </div>
+    </div>
+
+    <div class="container">
+
+      <div class="row">
+
+        <div class="col-sm-2">
+          <div class="sidebar-module">
+            <!-- <h4>About</h4> -->
+            <center>
+            <img src="/images/joshua-logo-small.png" />
+            <p>Joshua machine translation toolkit</p>
+            </center>
+          </div>
+          <hr>
+          <center>
+            <a href="/releases/current/" target="_blank"><button class="button">Download Joshua 6.0.5</button></a>
+            <br />
+            <a href="/releases/runtime/" target="_blank"><button class="button">Runtime only version</button></a>
+            <p>Released November 5, 2015</p>
+          </center>
+          <hr>
+          <!-- <div class="sidebar-module"> -->
+          <!--   <span id="download"> -->
+          <!--     <a href="http://joshua-decoder.org/downloads/joshua-6.0.tgz">Download</a> -->
+          <!--   </span> -->
+          <!-- </div> -->
+          <div class="sidebar-module">
+            <h4>Using Joshua</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/install.html">Installation</a></li>
+              <li><a href="/6.0/quick-start.html">Quick Start</a></li>
+            </ol>
+          </div>
+          <hr>
+          <div class="sidebar-module">
+            <h4>Building new models</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/pipeline.html">Pipeline</a></li>
+              <li><a href="/6.0/tutorial.html">Tutorial</a></li>
+              <li><a href="/6.0/faq.html">FAQ</a></li>
+            </ol>
+          </div>
+<!--
+          <div class="sidebar-module">
+            <h4>Phrase-based</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/phrase.html">Training</a></li>
+            </ol>
+          </div>
+-->
+          <hr>
+          <div class="sidebar-module">
+            <h4>Advanced</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/bundle.html">Building language packs</a></li>
+              <li><a href="/6.0/decoder.html">Decoder options</a></li>
+              <li><a href="/6.0/file-formats.html">File formats</a></li>
+              <li><a href="/6.0/packing.html">Packing TMs</a></li>
+              <li><a href="/6.0/large-lms.html">Building large LMs</a></li>
+            </ol>
+          </div>
+
+          <hr> 
+          <div class="sidebar-module">
+            <h4>Developer</h4>
+            <ol class="list-unstyled">              
+		<li><a href="https://github.com/joshua-decoder/joshua">Github</a></li>
+		<li><a href="http://cs.jhu.edu/~post/joshua-docs">Javadoc</a></li>
+		<li><a href="https://groups.google.com/forum/?fromgroups#!forum/joshua_developers">Mailing list</a></li>              
+            </ol>
+          </div>
+
+        </div><!-- /.blog-sidebar -->
+
+        
+        <div class="col-sm-8 blog-main">
+        
+
+          <div class="blog-title">
+            <h2>Building large LMs with SRILM</h2>
+          </div>
+          
+          <div class="blog-post">
+
+            <p>The following is a tutorial for building a large language model from the
+English Gigaword Fifth Edition corpus
+<a href="http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2011T07">LDC2011T07</a>
+using SRILM. English text is provided from seven different sources.</p>
+
+<h3 id="step-0-clean-up-the-corpus">Step 0: Clean up the corpus</h3>
+
+<p>The Gigaword corpus has to be stripped of all SGML tags and tokenized.
+Instructions for performing those steps are not included in this
+documentation. A description of this process can be found in a paper
+called <a href="https://akbcwekex2012.files.wordpress.com/2012/05/28_paper.pdf">“Annotated
+Gigaword”</a>.</p>
+
+<p>The Joshua package ships with a script that converts all alphabetical
+characters to their lowercase equivalent. The script is located at
+<code class="highlighter-rouge">$JOSHUA/scripts/lowercase.perl</code>.</p>
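As a rough sketch of what that step does to the gzipped corpus files (the actual tool is the shipped lowercase.perl; this Python equivalent is illustrative only):

```python
import gzip

# Illustrative Python equivalent of the lowercasing step; the Joshua
# release actually ships $JOSHUA/scripts/lowercase.perl for this.
def lowercase_gz(src_path, dst_path):
    with gzip.open(src_path, "rt", encoding="utf-8") as fin, \
         gzip.open(dst_path, "wt", encoding="utf-8") as fout:
        for line in fin:
            fout.write(line.lower())
```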
+
+<p>Make a directory structure as follows:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>gigaword/
+├── corpus/
+│   ├── afp_eng/
+│   │   ├── afp_eng_199405.lc.gz
+│   │   ├── afp_eng_199406.lc.gz
+│   │   ├── ...
+│   │   └── counts/
+│   ├── apw_eng/
+│   │   ├── apw_eng_199411.lc.gz
+│   │   ├── apw_eng_199412.lc.gz
+│   │   ├── ...
+│   │   └── counts/
+│   ├── cna_eng/
+│   │   ├── ...
+│   │   └── counts/
+│   ├── ltw_eng/
+│   │   ├── ...
+│   │   └── counts/
+│   ├── nyt_eng/
+│   │   ├── ...
+│   │   └── counts/
+│   ├── wpb_eng/
+│   │   ├── ...
+│   │   └── counts/
+│   └── xin_eng/
+│       ├── ...
+│       └── counts/
+└── lm/
+    ├── afp_eng/
+    ├── apw_eng/
+    ├── cna_eng/
+    ├── ltw_eng/
+    ├── nyt_eng/
+    ├── wpb_eng/
+    └── xin_eng/
+</code></pre>
+</div>
+
+<p>The next step will be to build smaller LMs and then interpolate them into one
+file.</p>
+
+<h3 id="step-1-count-ngrams">Step 1: Count ngrams</h3>
+
+<p>Run the following script once from each source directory under the <code class="highlighter-rouge">corpus/</code>
+directory (edit it to specify the path to the <code class="highlighter-rouge">ngram-count</code> binary as well as
+the number of processors):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
+
+<span class="nv">NGRAM_COUNT</span><span class="o">=</span><span class="nv">$SRILM_SRC</span>/bin/i686-m64/ngram-count
+<span class="nv">args</span><span class="o">=</span><span class="s2">""</span>
+
+<span class="k">for </span><span class="nb">source </span><span class="k">in</span> <span class="k">*</span>.gz; <span class="k">do
+   </span><span class="nv">args</span><span class="o">=</span><span class="nv">$args</span><span class="s2">"-sort -order 5 -text </span><span class="nv">$source</span><span class="s2"> -write counts/</span><span class="nv">$source</span><span class="s2">-counts.gz "</span>
+<span class="k">done
+
+</span><span class="nb">echo</span> <span class="nv">$args</span> | xargs --max-procs<span class="o">=</span>4 -n 7 <span class="nv">$NGRAM_COUNT</span>
+</code></pre>
+</div>
+
+<p>Then move each <code class="highlighter-rouge">counts/</code> directory to the corresponding directory under
+<code class="highlighter-rouge">lm/</code>. Now that each ngram has been counted, we can make a language
+model for each of the seven sources.</p>
+
+<h3 id="step-2-make-individual-language-models">Step 2: Make individual language models</h3>
+
+<p>SRILM includes a script, called <code class="highlighter-rouge">make-big-lm</code>, for building large language
+models under resource-limited environments. The manual for this script can be
+read online
+<a href="http://www-speech.sri.com/projects/srilm/manpages/training-scripts.1.html">here</a>.
+Since the Gigaword corpus is so large, it is convenient to use <code class="highlighter-rouge">make-big-lm</code>
+even in environments with many parallel processors and a lot of memory.</p>
+
+<p>Initiate the following script from each of the source directories under the
+<code class="highlighter-rouge">lm/</code> directory (edit it to specify the path to the <code class="highlighter-rouge">make-big-lm</code> script as
+well as the pruning threshold):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
+<span class="nb">set</span> -x
+
+<span class="nv">CMD</span><span class="o">=</span><span class="nv">$SRILM_SRC</span>/bin/make-big-lm
+<span class="nv">PRUNE_THRESHOLD</span><span class="o">=</span>1e-8
+
+<span class="nv">$CMD</span> <span class="se">\</span>
+  -name gigalm <span class="sb">`</span><span class="k">for </span>k <span class="k">in </span>counts/<span class="k">*</span>.gz; <span class="k">do </span><span class="nb">echo</span> <span class="s2">" </span><span class="se">\</span><span class="s2">
+  -read </span><span class="nv">$k</span><span class="s2"> "</span>; <span class="k">done</span><span class="sb">`</span> <span class="se">\</span>
+  -lm lm.gz <span class="se">\</span>
+  -max-per-file 100000000 <span class="se">\</span>
+  -order 5 <span class="se">\</span>
+  -kndiscount <span class="se">\</span>
+  -interpolate <span class="se">\</span>
+  -unk <span class="se">\</span>
+  -prune <span class="nv">$PRUNE_THRESHOLD</span>
+</code></pre>
+</div>
+
+<p>The language model attributes chosen are the following:</p>
+
+<ul>
+  <li>N-grams up to order 5</li>
+  <li>Kneser-Ney smoothing</li>
+  <li>N-gram probability estimates at the specified order <em>n</em> are interpolated with
+lower-order estimates</li>
+  <li>include the unknown-word token as a regular word</li>
+  <li>pruning N-grams based on the specified threshold</li>
+</ul>
+
+<p>Next, we will mix the models together into a single file.</p>
+
+<h3 id="step-3-mix-models-together">Step 3: Mix models together</h3>
+
+<p>Using development text, interpolation weights can be determined that give the
+highest weight to the source language models with the lowest perplexity on the
+specified development set.</p>
+
+<h4 id="step-3-1-determine-interpolation-weights">Step 3-1: Determine interpolation weights</h4>
+
+<p>Initiate the following script from the <code class="highlighter-rouge">lm/</code> directory (edit it to specify the
+path to the <code class="highlighter-rouge">ngram</code> binary as well as the path to the development text file):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
+<span class="nb">set</span> -x
+
+<span class="nv">NGRAM</span><span class="o">=</span><span class="nv">$SRILM_SRC</span>/bin/i686-m64/ngram
+<span class="nv">DEV_TEXT</span><span class="o">=</span>~mpost/expts/wmt12/runs/es-en/data/tune/tune.tok.lc.es
+
+<span class="nb">dirs</span><span class="o">=(</span> afp_eng apw_eng cna_eng ltw_eng nyt_eng wpb_eng xin_eng <span class="o">)</span>
+
+<span class="k">for </span>d <span class="k">in</span> <span class="k">${</span><span class="nv">dirs</span><span class="p">[@]</span><span class="k">}</span> ; <span class="k">do</span>
+  <span class="nv">$NGRAM</span> -debug 2 -order 5 -unk -lm <span class="nv">$d</span>/lm.gz -ppl <span class="nv">$DEV_TEXT</span> &gt; <span class="nv">$d</span>/lm.ppl ;
+<span class="k">done
+
+</span>compute-best-mix <span class="k">*</span>/lm.ppl &gt; best-mix.ppl
+</code></pre>
+</div>
+
+<p>Take a look at the contents of <code class="highlighter-rouge">best-mix.ppl</code>. It will contain a sequence of
+values in parentheses. These are the interpolation weights of the source
+language models, in the order specified. Copy and paste the values within the
+parentheses into the script below.</p>
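<p>For orientation, the weights appear in <code class="highlighter-rouge">best-mix.ppl</code> in a line roughly like the following (an illustrative sketch; the values shown are the ones used in the combination script in step 3-2, and your run will produce different numbers):</p>

```
best lambda (0.00631272 0.000647602 0.251555 0.0134726 0.348953 0.371566 0.00749238)
```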
+
+<h4 id="step-3-2-combine-the-models">Step 3-2: Combine the models</h4>
+
+<p>Initiate the following script from the <code class="highlighter-rouge">lm/</code> directory (edit it to specify the
+path to the <code class="highlighter-rouge">ngram</code> binary as well as the interpolation weights):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
+<span class="nb">set</span> -x
+
+<span class="nv">NGRAM</span><span class="o">=</span><span class="nv">$SRILM_SRC</span>/bin/i686-m64/ngram
+<span class="nv">DIRS</span><span class="o">=(</span>   afp_eng    apw_eng     cna_eng  ltw_eng   nyt_eng  wpb_eng  xin_eng <span class="o">)</span>
+<span class="nv">LAMBDAS</span><span class="o">=(</span>0.00631272 0.000647602 0.251555 0.0134726 0.348953 0.371566 0.00749238<span class="o">)</span>
+
+<span class="nv">$NGRAM</span> -order 5 -unk <span class="se">\</span>
+  -lm      <span class="k">${</span><span class="nv">DIRS</span><span class="p">[0]</span><span class="k">}</span>/lm.gz     -lambda  <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[0]</span><span class="k">}</span> <span class="se">\</span>
+  -mix-lm  <span class="k">${</span><span class="nv">DIRS</span><span class="p">[1]</span><span class="k">}</span>/lm.gz <span class="se">\</span>
+  -mix-lm2 <span class="k">${</span><span class="nv">DIRS</span><span class="p">[2]</span><span class="k">}</span>/lm.gz -mix-lambda2 <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[2]</span><span class="k">}</span> <span class="se">\</span>
+  -mix-lm3 <span class="k">${</span><span class="nv">DIRS</span><span class="p">[3]</span><span class="k">}</span>/lm.gz -mix-lambda3 <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[3]</span><span class="k">}</span> <span class="se">\</span>
+  -mix-lm4 <span class="k">${</span><span class="nv">DIRS</span><span class="p">[4]</span><span class="k">}</span>/lm.gz -mix-lambda4 <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[4]</span><span class="k">}</span> <span class="se">\</span>
+  -mix-lm5 <span class="k">${</span><span class="nv">DIRS</span><span class="p">[5]</span><span class="k">}</span>/lm.gz -mix-lambda5 <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[5]</span><span class="k">}</span> <span class="se">\</span>
+  -mix-lm6 <span class="k">${</span><span class="nv">DIRS</span><span class="p">[6]</span><span class="k">}</span>/lm.gz -mix-lambda6 <span class="k">${</span><span class="nv">LAMBDAS</span><span class="p">[6]</span><span class="k">}</span> <span class="se">\</span>
+  -write-lm mixed_lm.gz
+</code></pre>
+</div>
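<p>As a quick sanity check before running the mix, the interpolation weights should sum to approximately 1. A small sketch, using the weights above:</p>

```shell
# The interpolation weights from compute-best-mix should sum to ~1.0.
LAMBDAS="0.00631272 0.000647602 0.251555 0.0134726 0.348953 0.371566 0.00749238"
total=$(echo $LAMBDAS | tr ' ' '\n' | awk '{s += $1} END {printf "%.6f", s}')
echo "lambda total: $total"
```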
+
+<p>The resulting file, <code class="highlighter-rouge">mixed_lm.gz</code>, is a language model based on all the text in
+the Gigaword corpus, with some probabilities biased toward the development text
+specified in step 3-1. It is in the ARPA format. The optional next step converts
+it into KenLM format.</p>
+
+<h4 id="step-3-3-convert-to-kenlm">Step 3-3: Convert to KenLM</h4>
+
+<p>The KenLM format has some speed advantages over the ARPA format. Issuing the
+following command will write a new language model file <code class="highlighter-rouge">mixed_lm.kenlm</code> that
+is the <code class="highlighter-rouge">mixed_lm.gz</code> language model transformed into the KenLM format.</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/src/joshua/decoder/ff/lm/kenlm/build_binary mixed_lm.gz mixed_lm.kenlm
+</code></pre>
+</div>
+
+
+
+          <!--   <h4 class="blog-post-title">Welcome to Joshua!</h4> -->
+
+          <!--   <p>This blog post shows a few different types of content that's supported and styled with Bootstrap. Basic typography, images, and code are all supported.</p> -->
+          <!--   <hr> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis <a href="#">dis parturient montes</a>, nascetur ridiculus mus. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. Sed posuere consectetur est at lobortis. Cras mattis consectetur purus sit amet fermentum.</p> -->
+          <!--   <blockquote> -->
+          <!--     <p>Curabitur blandit tempus porttitor. <strong>Nullam quis risus eget urna mollis</strong> ornare vel eu leo. Nullam id dolor id nibh ultricies vehicula ut id elit.</p> -->
+          <!--   </blockquote> -->
+          <!--   <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet fermentum. Aenean lacinia bibendum nulla sed consectetur.</p> -->
+          <!--   <h2>Heading</h2> -->
+          <!--   <p>Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. Duis mollis, est non commodo luctus, nisi erat porttitor ligula, eget lacinia odio sem nec elit. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</p> -->
+          <!--   <pre><code>Example code block</code></pre> -->
+          <!--   <p>Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> -->
+          <!--   <ul> -->
+          <!--     <li>Praesent commodo cursus magna, vel scelerisque nisl consectetur et.</li> -->
+          <!--     <li>Donec id elit non mi porta gravida at eget metus.</li> -->
+          <!--     <li>Nulla vitae elit libero, a pharetra augue.</li> -->
+          <!--   </ul> -->
+          <!--   <p>Donec ullamcorper nulla non metus auctor fringilla. Nulla vitae elit libero, a pharetra augue.</p> -->
+          <!--   <ol> -->
+          <!--     <li>Vestibulum id ligula porta felis euismod semper.</li> -->
+          <!--     <li>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</li> -->
+          <!--     <li>Maecenas sed diam eget risus varius blandit sit amet non magna.</li> -->
+          <!--   </ol> -->
+          <!--   <p>Cras mattis consectetur purus sit amet fermentum. Sed posuere consectetur est at lobortis.</p> -->
+          <!-- </div><\!-- /.blog-post -\-> -->
+
+        </div>
+
+      </div><!-- /.row -->
+
+      
+        
+    </div><!-- /.container -->
+
+    <!-- Bootstrap core JavaScript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
+    <script src="../../dist/js/bootstrap.min.js"></script>
+    <!-- <script src="../../assets/js/docs.min.js"></script> -->
+    <!-- IE10 viewport hack for Surface/desktop Windows 8 bug -->
+    <!-- <script src="../../assets/js/ie10-viewport-bug-workaround.js"></script>
+    -->
+
+    <!-- Start of StatCounter Code for Default Guide -->
+    <script type="text/javascript">
+      var sc_project=8264132; 
+      var sc_invisible=1; 
+      var sc_security="4b97fe2d"; 
+    </script>
+    <script type="text/javascript" src="http://www.statcounter.com/counter/counter.js"></script>
+    <noscript>
+      <div class="statcounter">
+        <a title="hit counter joomla" 
+           href="http://statcounter.com/joomla/"
+           target="_blank">
+          <img class="statcounter"
+               src="http://c.statcounter.com/8264132/0/4b97fe2d/1/"
+               alt="hit counter joomla" />
+        </a>
+      </div>
+    </noscript>
+    <!-- End of StatCounter Code for Default Guide -->
+  </body>
+</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/large-lms.md
----------------------------------------------------------------------
diff --git a/6.0/large-lms.md b/6.0/large-lms.md
deleted file mode 100644
index a6792dd..0000000
--- a/6.0/large-lms.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-layout: default6
-title: Building large LMs with SRILM
-category: advanced
----
-
-The following is a tutorial for building a large language model from the
-English Gigaword Fifth Edition corpus
-[LDC2011T07](http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2011T07)
-using SRILM. English text is provided from seven different sources.
-
-### Step 0: Clean up the corpus
-
-The Gigaword corpus has to be stripped of all SGML tags and tokenized.
-Instructions for performing those steps are not included in this
-documentation. A description of this process can be found in a paper
-called ["Annotated
-Gigaword"](https://akbcwekex2012.files.wordpress.com/2012/05/28_paper.pdf).
-
-The Joshua package ships with a script that converts all alphabetical
-characters to their lowercase equivalent. The script is located at
-`$JOSHUA/scripts/lowercase.perl`.
-
-Make a directory structure as follows:
-
-    gigaword/
-    ├── corpus/
-    │   ├── afp_eng/
-    │   │   ├── afp_eng_199405.lc.gz
-    │   │   ├── afp_eng_199406.lc.gz
-    │   │   ├── ...
-    │   │   └── counts/
-    │   ├── apw_eng/
-    │   │   ├── apw_eng_199411.lc.gz
-    │   │   ├── apw_eng_199412.lc.gz
-    │   │   ├── ...
-    │   │   └── counts/
-    │   ├── cna_eng/
-    │   │   ├── ...
-    │   │   └── counts/
-    │   ├── ltw_eng/
-    │   │   ├── ...
-    │   │   └── counts/
-    │   ├── nyt_eng/
-    │   │   ├── ...
-    │   │   └── counts/
-    │   ├── wpb_eng/
-    │   │   ├── ...
-    │   │   └── counts/
-    │   └── xin_eng/
-    │       ├── ...
-    │       └── counts/
-    └── lm/
-        ├── afp_eng/
-        ├── apw_eng/
-        ├── cna_eng/
-        ├── ltw_eng/
-        ├── nyt_eng/
-        ├── wpb_eng/
-        └── xin_eng/
-
-
-The next step will be to build smaller LMs and then interpolate them into one
-file.
-
-### Step 1: Count ngrams
-
-Run the following script once from each source directory under the `corpus/`
-directory (edit it to specify the path to the `ngram-count` binary as well as
-the number of processors):
-
-    #!/bin/sh
-
-    NGRAM_COUNT=$SRILM_SRC/bin/i686-m64/ngram-count
-    args=""
-
-    for source in *.gz; do
-       args=$args"-sort -order 5 -text $source -write counts/$source-counts.gz "
-    done
-
-    echo $args | xargs --max-procs=4 -n 7 $NGRAM_COUNT
-
-Then move each `counts/` directory to the corresponding directory under
-`lm/`. Now that each ngram has been counted, we can make a language
-model for each of the seven sources.
-
-### Step 2: Make individual language models
-
-SRILM includes a script, called `make-big-lm`, for building large language
-models under resource-limited environments. The manual for this script can be
-read online
-[here](http://www-speech.sri.com/projects/srilm/manpages/training-scripts.1.html).
-Since the Gigaword corpus is so large, it is convenient to use `make-big-lm`
-even in environments with many parallel processors and a lot of memory.
-
-Initiate the following script from each of the source directories under the
-`lm/` directory (edit it to specify the path to the `make-big-lm` script as
-well as the pruning threshold):
-
-    #!/bin/bash
-    set -x
-
-    CMD=$SRILM_SRC/bin/make-big-lm
-    PRUNE_THRESHOLD=1e-8
-
-    $CMD \
-      -name gigalm `for k in counts/*.gz; do echo " \
-      -read $k "; done` \
-      -lm lm.gz \
-      -max-per-file 100000000 \
-      -order 5 \
-      -kndiscount \
-      -interpolate \
-      -unk \
-      -prune $PRUNE_THRESHOLD
-
-The language model attributes chosen are the following:
-
-* N-grams up to order 5
-* Kneser-Ney smoothing
-* N-gram probability estimates at the specified order *n* are interpolated with
-  lower-order estimates
-* include the unknown-word token as a regular word
-* pruning N-grams based on the specified threshold
-
-Next, we will mix the models together into a single file.
-
-### Step 3: Mix models together
-
-Using development text, interpolation weights can be determined that give the
-highest weight to the source language models with the lowest perplexity on the
-specified development set.
-
-#### Step 3-1: Determine interpolation weights
-
-Initiate the following script from the `lm/` directory (edit it to specify the
-path to the `ngram` binary as well as the path to the development text file):
-
-    #!/bin/bash
-    set -x
-
-    NGRAM=$SRILM_SRC/bin/i686-m64/ngram
-    DEV_TEXT=~mpost/expts/wmt12/runs/es-en/data/tune/tune.tok.lc.es
-
-    dirs=( afp_eng apw_eng cna_eng ltw_eng nyt_eng wpb_eng xin_eng )
-
-    for d in ${dirs[@]} ; do
-      $NGRAM -debug 2 -order 5 -unk -lm $d/lm.gz -ppl $DEV_TEXT > $d/lm.ppl ;
-    done
-
-    compute-best-mix */lm.ppl > best-mix.ppl
-
-Take a look at the contents of `best-mix.ppl`. It will contain a sequence of
-values in parentheses. These are the interpolation weights of the source
-language models, in the order specified. Copy and paste the values within the
-parentheses into the script below.
-
-#### Step 3-2: Combine the models
-
-Initiate the following script from the `lm/` directory (edit it to specify the
-path to the `ngram` binary as well as the interpolation weights):
-
-    #!/bin/bash
-    set -x
-
-    NGRAM=$SRILM_SRC/bin/i686-m64/ngram
-    DIRS=(   afp_eng    apw_eng     cna_eng  ltw_eng   nyt_eng  wpb_eng  xin_eng )
-    LAMBDAS=(0.00631272 0.000647602 0.251555 0.0134726 0.348953 0.371566 0.00749238)
-
-    $NGRAM -order 5 -unk \
-      -lm      ${DIRS[0]}/lm.gz     -lambda  ${LAMBDAS[0]} \
-      -mix-lm  ${DIRS[1]}/lm.gz \
-      -mix-lm2 ${DIRS[2]}/lm.gz -mix-lambda2 ${LAMBDAS[2]} \
-      -mix-lm3 ${DIRS[3]}/lm.gz -mix-lambda3 ${LAMBDAS[3]} \
-      -mix-lm4 ${DIRS[4]}/lm.gz -mix-lambda4 ${LAMBDAS[4]} \
-      -mix-lm5 ${DIRS[5]}/lm.gz -mix-lambda5 ${LAMBDAS[5]} \
-      -mix-lm6 ${DIRS[6]}/lm.gz -mix-lambda6 ${LAMBDAS[6]} \
-      -write-lm mixed_lm.gz
-
-The resulting file, `mixed_lm.gz`, is a language model based on all the text in
-the Gigaword corpus, with some probabilities biased toward the development text
-specified in step 3-1. It is in the ARPA format. The optional next step converts
-it into KenLM format.
-
-#### Step 3-3: Convert to KenLM
-
-The KenLM format has some speed advantages over the ARPA format. Issuing the
-following command will write a new language model file `mixed_lm.kenlm` that
-is the `mixed_lm.gz` language model transformed into the KenLM format.
-
-    $JOSHUA/src/joshua/decoder/ff/lm/kenlm/build_binary mixed_lm.gz mixed_lm.kenlm
-

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/packing.html
----------------------------------------------------------------------
diff --git a/6.0/packing.html b/6.0/packing.html
new file mode 100644
index 0000000..647dd68
--- /dev/null
+++ b/6.0/packing.html
@@ -0,0 +1,277 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta name="description" content="">
+    <meta name="author" content="">
+    <link rel="icon" href="../../favicon.ico">
+
+    <title>Joshua Documentation | Grammar Packing</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/dist/css/bootstrap.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link href="/joshua6.css" rel="stylesheet">
+  </head>
+
+  <body>
+
+    <div class="blog-masthead">
+      <div class="container">
+        <nav class="blog-nav">
+          <!-- <a class="blog-nav-item active" href="#">Joshua</a> -->
+          <a class="blog-nav-item" href="/">Joshua</a>
+          <!-- <a class="blog-nav-item" href="/6.0/whats-new.html">New features</a> -->
+          <a class="blog-nav-item" href="/language-packs/">Language packs</a>
+          <a class="blog-nav-item" href="/data/">Datasets</a>
+          <a class="blog-nav-item" href="/support/">Support</a>
+          <a class="blog-nav-item" href="/contributors.html">Contributors</a>
+        </nav>
+      </div>
+    </div>
+
+    <div class="container">
+
+      <div class="row">
+
+        <div class="col-sm-2">
+          <div class="sidebar-module">
+            <!-- <h4>About</h4> -->
+            <center>
+            <img src="/images/joshua-logo-small.png" />
+            <p>Joshua machine translation toolkit</p>
+            </center>
+          </div>
+          <hr>
+          <center>
+            <a href="/releases/current/" target="_blank"><button class="button">Download Joshua 6.0.5</button></a>
+            <br />
+            <a href="/releases/runtime/" target="_blank"><button class="button">Runtime only version</button></a>
+            <p>Released November 5, 2015</p>
+          </center>
+          <hr>
+          <!-- <div class="sidebar-module"> -->
+          <!--   <span id="download"> -->
+          <!--     <a href="http://joshua-decoder.org/downloads/joshua-6.0.tgz">Download</a> -->
+          <!--   </span> -->
+          <!-- </div> -->
+          <div class="sidebar-module">
+            <h4>Using Joshua</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/install.html">Installation</a></li>
+              <li><a href="/6.0/quick-start.html">Quick Start</a></li>
+            </ol>
+          </div>
+          <hr>
+          <div class="sidebar-module">
+            <h4>Building new models</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/pipeline.html">Pipeline</a></li>
+              <li><a href="/6.0/tutorial.html">Tutorial</a></li>
+              <li><a href="/6.0/faq.html">FAQ</a></li>
+            </ol>
+          </div>
+<!--
+          <div class="sidebar-module">
+            <h4>Phrase-based</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/phrase.html">Training</a></li>
+            </ol>
+          </div>
+-->
+          <hr>
+          <div class="sidebar-module">
+            <h4>Advanced</h4>
+            <ol class="list-unstyled">
+              <li><a href="/6.0/bundle.html">Building language packs</a></li>
+              <li><a href="/6.0/decoder.html">Decoder options</a></li>
+              <li><a href="/6.0/file-formats.html">File formats</a></li>
+              <li><a href="/6.0/packing.html">Packing TMs</a></li>
+              <li><a href="/6.0/large-lms.html">Building large LMs</a></li>
+            </ol>
+          </div>
+
+          <hr> 
+          <div class="sidebar-module">
+            <h4>Developer</h4>
+            <ol class="list-unstyled">              
+		<li><a href="https://github.com/joshua-decoder/joshua">Github</a></li>
+		<li><a href="http://cs.jhu.edu/~post/joshua-docs">Javadoc</a></li>
+		<li><a href="https://groups.google.com/forum/?fromgroups#!forum/joshua_developers">Mailing list</a></li>              
+            </ol>
+          </div>
+
+        </div><!-- /.blog-sidebar -->
+
+        
+        <div class="col-sm-8 blog-main">
+        
+
+          <div class="blog-title">
+            <h2>Grammar Packing</h2>
+          </div>
+          
+          <div class="blog-post">
+
+            <p>Grammar packing refers to the process of taking a textual grammar
+output by <a href="thrax.html">Thrax</a> (or Moses, for phrase-based models) and
+efficiently encoding it so that it can be loaded
+<a href="https://aclweb.org/anthology/W/W12/W12-3134.pdf">very quickly</a> —
+packing the grammar results in significantly faster load times for
+very large grammars.  Packing is done automatically by the
+<a href="pipeline.html">Joshua pipeline</a>, but you can also run the packer
+manually.</p>
+
+<p>The script can be found at
+<code class="highlighter-rouge">$JOSHUA/scripts/support/grammar-packer.pl</code>. See that script for
+example usage. You can then use the packed grammar from a Joshua config file by
+replacing a <code class="highlighter-rouge">tm</code> path to the compressed text-file format with the path
+to the packed grammar directory (Joshua will detect automatically that
+it is packed, since a packed grammar is a directory).</p>
+
+<p>Packing the grammar requires first sorting it by the rules' source sides,
+which can take quite a bit of temporary space.</p>
+
+<p><em>CAVEAT</em>: You may run into problems packing very very large Hiero
+ grammars. Email the support list if you do.</p>
+
+<h3 id="examples">Examples</h3>
+
+<p>A Hiero grammar, using the compressed text file version:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>tm = hiero -owner pt -maxspan 20 -path grammar.filtered.gz
+</code></pre>
+</div>
+
+<p>Pack it:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/scripts/support/grammar-packer.pl grammar.filtered.gz grammar.packed
+</code></pre>
+</div>
+
+<p>Pack a really big grammar:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/scripts/support/grammar-packer.pl -m 30g grammar.filtered.gz grammar.packed
+</code></pre>
+</div>
+
+<p>Be a little more verbose (here assuming the packer's <code class="highlighter-rouge">-v</code> flag; check the script's usage message):</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/scripts/support/grammar-packer.pl -v -m 30g grammar.filtered.gz grammar.packed
+</code></pre>
+</div>
+
+<p>If you have a different temp file location:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/scripts/support/grammar-packer.pl -T /local grammar.filtered.gz grammar.packed
+</code></pre>
+</div>
+
+<p>Update the config file line:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>tm = hiero -owner pt -maxspan 20 -path grammar.packed
+</code></pre>
+</div>
+
+<h3 id="using-multiple-packed-grammars-joshua-605">Using multiple packed grammars (Joshua 6.0.5)</h3>
+
+<p>Packed grammars serialize their vocabularies, which previously prevented the
+use of multiple packed grammars during decoding. As of Joshua 6.0.5, multiple
+packed grammars can be used during decoding, provided they share the same
+serialized vocabulary. This is achieved by packing the grammars jointly with a
+revised packing CLI.</p>
+
+<p>To pack multiple grammars:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>$JOSHUA/scripts/support/grammar-packer.pl grammar1.filtered.gz grammar2.filtered.gz [...] grammar1.packed grammar2.packed [...]
+</code></pre>
+</div>
+
+<p>This will produce two packed grammars with the same vocabulary. To use them in the decoder, put this in your <code class="highlighter-rouge">joshua.config</code>:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>tm = hiero -owner pt -maxspan 20 -path grammar1.packed
+tm = hiero -owner pt2 -maxspan 20 -path grammar2.packed
+</code></pre>
+</div>
+
+<p>Note the different owners.
+If you are trying to load multiple packed grammars that do not have the same
+vocabulary, the decoder will throw a RuntimeException at loading time:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>Exception in thread "main" java.lang.RuntimeException: Trying to load multiple packed grammars with different vocabularies! Have you packed them jointly?
+</code></pre>
+</div>
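<p>One way to check ahead of time whether two packed grammars can be loaded together is to compare their serialized vocabularies. This sketch assumes each packed directory contains a <code class="highlighter-rouge">vocabulary</code> file (inspect your packed grammars to confirm); the demo directories and their contents below are fabricated purely for illustration:</p>

```shell
# Compare the serialized vocabulary files of two packed grammar directories.
check_joint() {
  if cmp -s "$1/vocabulary" "$2/vocabulary"; then
    echo "shared vocabulary: OK to load together"
  else
    echo "different vocabularies: pack them jointly first"
  fi
}

# Tiny demo with two made-up packed directories sharing a vocabulary.
mkdir -p demo1.packed demo2.packed
printf 'the\ncat\n' > demo1.packed/vocabulary
printf 'the\ncat\n' > demo2.packed/vocabulary
check_joint demo1.packed demo2.packed
```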
+
+
+          <!--   <h4 class="blog-post-title">Welcome to Joshua!</h4> -->
+
+          <!--   <p>This blog post shows a few different types of content that's supported and styled with Bootstrap. Basic typography, images, and code are all supported.</p> -->
+          <!--   <hr> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis <a href="#">dis parturient montes</a>, nascetur ridiculus mus. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. Sed posuere consectetur est at lobortis. Cras mattis consectetur purus sit amet fermentum.</p> -->
+          <!--   <blockquote> -->
+          <!--     <p>Curabitur blandit tempus porttitor. <strong>Nullam quis risus eget urna mollis</strong> ornare vel eu leo. Nullam id dolor id nibh ultricies vehicula ut id elit.</p> -->
+          <!--   </blockquote> -->
+          <!--   <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet fermentum. Aenean lacinia bibendum nulla sed consectetur.</p> -->
+          <!--   <h2>Heading</h2> -->
+          <!--   <p>Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor. Duis mollis, est non commodo luctus, nisi erat porttitor ligula, eget lacinia odio sem nec elit. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</p> -->
+          <!--   <pre><code>Example code block</code></pre> -->
+          <!--   <p>Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa.</p> -->
+          <!--   <h3>Sub-heading</h3> -->
+          <!--   <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aenean lacinia bibendum nulla sed consectetur. Etiam porta sem malesuada magna mollis euismod. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> -->
+          <!--   <ul> -->
+          <!--     <li>Praesent commodo cursus magna, vel scelerisque nisl consectetur et.</li> -->
+          <!--     <li>Donec id elit non mi porta gravida at eget metus.</li> -->
+          <!--     <li>Nulla vitae elit libero, a pharetra augue.</li> -->
+          <!--   </ul> -->
+          <!--   <p>Donec ullamcorper nulla non metus auctor fringilla. Nulla vitae elit libero, a pharetra augue.</p> -->
+          <!--   <ol> -->
+          <!--     <li>Vestibulum id ligula porta felis euismod semper.</li> -->
+          <!--     <li>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.</li> -->
+          <!--     <li>Maecenas sed diam eget risus varius blandit sit amet non magna.</li> -->
+          <!--   </ol> -->
+          <!--   <p>Cras mattis consectetur purus sit amet fermentum. Sed posuere consectetur est at lobortis.</p> -->
+          <!-- </div><\!-- /.blog-post -\-> -->
+
+        </div>
+
+      </div><!-- /.row -->
+
+      
+        
+    </div><!-- /.container -->
+
+    <!-- Bootstrap core JavaScript
+    ================================================== -->
+    <!-- Placed at the end of the document so the pages load faster -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
+    <script src="../../dist/js/bootstrap.min.js"></script>
+    <!-- <script src="../../assets/js/docs.min.js"></script> -->
+    <!-- IE10 viewport hack for Surface/desktop Windows 8 bug -->
+    <!-- <script src="../../assets/js/ie10-viewport-bug-workaround.js"></script>
+    -->
+
+    <!-- Start of StatCounter Code for Default Guide -->
+    <script type="text/javascript">
+      var sc_project=8264132; 
+      var sc_invisible=1; 
+      var sc_security="4b97fe2d"; 
+    </script>
+    <script type="text/javascript" src="http://www.statcounter.com/counter/counter.js"></script>
+    <noscript>
+      <div class="statcounter">
+        <a title="hit counter joomla" 
+           href="http://statcounter.com/joomla/"
+           target="_blank">
+          <img class="statcounter"
+               src="http://c.statcounter.com/8264132/0/4b97fe2d/1/"
+               alt="hit counter joomla" />
+        </a>
+      </div>
+    </noscript>
+    <!-- End of StatCounter Code for Default Guide -->
+  </body>
+</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-joshua-site/blob/53cc3005/6.0/packing.md
----------------------------------------------------------------------
diff --git a/6.0/packing.md b/6.0/packing.md
deleted file mode 100644
index 8d84004..0000000
--- a/6.0/packing.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-layout: default6
-category: advanced
-title: Grammar Packing
----
-
-Grammar packing refers to the process of taking a textual grammar
-output by [Thrax](thrax.html) (or Moses, for phrase-based models) and
-efficiently encoding it so that it can be loaded
-[very quickly](https://aclweb.org/anthology/W/W12/W12-3134.pdf) ---
-packing the grammar results in significantly faster load times for
-very large grammars.  Packing is done automatically by the
-[Joshua pipeline](pipeline.html), but you can also run the packer
-manually.
-
-The script can be found at
-`$JOSHUA/scripts/support/grammar-packer.pl`. See that script for
-example usage. You can then add it to a Joshua config file, simply
-replacing a `tm` path to the compressed text-file format with a path
-to the packed grammar directory (Joshua will automatically detect that
-it is packed, since a packed grammar is a directory).
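The detection rule above (a packed grammar is a directory; a text-format grammar is a file) can be sketched as follows. This is a minimal illustration, not Joshua's actual code, and the function name is hypothetical:

```python
import os

def is_packed_grammar(path):
    """Mimic Joshua's auto-detection: a packed grammar is a directory,
    while a (possibly gzipped) text grammar is a regular file."""
    return os.path.isdir(path)
```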
-
-Packing the grammar requires first sorting it by the rules' source
-sides, which can require a sizable amount of temporary disk space.
-
-*CAVEAT*: You may run into problems when packing very large Hiero
-grammars. Email the support list if you do.
-
-### Examples
-
-A Hiero grammar, using the compressed text file version:
-
-    tm = hiero -owner pt -maxspan 20 -path grammar.filtered.gz
-
-Pack it:
-
-    $JOSHUA/scripts/support/grammar-packer.pl grammar.filtered.gz grammar.packed
-
-Pack a really big grammar:
-
-    $JOSHUA/scripts/support/grammar-packer.pl -m 30g grammar.filtered.gz grammar.packed
-
-Be a little more verbose (the `-v` flag):
-
-    $JOSHUA/scripts/support/grammar-packer.pl -v -m 30g grammar.filtered.gz grammar.packed
-
-Use a different temporary file location:
-
-    $JOSHUA/scripts/support/grammar-packer.pl -T /local grammar.filtered.gz grammar.packed
-
-Update the config file line:
-
-    tm = hiero -owner pt -maxspan 20 -path grammar.packed
-
-### Using multiple packed grammars (Joshua 6.0.5)
-
-Packed grammars serialize their vocabularies, which previously
-prevented the use of more than one packed grammar during decoding.
-As of Joshua 6.0.5, multiple packed grammars can be used at decoding
-time, provided they share the same serialized vocabulary. This is
-achieved by packing the grammars jointly using a revised packing CLI.
-
-To pack multiple grammars:
-
-    $JOSHUA/scripts/support/grammar-packer.pl grammar1.filtered.gz grammar2.filtered.gz [...] grammar1.packed grammar2.packed [...]
-
-This will produce two packed grammars with the same vocabulary. To use them in the decoder, put this in your `joshua.config`:
-
-    tm = hiero -owner pt -maxspan 20 -path grammar1.packed
-    tm = hiero -owner pt2 -maxspan 20 -path grammar2.packed
-
-Note the different owners.
-If you try to load multiple packed grammars that do not share the same
-vocabulary, the decoder will throw a RuntimeException at load time:
-
-    Exception in thread "main" java.lang.RuntimeException: Trying to load multiple packed grammars with different vocabularies! Have you packed them jointly?
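The shared-vocabulary requirement can be pictured with a small sketch that compares the serialized vocabulary files of two packed-grammar directories. This is only an illustration of the constraint, not the check Joshua performs, and the file name `vocabulary` is an assumption:

```python
import filecmp
import os

def share_vocabulary(packed_dir1, packed_dir2, vocab_name="vocabulary"):
    """Return True if two packed-grammar directories carry byte-identical
    serialized vocabulary files (illustrative sketch; the file name and
    the comparison are assumptions, not Joshua's load-time test)."""
    v1 = os.path.join(packed_dir1, vocab_name)
    v2 = os.path.join(packed_dir2, vocab_name)
    return filecmp.cmp(v1, v2, shallow=False)
```

Grammars packed jointly with the revised CLI satisfy this property by construction, which is why the decoder accepts them together.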