Posted to commits@flink.apache.org by ch...@apache.org on 2017/09/05 09:44:31 UTC

flink-web git commit: [FLINK-7570] Fix broken/missing links on FAQ page

Repository: flink-web
Updated Branches:
  refs/heads/asf-site dfe4744ff -> fed999cd0


[FLINK-7570] Fix broken/missing links on FAQ page

This closes #83.


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/fed999cd
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/fed999cd
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/fed999cd

Branch: refs/heads/asf-site
Commit: fed999cd0c0d750839b23f065d1b56037f7b3237
Parents: dfe4744
Author: zhouhai02 <zh...@meituan.com>
Authored: Sat Sep 2 01:39:54 2017 +0800
Committer: zentol <ch...@apache.org>
Committed: Tue Sep 5 11:41:30 2017 +0200

----------------------------------------------------------------------
 content/faq.html | 28 ++++++++++++++--------------
 faq.md           | 28 ++++++++++++++--------------
 2 files changed, 28 insertions(+), 28 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/fed999cd/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index dd8a19e..1672fb2 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -156,7 +156,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-<p>The following questions are frequently asked with regard to the Flink project <strong>in general</strong>. If you have further questions, make sure to consult the <a href="">documentation</a> or <a href="">ask the community</a>.</p>
+<p>The following questions are frequently asked with regard to the Flink project <strong>in general</strong>. If you have further questions, make sure to consult the <a href="http://ci.apache.org/projects/flink/flink-docs-master">documentation</a> or <a href="/community.html">ask the community</a>.</p>
 
 <div class="page-toc">
 <ul id="markdown-toc">
@@ -224,7 +224,7 @@ File System (HDFS). To make these setups work out of the box, Flink bundles the
 Hadoop client libraries by default.</p>
 
 <p>Additionally, we provide a special YARN Enabled download of Flink for
-users with an existing Hadoop YARN cluster. <a href="http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YARN.html">Apache Hadoop
+users with an existing Hadoop YARN cluster. <a href="http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html">Apache Hadoop
 YARN</a>
 is Hadoop’s cluster resource manager that allows use of
 different execution engines next to each other on a cluster.</p>
@@ -262,10 +262,10 @@ of the master and the worker where the exception occurred
 <h3 id="how-do-i-debug-flink-programs">How do I debug Flink programs?</h3>
 
 <ul>
-  <li>When you start a program locally with the <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/local_execution.html">LocalExecutor</a>,
+  <li>When you start a program locally with the <a href="http://ci.apache.org/projects/flink/flink-docs-master/dev/local_execution.html">LocalExecutor</a>,
 you can place breakpoints in your functions and debug them like normal
 Java/Scala programs.</li>
-  <li>The <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#accumulators--counters">Accumulators</a> are very helpful in
+  <li>The <a href="http://ci.apache.org/projects/flink/flink-docs-master/dev/api_concepts.html#accumulators--counters">Accumulators</a> are very helpful in
 tracking the behavior of the parallel execution. They allow you to gather
 information inside the program’s operations and show them after the program
 execution.</li>
@@ -289,15 +289,15 @@ parallelism has to be 1 and set it accordingly.</p>
 
 <p>The parallelism can be set in numerous ways to ensure a fine-grained control
 over the execution of a Flink program. See
-the <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options">Configuration guide</a> for detailed instructions on how to
-set the parallelism. Also check out <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing
+the <a href="http://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#common-options">Configuration guide</a> for detailed instructions on how to
+set the parallelism. Also check out <a href="http://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing
 how the processing slots and parallelism are related to each other.</p>
 
 <h2 id="errors">Errors</h2>
 
 <h3 id="why-am-i-getting-a-nonserializableexception-">Why am I getting a “NonSerializableException” ?</h3>
 
-<p>All functions in Flink must be serializable, as defined by <a href="http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html">java.io.Serializable</a>.
+<p>All functions in Flink must be serializable, as defined by <a href="http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html">java.io.Serializable</a>.
 Since all function interfaces are serializable, the exception means that one
 of the fields used in your function is not serializable.</p>
 
@@ -325,7 +325,7 @@ This can be achieved by using a context bound:</p>
   <span class="n">input</span><span class="o">.</span><span class="n">reduceGroup</span><span class="o">(</span> <span class="n">i</span> <span class="k">=&gt;</span> <span class="n">i</span><span class="o">.</span><span class="n">toSeq</span> <span class="o">)</span>
 <span class="o">}</span></code></pre></div>
 
-<p>See <a href="http://ci.apache.org/projects/flink/flink-docs-master/internals/types_serialization.html">Type Extraction and Serialization</a> for
+<p>See <a href="http://ci.apache.org/projects/flink/flink-docs-master/dev/types_serialization.html">Type Extraction and Serialization</a> for
 an in-depth discussion of how Flink handles types.</p>
 
 <h3 id="i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this">I get an error message saying that not enough buffers are available. How do I fix this?</h3>
@@ -335,7 +335,7 @@ you need to adapt the number of network buffers via the config parameter
 <code>taskmanager.network.numberOfBuffers</code>.
 As a rule-of-thumb, the number of buffers should be at least
 <code>4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2</code>. See
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-the-network-buffers">Configuration Reference</a> for details.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#configuring-the-network-buffers">Configuration Reference</a> for details.</p>
 
 <h3 id="my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My job fails early with a java.io.EOFException. What could be the cause?</h3>
 
@@ -356,7 +356,7 @@ breaks.</p>
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize<span class="o">(</span>DistributedFileSystem.java:82<span class="o">)</span>
     at org.apache.flinkruntime.fs.hdfs.DistributedFileSystem.initialize<span class="o">(</span>DistributedFileSystem.java:276</code></pre></div>
 
-<p>Please refer to the <a href="/downloads.html#maven">download page</a> and
+<p>Please refer to the <a href="/downloads.html">download page</a> and
 the <a href="https://github.com/apache/flink/tree/master/README.md">build instructions</a>
 for details on how to set up Flink for different Hadoop and HDFS versions.</p>
 
@@ -464,7 +464,7 @@ destage operations to disk, if necessary. By default, the system reserves around
 70% of the memory. If you frequently run applications that need more memory in
 the user-defined functions, you can reduce that value using the configuration
 entries <code>taskmanager.memory.fraction</code> or <code>taskmanager.memory.size</code>. See the
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html">Configuration Reference</a> for details. This will leave more memory to JVM heap,
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/ops/config.html">Configuration Reference</a> for details. This will leave more memory to JVM heap,
 but may cause data processing tasks to go to disk more often.</p>
   </li>
 </ol>
@@ -548,7 +548,7 @@ this happened. You see messages from Linux’ <a href="http://linux-mm.org/OOM_K
   <li>Native libraries (RocksDB)</li>
 </ul>
 
-<p>You can activate the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.0/setup/config.html#memory-and-performance-debugging">memory debug logger</a> to get more insight into what memory pool is actually using up too much memory.</p>
+<p>You can activate the <a href="http://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#memory-and-performance-debugging">memory debug logger</a> to get more insight into what memory pool is actually using up too much memory.</p>
 
 <h3 id="the-yarn-session-crashes-with-a-hdfs-permission-exception-during-startup">The YARN session crashes with a HDFS permission exception during startup</h3>
 
@@ -633,12 +633,12 @@ This mechanism is both efficient and flexible. See the documentation on <a href=
 
 <h3 id="are-hadoop-like-utilities-such-as-counters-and-the-distributedcache-supported">Are Hadoop-like utilities, such as Counters and the DistributedCache supported?</h3>
 
-<p><a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#accumulators--counters">Flink’s Accumulators</a> work very similar like
+<p><a href="http://ci.apache.org/projects/flink/flink-docs-master/dev/api_concepts.html#accumulators--counters">Flink’s Accumulators</a> work very similar like
 Hadoop’s counters, but are more powerful.</p>
 
 <p>Flink has a <a href="https://github.com/apache/flink/tree/master/flink-core/src/main/java/org/apache/flink/api/common/cache/DistributedCache.java">Distributed Cache</a> that is deeply integrated with the APIs. Please refer to the <a href="https://github.com/apache/flink/tree/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L831">JavaDocs</a> for details on how to use it.</p>
 
-<p>In order to make data sets available on all tasks, we encourage you to use <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#broadcast-variables">Broadcast Variables</a> instead. They are more efficient and easier to use than the distributed cache.</p>
+<p>In order to make data sets available on all tasks, we encourage you to use <a href="http://ci.apache.org/projects/flink/flink-docs-master/dev/batch/index.html#broadcast-variables">Broadcast Variables</a> instead. They are more efficient and easier to use than the distributed cache.</p>
 
 
   </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/fed999cd/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index 0173b17..0d61be7 100755
--- a/faq.md
+++ b/faq.md
@@ -20,7 +20,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The following questions are frequently asked with regard to the Flink project **in general**. If you have further questions, make sure to consult the [documentation]() or [ask the community]().
+The following questions are frequently asked with regard to the Flink project **in general**. If you have further questions, make sure to consult the [documentation]({{site.docs-snapshot}}) or [ask the community]({{ site.baseurl }}/community.html).
 
 {% toc %}
 
@@ -50,7 +50,7 @@ Hadoop client libraries by default.
 
 Additionally, we provide a special YARN Enabled download of Flink for
 users with an existing Hadoop YARN cluster. [Apache Hadoop
-YARN](http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YARN.html)
+YARN](http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)
 is Hadoop's cluster resource manager that allows use of
 different execution engines next to each other on a cluster.
 
@@ -82,10 +82,10 @@ of the master and the worker where the exception occurred
 
 ### How do I debug Flink programs?
 
-- When you start a program locally with the [LocalExecutor]({{site.docs-snapshot}}/apis/local_execution.html),
+- When you start a program locally with the [LocalExecutor]({{site.docs-snapshot}}/dev/local_execution.html),
 you can place breakpoints in your functions and debug them like normal
 Java/Scala programs.
-- The [Accumulators]({{ site.docs-snapshot }}/apis/programming_guide.html#accumulators--counters) are very helpful in
+- The [Accumulators]({{ site.docs-snapshot }}/dev/api_concepts.html#accumulators--counters) are very helpful in
 tracking the behavior of the parallel execution. They allow you to gather
 information inside the program's operations and show them after the program
 execution.
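
For illustration, a minimal accumulator sketch against the Scala DataSet API of that era; the job, class name, and accumulator name are made up, not part of this commit:

~~~scala
import org.apache.flink.api.common.accumulators.IntCounter
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.api.java.io.DiscardingOutputFormat
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration

class LineCounter extends RichMapFunction[String, String] {
  private val counter = new IntCounter()

  override def open(parameters: Configuration): Unit = {
    // Register the accumulator under a job-wide name.
    getRuntimeContext.addAccumulator("num-lines", counter)
  }

  override def map(value: String): String = {
    counter.add(1) // counted in every parallel instance
    value
  }
}

val env = ExecutionEnvironment.getExecutionEnvironment
env.fromElements("a", "b", "c")
  .map(new LineCounter())
  .output(new DiscardingOutputFormat[String]())
val result = env.execute("accumulator example")
// Results are merged across all parallel instances once the job finishes.
println(result.getAccumulatorResult[Integer]("num-lines"))
~~~
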
@@ -108,15 +108,15 @@ parallelism has to be 1 and set it accordingly.
 
 The parallelism can be set in numerous ways to ensure a fine-grained control
 over the execution of a Flink program. See
-the [Configuration guide]({{ site.docs-snapshot }}/setup/config.html#common-options) for detailed instructions on how to
-set the parallelism. Also check out [this figure]({{ site.docs-snapshot }}/setup/config.html#configuring-taskmanager-processing-slots) detailing
+the [Configuration guide]({{ site.docs-snapshot }}/ops/config.html#common-options) for detailed instructions on how to
+set the parallelism. Also check out [this figure]({{ site.docs-snapshot }}/ops/config.html#configuring-taskmanager-processing-slots) detailing
 how the processing slots and parallelism are related to each other.
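
For example (an illustrative sketch, not from this commit), the parallelism can be set per job on the environment and overridden per operator:

~~~scala
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4) // default for all operators of this job

env.fromElements("a", "b", "a")
  .map(w => (w, 1))
  .groupBy(0)
  .sum(1)
  .setParallelism(2) // override for this operator only
  .print()
~~~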
 
 ## Errors
 
 ### Why am I getting a "NonSerializableException" ?
 
-All functions in Flink must be serializable, as defined by [java.io.Serializable](http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html).
+All functions in Flink must be serializable, as defined by [java.io.Serializable](http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html).
 Since all function interfaces are serializable, the exception means that one
 of the fields used in your function is not serializable.
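
A common fix is to keep the offending field transient and create it in `open()`, which runs on each worker after the function has been deserialized. An illustrative sketch; `LookupClient` is a made-up stand-in for any non-serializable dependency:

~~~scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

class LookupClient { def lookup(key: String): String = key.toUpperCase } // not serializable

class EnrichingMapper extends RichMapFunction[String, String] {
  // Transient, so it is not shipped with the serialized function object.
  @transient private var client: LookupClient = _

  override def open(parameters: Configuration): Unit = {
    client = new LookupClient() // created locally on each worker
  }

  override def map(value: String): String = client.lookup(value)
}
~~~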
 
@@ -144,7 +144,7 @@ def myFunction[T: TypeInformation](input: DataSet[T]): DataSet[Seq[T]] = {
 }
 ~~~
 
-See [Type Extraction and Serialization]({{ site.docs-snapshot }}/internals/types_serialization.html) for
+See [Type Extraction and Serialization]({{ site.docs-snapshot }}/dev/types_serialization.html) for
 an in-depth discussion of how Flink handles types.
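
At the call site the compiler then supplies the implicit `TypeInformation` for the concrete element type, e.g. (illustrative, reusing `myFunction` from the snippet above):

~~~scala
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
// TypeInformation[Int] is resolved implicitly through the context bound.
val seqs: DataSet[Seq[Int]] = myFunction(env.fromElements(1, 2, 3))
~~~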
 
 ### I get an error message saying that not enough buffers are available. How do I fix this?
@@ -154,7 +154,7 @@ you need to adapt the number of network buffers via the config parameter
 `taskmanager.network.numberOfBuffers`.
 As a rule-of-thumb, the number of buffers should be at least
 `4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2`. See
-[Configuration Reference]({{ site.docs-snapshot }}/setup/config.html#configuring-the-network-buffers) for details.
+[Configuration Reference]({{ site.docs-snapshot }}/ops/config.html#configuring-the-network-buffers) for details.
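
For example, 4 TaskManagers with 8 slots each need at least `4 * 4 * 8^2 = 1024` buffers. An illustrative `flink-conf.yaml` entry (the value is made up):

~~~
# rule of thumb: 4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2
# here: 4 * 4 * 64 = 1024, rounded up for headroom
taskmanager.network.numberOfBuffers: 2048
~~~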
 
 ### My job fails early with a java.io.EOFException. What could be the cause?
 
@@ -177,7 +177,7 @@ Call to <host:port> failed on local exception: java.io.EOFException
     at org.apache.flinkruntime.fs.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:276
 ~~~
 
-Please refer to the [download page]({{ site.baseurl }}/downloads.html#maven) and
+Please refer to the [download page]({{ site.baseurl }}/downloads.html) and
 the {% github README.md master "build instructions" %}
 for details on how to set up Flink for different Hadoop and HDFS versions.
 
@@ -276,7 +276,7 @@ destage operations to disk, if necessary. By default, the system reserves around
 70% of the memory. If you frequently run applications that need more memory in
 the user-defined functions, you can reduce that value using the configuration
 entries `taskmanager.memory.fraction` or `taskmanager.memory.size`. See the
-[Configuration Reference]({{ site.docs-snapshot }}/setup/config.html) for details. This will leave more memory to JVM heap,
+[Configuration Reference]({{ site.docs-snapshot }}/ops/config.html) for details. This will leave more memory to JVM heap,
 but may cause data processing tasks to go to disk more often.
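
An illustrative `flink-conf.yaml` adjustment (the value is made up; the default reserves around 70%):

~~~
# Reserve 50% instead of ~70% of the memory for Flink's managed memory,
# leaving more heap for user-defined functions.
taskmanager.memory.fraction: 0.5
~~~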
 
 Another reason for OutOfMemoryExceptions is the use of the wrong state backend.
@@ -352,7 +352,7 @@ In that case, the JVM process grew too large. Because the Java heap size is alwa
   - PermGen space (strings and classes), code caches, memory mapped jar files
   - Native libraries (RocksDB)
 
-You can activate the [memory debug logger](https://ci.apache.org/projects/flink/flink-docs-release-1.0/setup/config.html#memory-and-performance-debugging) to get more insight into what memory pool is actually using up too much memory.
+You can activate the [memory debug logger]({{ site.docs-snapshot }}/ops/config.html#memory-and-performance-debugging) to get more insight into what memory pool is actually using up too much memory.
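
A sketch of the relevant `flink-conf.yaml` entries; the keys are taken from the linked configuration page and should be verified against your Flink version:

~~~
# Periodically log the usage of the different memory pools.
taskmanager.debug.memory.startLogThread: true
taskmanager.debug.memory.logIntervalMs: 5000
~~~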
 
 
 ### The YARN session crashes with a HDFS permission exception during startup
@@ -442,9 +442,9 @@ For batch processing programs Flink remembers the program's sequence of transfor
 
 ### Are Hadoop-like utilities, such as Counters and the DistributedCache supported?
 
-[Flink's Accumulators]({{ site.docs-snapshot }}/apis/programming_guide.html#accumulators--counters) work very similar like
+[Flink's Accumulators]({{ site.docs-snapshot }}/dev/api_concepts.html#accumulators--counters) work very similar like
 Hadoop's counters, but are more powerful.
 
 Flink has a [Distributed Cache](https://github.com/apache/flink/tree/master/flink-core/src/main/java/org/apache/flink/api/common/cache/DistributedCache.java) that is deeply integrated with the APIs. Please refer to the [JavaDocs](https://github.com/apache/flink/tree/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L831) for details on how to use it.
 
-In order to make data sets available on all tasks, we encourage you to use [Broadcast Variables]({{ site.docs-snapshot }}/apis/programming_guide.html#broadcast-variables) instead. They are more efficient and easier to use than the distributed cache.
+In order to make data sets available on all tasks, we encourage you to use [Broadcast Variables]({{ site.docs-snapshot }}/dev/batch/index.html#broadcast-variables) instead. They are more efficient and easier to use than the distributed cache.
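
For illustration, a minimal broadcast-variable sketch in the Scala DataSet API (all names here are made up):

~~~scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
import scala.collection.JavaConverters._

val env = ExecutionEnvironment.getExecutionEnvironment
val whitelist = env.fromElements("a", "b") // the data set to make available on all tasks
val data = env.fromElements("a", "c")

data
  .map(new RichMapFunction[String, String] {
    private var allowed: Set[String] = _

    override def open(parameters: Configuration): Unit = {
      // Materialized once per task from the set registered below.
      allowed = getRuntimeContext.getBroadcastVariable[String]("whitelist").asScala.toSet
    }

    override def map(value: String): String =
      if (allowed(value)) value else s"unknown:$value"
  })
  .withBroadcastSet(whitelist, "whitelist")
  .print()
~~~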