Posted to commits@flink.apache.org by an...@apache.org on 2015/07/20 12:54:04 UTC

[54/68] [abbrv] flink-web git commit: regenerate site

regenerate site


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/094bb9b3
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/094bb9b3
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/094bb9b3

Branch: refs/heads/master
Commit: 094bb9b3e64275d19849b800edf5df3f9af7f717
Parents: bf8c853
Author: Maximilian Michels <mx...@apache.org>
Authored: Wed Jul 1 11:59:03 2015 +0200
Committer: Maximilian Michels <mx...@apache.org>
Committed: Wed Jul 1 11:59:03 2015 +0200

----------------------------------------------------------------------
 content/faq.html | 51 ++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/094bb9b3/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index 1eb3f37..b1e7c3d 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -5,7 +5,7 @@
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <meta name="viewport" content="width=device-width, initial-scale=1">
     <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
-    <title>Apache Flink: F.A.Q.</title>
+    <title>Apache Flink: Frequently Asked Questions (FAQ)</title>
     <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
     <link rel="icon" href="/favicon.ico" type="image/x-icon">
 
@@ -140,9 +140,28 @@
 <div class="row">
   <div class="col-sm-8 col-sm-offset-2">
     <div class="row">
-      <div class="col-sm-12"><h1>F.A.Q.</h1></div>
+      <div class="col-sm-12"><h1>Frequently Asked Questions (FAQ)</h1></div>
     </div>
 
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
 <p>The following questions are frequently asked with regard to the Flink project <strong>in general</strong>. If you have further questions, make sure to consult the <a href="">documentation</a> or <a href="">ask the community</a>.</p>
 
 <div class="page-toc">
@@ -164,6 +183,7 @@
       <li><a href="#in-scala-api-i-get-an-error-about-implicit-values-and-evidence-parameters">In Scala API, I get an error about implicit values and evidence parameters</a></li>
       <li><a href="#i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this">I get an error message saying that not enough buffers are available. How do I fix this?</a></li>
       <li><a href="#my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My job fails early with a java.io.EOFException. What could be the cause?</a></li>
+      <li><a href="#my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do">My job fails with various exceptions from the HDFS/Hadoop code. What can I do?</a></li>
       <li><a href="#in-eclipse-i-get-compilation-errors-in-the-scala-projects">In Eclipse, I get compilation errors in the Scala projects</a></li>
       <li><a href="#my-program-does-not-compute-the-correct-result-why-are-my-custom-key-types">My program does not compute the correct result. Why are my custom key types</a></li>
       <li><a href="#i-get-a-javalanginstantiationexception-for-my-data-type-what-is-wrong">I get a java.lang.InstantiationException for my data type, what is wrong?</a></li>
@@ -273,7 +293,7 @@ parallelism has to be 1 and set it accordingly.</p>
 <p>The parallelism can be set in numerous ways to ensure a fine-grained control
 over the execution of a Flink program. See
 the <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options">Configuration guide</a> for detailed instructions on how to
-set the parallelism. Also check out <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing 
+set the parallelism. Also check out <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing
 how the processing slots and parallelism are related to each other.</p>
 
 <h2 id="errors">Errors</h2>
@@ -343,6 +363,31 @@ breaks.</p>
 the https://github.com/apache/flink/tree/master/README.md
 for details on how to set up Flink for different Hadoop and HDFS versions.</p>
 
+<h3 id="my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do">My job fails with various exceptions from the HDFS/Hadoop code. What can I do?</h3>
+
+<p>Flink ships with Hadoop 2.2 binaries by default. These binaries are used
+to connect to HDFS or YARN.
+There appear to be bugs in this HDFS client that cause exceptions while writing to HDFS
+(in particular under high load).
+Among the reported exceptions are the following:</p>
+
+<ul>
+  <li><code>HDFS client trying to connect to the standby Namenode "org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby"</code></li>
+  <li>
+    <p><code>java.io.IOException: Bad response ERROR for block BP-1335380477-172.22.5.37-1424696786673:blk_1107843111_34301064 from datanode 172.22.5.81:50010
+at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:732)</code></p>
+  </li>
+  <li><code>Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
+      at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:478)
+      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6039)
+      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6002)</code></li>
+</ul>
+
+<p>If you experience any of these errors, we recommend using a Flink build whose Hadoop version matches
+your local HDFS version.
+You can also build Flink manually against the exact Hadoop version (for example,
+when using a Hadoop distribution with a custom patch level).</p>
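A minimal sketch of such a manual build, assuming a Maven-based Flink source checkout; the version strings below are placeholders to be replaced with your cluster's actual Hadoop release:

```shell
# Build Flink against a specific vanilla Hadoop version
# (version number here is an example, not a recommendation):
mvn clean install -DskipTests -Dhadoop.version=2.6.0

# For a vendor distribution with a custom patch level,
# additionally activate the vendor repositories profile
# (example version string for illustration only):
mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.5.0-cdh5.3.3
```

The resulting build under `flink-dist` then bundles HDFS/YARN client classes matching the target cluster.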
+
 <h3 id="in-eclipse-i-get-compilation-errors-in-the-scala-projects">In Eclipse, I get compilation errors in the Scala projects</h3>
 
 <p>Flink uses a new feature of the Scala compiler (called “quasiquotes”) that has not yet been properly