Posted to commits@spark.apache.org by gu...@apache.org on 2024/01/23 00:12:47 UTC
(spark) branch master updated: [MINOR][DOCS] Miscellaneous link and anchor fixes
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 0a68e6ef1c54 [MINOR][DOCS] Miscellaneous link and anchor fixes
0a68e6ef1c54 is described below
commit 0a68e6ef1c54f702a352ee6665f9a1f52accc419
Author: Nicholas Chammas <ni...@gmail.com>
AuthorDate: Tue Jan 23 09:12:34 2024 +0900
[MINOR][DOCS] Miscellaneous link and anchor fixes
### What changes were proposed in this pull request?
Fix a handful of links and link anchors.
In Safari at least, link anchors are case-sensitive.
### Why are the changes needed?
Minor documentation cleanup.
### Does this PR introduce _any_ user-facing change?
Yes, minor documentation tweaks.
### How was this patch tested?
No testing beyond building the docs successfully.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #44824 from nchammas/minor-link-fixes.
Authored-by: Nicholas Chammas <ni...@gmail.com>
Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
docs/cloud-integration.md        | 3 +--
docs/ml-guide.md                 | 3 +--
docs/mllib-evaluation-metrics.md | 2 +-
docs/rdd-programming-guide.md    | 4 ++--
4 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/docs/cloud-integration.md b/docs/cloud-integration.md
index 52a7552fe8d4..7afbfef0b393 100644
--- a/docs/cloud-integration.md
+++ b/docs/cloud-integration.md
@@ -330,7 +330,7 @@ It is not available on Hadoop 3.3.4 or earlier.
IBM provide the Stocator output committer for IBM Cloud Object Storage and OpenStack Swift.
Source, documentation and releases can be found at
-[https://github.com/CODAIT/stocator](Stocator - Storage Connector for Apache Spark).
+[Stocator - Storage Connector for Apache Spark](https://github.com/CODAIT/stocator).
## Cloud Committers and `INSERT OVERWRITE TABLE`
@@ -396,4 +396,3 @@ The Cloud Committer problem and hive-compatible solutions
* [The Manifest Committer for Azure and Google Cloud Storage](https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/manifest_committer.md)
* [A Zero-rename committer](https://github.com/steveloughran/zero-rename-committer/releases/).
* [Stocator: A High Performance Object Store Connector for Spark](http://arxiv.org/abs/1709.01812)
-
diff --git a/docs/ml-guide.md b/docs/ml-guide.md
index 572f61ef9735..132805e7bcd6 100644
--- a/docs/ml-guide.md
+++ b/docs/ml-guide.md
@@ -72,7 +72,7 @@ WARNING: Failed to load implementation from:dev.ludovic.netlib.blas.JNIBLAS
To use MLlib in Python, you will need [NumPy](http://www.numpy.org) version 1.4 or newer.
[^1]: To learn more about the benefits and background of system optimised natives, you may wish to
- watch Sam Halliday's ScalaX talk on [High Performance Linear Algebra in Scala](http://fommil.github.io/scalax14/#/).
+ watch Sam Halliday's ScalaX talk on [High Performance Linear Algebra in Scala](http://fommil.github.io/scalax14/).
# Highlights in 3.0
@@ -103,4 +103,3 @@ release of Spark:
# Migration Guide
The migration guide is now archived [on this page](ml-migration-guide.html).
-
diff --git a/docs/mllib-evaluation-metrics.md b/docs/mllib-evaluation-metrics.md
index 30acc3dc634b..aa587b26dca6 100644
--- a/docs/mllib-evaluation-metrics.md
+++ b/docs/mllib-evaluation-metrics.md
@@ -460,7 +460,7 @@ $$rel_D(r) = \begin{cases}1 & \text{if $r \in D$}, \\ 0 & \text{otherwise}.\end{
$p(k)=\frac{1}{M} \sum_{i=0}^{M-1} {\frac{1}{k} \sum_{j=0}^{\text{min}(Q_i, k) - 1} rel_{D_i}(R_i(j))}$
</td>
<td>
- <a href="https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_K">Precision at k</a> is a measure of
+ <a href="https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k">Precision at k</a> is a measure of
how many of the first k recommended documents are in the set of true relevant documents averaged across all
users. In this metric, the order of the recommendations is not taken into account.
</td>
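The precision-at-k formula quoted in the hunk above can be sketched in plain Python. This is only an illustrative sketch of the formula, not MLlib's actual `RankingMetrics` implementation; the function names and example data are made up for illustration:

```python
def precision_at_k(recommended, relevant, k):
    # Fraction of the top-k recommendations that are in the set of
    # truly relevant documents. Note the division is by k regardless
    # of how many items were actually recommended, matching the 1/k
    # factor in the formula; recommendation order within the top k
    # is not taken into account.
    top_k = recommended[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / k

def mean_precision_at_k(rec_lists, rel_sets, k):
    # The 1/M factor: average precision-at-k across all M users.
    pairs = list(zip(rec_lists, rel_sets))
    return sum(precision_at_k(r, s, k) for r, s in pairs) / len(pairs)

# One user: two of the top-2 recommendations checked against {"a", "c"}.
print(precision_at_k(["a", "b", "c"], {"a", "c"}, 2))  # 0.5
```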
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index b92b3da09c5c..2e0f9d3bd6ef 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -776,7 +776,7 @@ for other languages.
</div>
-### Understanding closures <a name="ClosuresLink"></a>
+### Understanding closures
One of the harder things about Spark is understanding the scope and life cycle of variables and methods when executing code across a cluster. RDD operations that modify variables outside of their scope can be a frequent source of confusion. In the example below we'll look at code that uses `foreach()` to increment a counter, but similar issues can occur for other operations as well.
#### Example
@@ -1120,7 +1120,7 @@ for details.
<tr>
<td> <b>foreach</b>(<i>func</i>) </td>
<td> Run a function <i>func</i> on each element of the dataset. This is usually done for side effects such as updating an <a href="#accumulators">Accumulator</a> or interacting with external storage systems.
- <br /><b>Note</b>: modifying variables other than Accumulators outside of the <code>foreach()</code> may result in undefined behavior. See <a href="#understanding-closures-a-nameclosureslinka">Understanding closures </a> for more details.</td>
+ <br /><b>Note</b>: modifying variables other than Accumulators outside of the <code>foreach()</code> may result in undefined behavior. See <a href="#understanding-closures">Understanding closures</a> for more details.</td>
</tr>
</table>
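The anchor fix in the rdd-programming-guide hunk above comes from how Markdown processors auto-generate heading IDs. The sketch below is a simplified approximation of that behavior (not kramdown's exact algorithm): lowercase the heading, strip punctuation, and replace spaces with hyphens. Stray inline HTML in a heading leaves residue in the generated ID, which is why the old link target was so awkward:

```python
import re

def heading_to_anchor(heading: str) -> str:
    # Simplified sketch of auto-ID generation: lowercase, drop
    # characters that are not word chars, hyphens, or spaces, then
    # turn spaces into hyphens.
    anchor = heading.strip().lower()
    anchor = re.sub(r"[^\w\- ]", "", anchor)
    return anchor.replace(" ", "-")

# A plain heading yields a clean, predictable anchor:
print(heading_to_anchor("Understanding closures"))
# understanding-closures

# With the old inline <a name=...> tag in the heading, the tag's
# leftover characters leak into the generated ID:
print(heading_to_anchor('Understanding closures <a name="ClosuresLink"></a>'))
# understanding-closures-a-nameclosureslinka
```

Dropping the redundant `<a name>` tag lets the cross-reference use the clean auto-generated anchor instead.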
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org