Posted to commits@spark.apache.org by gu...@apache.org on 2018/07/09 16:47:49 UTC
[6/6] spark-website git commit: Fix signature description broken in PySpark API documentation in 2.2.2
Fix signature description broken in PySpark API documentation in 2.2.2
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/7b3e459e
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/7b3e459e
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/7b3e459e
Branch: refs/heads/asf-site
Commit: 7b3e459e29e88e3dd626ea7a558df3ee7bdfff5a
Parents: 2b5ba2f
Author: hyukjinkwon <gu...@apache.org>
Authored: Mon Jul 9 23:18:32 2018 +0800
Committer: hyukjinkwon <gu...@apache.org>
Committed: Tue Jul 10 00:45:27 2018 +0800
----------------------------------------------------------------------
site/docs/2.2.2/api/python/pyspark.html | 22 +-
site/docs/2.2.2/api/python/pyspark.ml.html | 156 +++++------
site/docs/2.2.2/api/python/pyspark.mllib.html | 28 +-
site/docs/2.2.2/api/python/pyspark.sql.html | 264 +++++++++----------
.../2.2.2/api/python/pyspark.streaming.html | 3 +-
site/docs/2.2.2/api/python/searchindex.js | 2 +-
6 files changed, 238 insertions(+), 237 deletions(-)
----------------------------------------------------------------------
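Most of the hunks below swap the Python 2 style u'...' unicode repr in doctest output for the plain '...' form that Python 3 prints, since the 2.2.2 docs were regenerated under Python 3. A minimal sketch of the repr difference (the value here is illustrative, not taken from the docs):

```python
# Under Python 3 every string is unicode, so repr() omits the u''
# prefix that Python 2 attached to unicode literals. Doctest output
# captured under Python 3 therefore reads 'local-...' rather than
# the old u'local-...'.
value = "Hello world!"
print(repr(value))  # prints 'Hello world!' (no u prefix under Python 3)
```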
http://git-wip-us.apache.org/repos/asf/spark-website/blob/7b3e459e/site/docs/2.2.2/api/python/pyspark.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.2/api/python/pyspark.html b/site/docs/2.2.2/api/python/pyspark.html
index b82ee14..85d8922 100644
--- a/site/docs/2.2.2/api/python/pyspark.html
+++ b/site/docs/2.2.2/api/python/pyspark.html
@@ -264,7 +264,7 @@ Its format depends on the scheduler implementation.</p>
<li>in case of YARN something like ‘application_1433865536131_34483’</li>
</ul>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">sc</span><span class="o">.</span><span class="n">applicationId</span>
-<span class="go">u'local-...'</span>
+<span class="go">'local-...'</span>
</pre></div>
</div>
</dd></dl>
@@ -743,7 +743,7 @@ Spark 1.2)</p>
<span class="gp">... </span> <span class="n">_</span> <span class="o">=</span> <span class="n">testFile</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="s2">"Hello world!"</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="n">path</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">textFile</span><span class="o">.</span><span class="n">collect</span><span class="p">()</span>
-<span class="go">[u'Hello world!']</span>
+<span class="go">['Hello world!']</span>
</pre></div>
</div>
</dd></dl>
@@ -766,10 +766,10 @@ serializer:</p>
<span class="gp">... </span> <span class="n">_</span> <span class="o">=</span> <span class="n">testFile</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="s2">"Hello"</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="n">path</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">textFile</span><span class="o">.</span><span class="n">collect</span><span class="p">()</span>
-<span class="go">[u'Hello']</span>
+<span class="go">['Hello']</span>
<span class="gp">>>> </span><span class="n">parallelized</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">([</span><span class="s2">"World!"</span><span class="p">])</span>
<span class="gp">>>> </span><span class="nb">sorted</span><span class="p">(</span><span class="n">sc</span><span class="o">.</span><span class="n">union</span><span class="p">([</span><span class="n">textFile</span><span class="p">,</span> <span class="n">parallelized</span><span class="p">])</span><span class="o">.</span><span class="n">collect</span><span class="p">())</span>
-<span class="go">[u'Hello', 'World!']</span>
+<span class="go">['Hello', 'World!']</span>
</pre></div>
</div>
</dd></dl>
@@ -819,7 +819,7 @@ fully in memory.</p>
<span class="gp">... </span> <span class="n">_</span> <span class="o">=</span> <span class="n">file2</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="s2">"2"</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">textFiles</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">wholeTextFiles</span><span class="p">(</span><span class="n">dirPath</span><span class="p">)</span>
<span class="gp">>>> </span><span class="nb">sorted</span><span class="p">(</span><span class="n">textFiles</span><span class="o">.</span><span class="n">collect</span><span class="p">())</span>
-<span class="go">[(u'.../1.txt', u'1'), (u'.../2.txt', u'2')]</span>
+<span class="go">[('.../1.txt', '1'), ('.../2.txt', '2')]</span>
</pre></div>
</div>
</dd></dl>
@@ -1684,7 +1684,7 @@ If no storage level is specified defaults to (<code class="xref py py-class docu
<code class="descname">pipe</code><span class="sig-paren">(</span><em>command</em>, <em>env=None</em>, <em>checkCode=False</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/rdd.html#RDD.pipe"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.RDD.pipe" title="Permalink to this definition">¶</a></dt>
<dd><p>Return an RDD created by piping elements to a forked external process.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">([</span><span class="s1">'1'</span><span class="p">,</span> <span class="s1">'2'</span><span class="p">,</span> <span class="s1">''</span><span class="p">,</span> <span class="s1">'3'</span><span class="p">])</span><span class="o">.</span><span class="n">pipe</span><span class="p">(</span><span class="s1">'cat'</span><span class="p">)</span><span class="o">.</span><span class="n">collect</span><span class="p">()</span>
-<span class="go">[u'1', u'2', u'', u'3']</span>
+<span class="go">['1', '2', '', '3']</span>
</pre></div>
</div>
<table class="docutils field-list" frame="void" rules="none">
@@ -1799,7 +1799,7 @@ using <cite>coalesce</cite>, which can avoid performing a shuffle.</p>
<dl class="method">
<dt id="pyspark.RDD.repartitionAndSortWithinPartitions">
-<code class="descname">repartitionAndSortWithinPartitions</code><span class="sig-paren">(</span><em>numPartitions=None</em>, <em>partitionFunc=<function portable_hash></em>, <em>ascending=True</em>, <em>keyfunc=<function <lambda>></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/rdd.html#RDD.repartitionAndSortWithinPartitions"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.RDD.repartitionAndSortWithinPartitions" title="Permalink to this definition">¶</a></dt>
+<code class="descname">repartitionAndSortWithinPartitions</code><span class="sig-paren">(</span><em>numPartitions=None</em>, <em>partitionFunc=<function portable_hash></em>, <em>ascending=True</em>, <em>keyfunc=<function RDD.<lambda>></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/rdd.html#RDD.repartitionAndSortWithinPartitions"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.RDD.repartitionAndSortWithinPartitions" title="Permalink to this definition">¶</a></dt>
<dd><p>Repartition the RDD according to the given partitioner and, within each resulting partition,
sort records by their keys.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">&gt;&gt;&gt; </span><span class="n">rdd</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">([(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">5</span><span class="p">),</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">8</span><span class="p">),</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">6</span><span class="p">),</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">8</span><span class="p">),</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">8</span><span class="p">),</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">)])</span>
@@ -2089,7 +2089,7 @@ RDD’s key and value types. The mechanism is as follows:</p>
<span class="gp">>>> </span><span class="kn">from</span> <span class="nn">fileinput</span> <span class="k">import</span> <span class="nb">input</span><span class="p">,</span> <span class="n">hook_compressed</span>
<span class="gp">>>> </span><span class="n">result</span> <span class="o">=</span> <span class="nb">sorted</span><span class="p">(</span><span class="nb">input</span><span class="p">(</span><span class="n">glob</span><span class="p">(</span><span class="n">tempFile3</span><span class="o">.</span><span class="n">name</span> <span class="o">+</span> <span class="s2">"/part*.gz"</span><span class="p">),</span> <span class="n">openhook</span><span class="o">=</span><span class="n">hook_compressed</span><span class="p">))</span>
<span class="gp">>>> </span><span class="sa">b</span><span class="s1">''</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">result</span><span class="p">)</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="s1">'utf-8'</span><span class="p">)</span>
-<span class="go">u'bar\nfoo\n'</span>
+<span class="go">'bar\nfoo\n'</span>
</pre></div>
</div>
</dd></dl>
@@ -2100,7 +2100,7 @@ RDD’s key and value types. The mechanism is as follows:</p>
<dd><p>Assign a name to this RDD.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">rdd1</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">])</span>
<span class="gp">>>> </span><span class="n">rdd1</span><span class="o">.</span><span class="n">setName</span><span class="p">(</span><span class="s1">'RDD1'</span><span class="p">)</span><span class="o">.</span><span class="n">name</span><span class="p">()</span>
-<span class="go">u'RDD1'</span>
+<span class="go">'RDD1'</span>
</pre></div>
</div>
</dd></dl>
@@ -2120,7 +2120,7 @@ RDD’s key and value types. The mechanism is as follows:</p>
<dl class="method">
<dt id="pyspark.RDD.sortByKey">
-<code class="descname">sortByKey</code><span class="sig-paren">(</span><em>ascending=True</em>, <em>numPartitions=None</em>, <em>keyfunc=<function <lambda>></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/rdd.html#RDD.sortByKey"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.RDD.sortByKey" title="Permalink to this definition">¶</a></dt>
+<code class="descname">sortByKey</code><span class="sig-paren">(</span><em>ascending=True</em>, <em>numPartitions=None</em>, <em>keyfunc=<function RDD.<lambda>></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/rdd.html#RDD.sortByKey"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.RDD.sortByKey" title="Permalink to this definition">¶</a></dt>
<dd><p>Sorts this RDD, which is assumed to consist of (key, value) pairs.
# noqa</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">tmp</span> <span class="o">=</span> <span class="p">[(</span><span class="s1">'a'</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="p">(</span><span class="s1">'b'</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="p">(</span><span class="s1">'1'</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="p">(</span><span class="s1">'d'</span><span class="p">,</span> <span class="mi">4</span><span class="p">),</span> <span class="p">(</span><span class="s1">'2'</span><span class="p">,</span> <span class="mi">5</span><span class="p">)]</span>
@@ -2664,7 +2664,7 @@ When batching is used, this will be called with an array of objects.</p>
<dl class="method">
<dt id="pyspark.PickleSerializer.loads">
-<code class="descname">loads</code><span class="sig-paren">(</span><em>obj</em>, <em>encoding=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/serializers.html#PickleSerializer.loads"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.PickleSerializer.loads" title="Permalink to this definition">¶</a></dt>
+<code class="descname">loads</code><span class="sig-paren">(</span><em>obj</em>, <em>encoding='bytes'</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/pyspark/serializers.html#PickleSerializer.loads"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#pyspark.PickleSerializer.loads" title="Permalink to this definition">¶</a></dt>
<dd><p>Deserialize an object from a byte array.</p>
</dd></dl>
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org