Posted to commits@pig.apache.org by da...@apache.org on 2015/03/02 23:41:37 UTC
svn commit: r1663457 - in /pig/trunk: ./
src/docs/src/documentation/content/xdocs/
Author: daijy
Date: Mon Mar 2 22:41:36 2015
New Revision: 1663457
URL: http://svn.apache.org/r1663457
Log:
PIG-4440: Some code samples in documentation use Unicode left/right single quotes, which cause a parse failure
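The quotes in question are the typographic characters U+2018 and U+2019, which Pig's parser rejects where it expects ASCII `'`. As a hedged illustration (not part of this commit), a small Python scanner like the following could flag doc sources that still contain them:

```python
# Hypothetical helper, not part of the PIG-4440 patch: flag lines containing
# the Unicode left/right single quotes (U+2018/U+2019) that break Pig parsing.
SMART_QUOTES = {"\u2018", "\u2019"}  # the typographic quote characters

def find_smart_quotes(text):
    """Return (line_number, line) pairs whose line contains a smart quote."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(ch in SMART_QUOTES for ch in line):
            hits.append((lineno, line))
    return hits

# Example: the first statement uses smart quotes, the second is clean.
sample = "A = load \u2018input\u2019 as (x, y, z);\nB = foreach A generate x+y;\n"
print(find_smart_quotes(sample))  # only line 1 is flagged
```

Running this over each file under src/docs/src/documentation/content/xdocs/ would have surfaced every occurrence this patch replaces with plain ASCII quotes.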
Modified:
pig/trunk/CHANGES.txt
pig/trunk/src/docs/src/documentation/content/xdocs/basic.xml
pig/trunk/src/docs/src/documentation/content/xdocs/func.xml
pig/trunk/src/docs/src/documentation/content/xdocs/perf.xml
pig/trunk/src/docs/src/documentation/content/xdocs/start.xml
pig/trunk/src/docs/src/documentation/content/xdocs/udf.xml
Modified: pig/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/pig/trunk/CHANGES.txt?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/CHANGES.txt (original)
+++ pig/trunk/CHANGES.txt Mon Mar 2 22:41:36 2015
@@ -50,6 +50,9 @@ PIG-4333: Split BigData tests into multi
BUG FIXES
+PIG-4440: Some code samples in documentation use Unicode left/right single quotes, which cause a
+ parse failure (cnauroth via daijy)
+
PIG-4264: Port TestAvroStorage to tez local mode (daijy)
PIG-4437: Fix tez unit test failure TestJoinSmoke, TestSkewedJoin (daijy)
Modified: pig/trunk/src/docs/src/documentation/content/xdocs/basic.xml
URL: http://svn.apache.org/viewvc/pig/trunk/src/docs/src/documentation/content/xdocs/basic.xml?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/src/docs/src/documentation/content/xdocs/basic.xml (original)
+++ pig/trunk/src/docs/src/documentation/content/xdocs/basic.xml Mon Mar 2 22:41:36 2015
@@ -1629,7 +1629,7 @@ a: Schema for a unknown
<p>Having a deterministic schema is very powerful; however, sometimes it comes at the cost of performance. Consider the following example:</p>
<source>
-A = load ‘input’ as (x, y, z);
+A = load 'input' as (x, y, z);
B = foreach A generate x+y;
</source>
@@ -5120,7 +5120,7 @@ X = FILTER A BY (f1 matches '.*apache.*'
<source>
A = load 'students' as (name:chararray, age:int, gpa:float);
B = foreach A generate (name, age);
-store B into ‘results’;
+store B into 'results';
Input (students):
joe smith 20 3.5
@@ -5138,7 +5138,7 @@ Output (results):
<source>
A = load 'students' as (name:chararray, age:int, gpa:float);
B = foreach A generate {(name, age)}, {name, age};
-store B into ‘results’;
+store B into 'results';
Input (students):
joe smith 20 3.5
@@ -5156,7 +5156,7 @@ Output (results):
<source>
A = load 'students' as (name:chararray, age:int, gpa:float);
B = foreach A generate [name, gpa];
-store B into ‘results’;
+store B into 'results';
Input (students):
joe smith 20 3.5
@@ -7522,7 +7522,7 @@ DUMP A;
<source>
A = LOAD 'myfile.txt' AS (f1:int, f2:int, f3:int);
-A = LOAD 'myfile.txt' USING PigStorage(‘\t’) AS (f1:int, f2:int, f3:int);
+A = LOAD 'myfile.txt' USING PigStorage('\t') AS (f1:int, f2:int, f3:int);
DESCRIBE A;
a: {f1: int,f2: int,f3: int}
Modified: pig/trunk/src/docs/src/documentation/content/xdocs/func.xml
URL: http://svn.apache.org/viewvc/pig/trunk/src/docs/src/documentation/content/xdocs/func.xml?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/src/docs/src/documentation/content/xdocs/func.xml (original)
+++ pig/trunk/src/docs/src/documentation/content/xdocs/func.xml Mon Mar 2 22:41:36 2015
@@ -1369,15 +1369,15 @@ DUMP B;
<p>To work with gzip compressed files, input/output files need to have a .gz extension. Gzipped files cannot be split across multiple maps; this means that the number of maps created is equal to the number of part files in the input location.</p>
<source>
-A = load ‘myinput.gz’;
-store A into ‘myoutput.gz’;
+A = load 'myinput.gz';
+store A into 'myoutput.gz';
</source>
<p>To work with bzip compressed files, the input/output files need to have a .bz or .bz2 extension. Because the compression is block-oriented, bzipped files can be split across multiple maps.</p>
<source>
-A = load ‘myinput.bz’;
-store A into ‘myoutput.bz’;
+A = load 'myinput.bz';
+store A into 'myoutput.bz';
</source>
<p>Note: PigStorage and TextLoader correctly read compressed files as long as they are NOT CONCATENATED FILES generated in this manner: </p>
@@ -1525,7 +1525,7 @@ dump X;
<table>
<tr>
<td>
- <p>JsonLoader( [‘schema’] ) </p>
+ <p>JsonLoader( ['schema'] ) </p>
</td>
</tr>
<tr>
@@ -1648,11 +1648,11 @@ STORE X INTO 'output' USING PigDump();
<p id="pigstorage-options">'options'</p>
</td>
<td>
- <p>A string that contains space-separated options (‘optionA optionB optionC’)</p>
+ <p>A string that contains space-separated options ('optionA optionB optionC')</p>
<p>Currently supported options are:</p>
<ul>
- <li>(‘schema’) - Stores the schema of the relation using a hidden JSON file.</li>
- <li>(‘noschema’) - Ignores a stored schema during the load.</li>
+ <li>('schema') - Stores the schema of the relation using a hidden JSON file.</li>
+ <li>('noschema') - Ignores a stored schema during the load.</li>
<li>('tagsource') - (deprecated, use tagPath instead) Adds a first column that indicates the input file of the record.</li>
<li>('tagPath') - Adds a first column that indicates the input path of the record.</li>
<li>('tagFile') - Adds a first column that indicates the input file name of the record.</li>
@@ -5934,7 +5934,7 @@ In this example, student names (type cha
<source>
A = load 'students' as (name:chararray, age:int, gpa:float);
B = foreach A generate TOMAP(name, gpa);
-store B into ‘results’;
+store B into 'results';
Input (students)
joe smith 20 3.5
Modified: pig/trunk/src/docs/src/documentation/content/xdocs/perf.xml
URL: http://svn.apache.org/viewvc/pig/trunk/src/docs/src/documentation/content/xdocs/perf.xml?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/src/docs/src/documentation/content/xdocs/perf.xml (original)
+++ pig/trunk/src/docs/src/documentation/content/xdocs/perf.xml Mon Mar 2 22:41:36 2015
@@ -978,11 +978,11 @@ B = GROUP A BY t PARALLEL 18;
<p>In this example all the MapReduce jobs that get launched use 20 reducers.</p>
<source>
SET default_parallel 20;
-A = LOAD ‘myfile.txt’ USING PigStorage() AS (t, u, v);
+A = LOAD 'myfile.txt' USING PigStorage() AS (t, u, v);
B = GROUP A BY t;
C = FOREACH B GENERATE group, COUNT(A.t) as mycount;
D = ORDER C BY mycount;
-STORE D INTO ‘mysortedcount’ USING PigStorage();
+STORE D INTO 'mysortedcount' USING PigStorage();
</source>
</section>
Modified: pig/trunk/src/docs/src/documentation/content/xdocs/start.xml
URL: http://svn.apache.org/viewvc/pig/trunk/src/docs/src/documentation/content/xdocs/start.xml?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/src/docs/src/documentation/content/xdocs/start.xml (original)
+++ pig/trunk/src/docs/src/documentation/content/xdocs/start.xml Mon Mar 2 22:41:36 2015
@@ -227,7 +227,7 @@ grunt>
A = load 'passwd' using PigStorage(':'); -- load the passwd file
B = foreach A generate $0 as id; -- extract the user IDs
-store B into ‘id.out’; -- write the results to a file name id.out
+store B into 'id.out'; -- write the results to a file name id.out
</source>
<p><strong>Local Mode</strong></p>
Modified: pig/trunk/src/docs/src/documentation/content/xdocs/udf.xml
URL: http://svn.apache.org/viewvc/pig/trunk/src/docs/src/documentation/content/xdocs/udf.xml?rev=1663457&r1=1663456&r2=1663457&view=diff
==============================================================================
--- pig/trunk/src/docs/src/documentation/content/xdocs/udf.xml (original)
+++ pig/trunk/src/docs/src/documentation/content/xdocs/udf.xml Mon Mar 2 22:41:36 2015
@@ -1760,8 +1760,8 @@ function complex(word){
<p>This Pig script registers the JavaScript UDF (udf.js).</p>
<source>
- register ‘udf.js’ using javascript as myfuncs;
-A = load âdataâ as (a0:chararray, a1:int);
+register 'udf.js' using javascript as myfuncs;
+A = load 'data' as (a0:chararray, a1:int);
B = foreach A generate myfuncs.helloworld(), myfuncs.complex(a0);
... ...
</source>