Posted to commits@phoenix.apache.org by mu...@apache.org on 2017/01/18 20:08:58 UTC

svn commit: r1779374 - in /phoenix/site: publish/pherf.html source/src/site/markdown/pherf.md

Author: mujtaba
Date: Wed Jan 18 20:08:58 2017
New Revision: 1779374

URL: http://svn.apache.org/viewvc?rev=1779374&view=rev
Log:
Fix broken link and update startup text in pherf.md

Modified:
    phoenix/site/publish/pherf.html
    phoenix/site/source/src/site/markdown/pherf.md

Modified: phoenix/site/publish/pherf.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/pherf.html?rev=1779374&r1=1779373&r2=1779374&view=diff
==============================================================================
--- phoenix/site/publish/pherf.html (original)
+++ phoenix/site/publish/pherf.html Wed Jan 18 20:08:58 2017
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at 2016-12-08
+ Generated by Apache Maven Doxia at 2017-01-18
  Rendered using Reflow Maven Skin 1.1.0 (http://andriusvelykis.github.io/reflow-maven-skin)
 -->
 <html  xml:lang="en" lang="en">
@@ -159,43 +159,29 @@
   <h2 id="Overview">Overview</h2>
  </div> 
 <p>Pherf is a standalone tool that can perform performance and functional testing through Phoenix. Pherf can be used both to generate highly customized data sets and to measure performance of SQL against that data.</p> 
-</div> 
-<div class="section"> 
- <h2 id="Build">Build</h2> 
- <p>Building Pherf is done through Phoenix’s normal build Maven process. Pherf can be built using two profiles.</p> 
- <ul> 
-  <li>Cluster ( <b>default</b> ) - This profile builds Pherf such that it can run along side an existing cluster. The dependencies are pulled from the HBase classpath.</li> 
-  <li>Standalone - This profile builds all of Pherf’s dependencies into a single standalone jar. The deps will be pulled from the versions specified in Phoenix’s pom.</li> 
- </ul> 
  <div class="section"> 
   <h3 id="Build_all_of_Phoenix._This_includes_Pherfs_default_profile">Build all of Phoenix. This includes Pherf’s default profile</h3> 
   <p><tt>mvn clean package -DskipTests</tt></p> 
  </div> 
- <div class="section"> 
-  <h3 id="Build_Phoenix_with_Pherfs_standalone_profile">Build Phoenix with Pherf’s standalone profile</h3> 
-  <p><tt>mvn clean package -P standalone -DskipTests</tt></p> 
- </div> 
 </div> 
 <div class="section"> 
- <h2 id="Installing">Installing</h2> 
- <p>When Pherf is built using the Maven commands specified above, it will produce a <a class="externalLink" href="https://github.com/apache/phoenix/tree/master/phoenix-pherf">zip</a> file in the module’s target directory.</p> 
- <ol style="list-style-type: decimal"> 
-  <li>Simply unpack this zip into the desired location.</li> 
-  <li>Edit the env.sh to include the required property values.</li> 
-  <li><tt>./pherf.sh -h</tt></li> 
-  <li>To test on a real cluster: <tt>./pherf.sh -drop all -l -q -z localhost -schemaFile .*user_defined_schema.sql -scenarioFile .*user_defined_scenario.xml</tt></li> 
-  <li>That’s it.</li> 
- </ol> 
+ <h2 id="Running">Running</h2> 
+ <ul> 
+  <li>Edit config/env.sh to set the required property values.</li> 
+  <li><tt>bin/pherf-standalone.py -h</tt></li> 
+  <li>To use the libraries included with an HBase deployment on a cluster: <tt>bin/pherf-cluster.py -h</tt></li> 
+  <li>Example: <tt>bin/pherf-cluster.py -drop all -l -q -z [zookeeper] -schemaFile .*user_defined_schema.sql -scenarioFile .*user_defined_scenario.xml</tt>. The HBASE_CONF_DIR and HBASE_DIR environment variables must be set when running against a cluster deployment.</li> 
+ </ul> 
 </div> 
 <div class="section"> 
  <h2 id="Example_run_commands.">Example run commands.</h2> 
  <div class="section"> 
   <h3 id="List_all_scenario_files_available_to_run.">List all scenario files available to run.</h3> 
-  <p>$./pherf.sh -listFiles</p> 
+  <p>$./pherf-standalone.py -listFiles</p> 
  </div> 
  <div class="section"> 
   <h3 id="Drop_all_existing_tables_load_and_query_data_specified_in_all_scenario_files.">Drop all existing tables, load and query data specified in all scenario files.</h3> 
-  <p>$./pherf.sh -drop all -l -q -z localhost </p> 
+  <p>$./pherf-standalone.py -drop all -l -q -z localhost </p> 
  </div> 
 </div> 
 <div class="section"> 
@@ -219,7 +205,7 @@
 </div> 
 <div class="section"> 
  <h2 id="Adding_Rules_for_Data_Creation">Adding Rules for Data Creation</h2> 
- <p>Review <a href="/src/test/resources/scenario/test_scenario.xml">test_scenario.xml</a> for syntax examples.<br /></p> 
+ <p>Review <a class="externalLink" href="https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blob;f=phoenix-pherf/src/test/resources/scenario/test_scenario.xml">test_scenario.xml</a> for syntax examples.<br /></p> 
  <ul> 
  <li>Rules are defined as <tt>&lt;columns /&gt;</tt> and are applied in the order they appear in the file.</li> 
  <li>Rules of the same type override the values of a prior rule of the same type. If <tt>&lt;userDefined&gt;true&lt;/userDefined&gt;</tt> is set, the rule will only override when its type and name match the column name in Phoenix.</li> 
@@ -520,7 +506,7 @@
 		<div class="row">
 			<div class="span12">
 				<p class="pull-right"><a href="#">Back to top</a></p>
-				<p class="copyright">Copyright &copy;2016 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.</p>
+				<p class="copyright">Copyright &copy;2017 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.</p>
 			</div>
 		</div>
 	</div>
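The updated "Running" section above notes that HBASE_CONF_DIR and HBASE_DIR must be set before using the cluster wrapper. A minimal session sketch, assuming HBase is installed under /usr/lib/hbase with its configuration in /etc/hbase/conf (both paths are assumptions; adjust for your deployment):

```shell
# Assumed paths -- adjust for your HBase deployment.
export HBASE_DIR=/usr/lib/hbase
export HBASE_CONF_DIR=/etc/hbase/conf

# Drop all tables, then load (-l) and query (-q) the matching scenarios,
# pointing -z at the cluster's ZooKeeper quorum.
bin/pherf-cluster.py -drop all -l -q -z localhost \
    -schemaFile '.*user_defined_schema.sql' \
    -scenarioFile '.*user_defined_scenario.xml'
```

The schemaFile and scenarioFile arguments are regex patterns, so quoting them keeps the shell from globbing.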

Modified: phoenix/site/source/src/site/markdown/pherf.md
URL: http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/pherf.md?rev=1779374&r1=1779373&r2=1779374&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/pherf.md (original)
+++ phoenix/site/source/src/site/markdown/pherf.md Wed Jan 18 20:08:58 2017
@@ -5,35 +5,24 @@
 
 Pherf is a standalone tool that can perform performance and functional testing through Phoenix. Pherf can be used both to generate highly customized data sets and to measure performance of SQL against that data.
 
-## Build
-
-Building Pherf is done through Phoenix's normal build Maven process. Pherf can be built using two profiles.
-
-* Cluster ( **default** ) - This profile builds Pherf such that it can run along side an existing cluster. The dependencies are pulled from the HBase classpath.
-* Standalone              - This profile builds all of Pherf's dependencies into a single standalone jar. The deps will be pulled from the versions specified in Phoenix's pom.   
-
 ### Build all of Phoenix. This includes Pherf's default profile
 `mvn clean package -DskipTests`
 
-### Build Phoenix with Pherf's standalone profile
-`mvn clean package -P standalone -DskipTests`
-
-## Installing
-When Pherf is built using the Maven commands specified above, it will produce a [zip](https://github.com/apache/phoenix/tree/master/phoenix-pherf) file in the module's target directory.
+## Running
 
-1. Simply unpack this zip into the desired location.
-2. Edit the env.sh to include the required property values.
-3. `./pherf.sh -h`
-4. To test on a real cluster: `./pherf.sh -drop all -l -q -z localhost -schemaFile .*user_defined_schema.sql -scenarioFile .*user_defined_scenario.xml`
-5. That's it.
+* Edit config/env.sh to set the required property values.
+* `bin/pherf-standalone.py -h`
+* To use the libraries included with an HBase deployment on a cluster: `bin/pherf-cluster.py -h`
+* Example: `bin/pherf-cluster.py -drop all -l -q -z [zookeeper] -schemaFile .*user_defined_schema.sql -scenarioFile .*user_defined_scenario.xml`.
+The HBASE_CONF_DIR and HBASE_DIR environment variables must be set when running against a cluster deployment.
 
 ## Example run commands.
 
 ### List all scenario files available to run.
-$./pherf.sh -listFiles
+$./pherf-standalone.py -listFiles
 
 ### Drop all existing tables, load and query data specified in all scenario files.
-$./pherf.sh -drop all -l -q -z localhost 
+$./pherf-standalone.py -drop all -l -q -z localhost 
 
 ## Pherf arguments:
 
@@ -53,7 +42,7 @@ $./pherf.sh -drop all -l -q -z localhost
 - -rowCountOverride [number of rows] _Specify number of rows to be upserted rather than using row count specified in schema_ </ br>
 
 ## Adding Rules for Data Creation
-Review [test_scenario.xml](/src/test/resources/scenario/test_scenario.xml) 
+Review [test_scenario.xml](https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blob;f=phoenix-pherf/src/test/resources/scenario/test_scenario.xml) 
 for syntax examples.<br />
 
 * Rules are defined as `<columns />` and are applied in the order they appear in the file.
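The rule-override semantics described above (later rules of the same type replace earlier ones; a userDefined rule matches on type plus column name) can be sketched as a toy bash model. This is hypothetical illustration only, not Pherf's actual implementation:

```shell
# Toy model (bash; hypothetical, not Pherf's code) of the override
# semantics above: a later rule of the same type replaces an earlier
# one, while a userDefined rule is keyed on type plus column name.
declare -A active
apply_rule() {          # $1 = rule key, $2 = rule value
    active["$1"]="$2"
}
apply_rule VARCHAR a
apply_rule VARCHAR b            # same type: overrides "a"
apply_rule VARCHAR:FIELD c      # userDefined: keyed by type+name
echo "${active[VARCHAR]}"       # prints "b"
echo "${active[VARCHAR:FIELD]}" # prints "c"
```

The associative array stands in for the "last rule of a given key wins" behavior; in Pherf the keys come from the `<columns />` definitions in the scenario XML.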