Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2017/03/29 18:15:41 UTC

[jira] [Commented] (SPARK-18692) Test Java 8 unidoc build on Jenkins master builder

    [ https://issues.apache.org/jira/browse/SPARK-18692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15947627#comment-15947627 ] 

Josh Rosen commented on SPARK-18692:
------------------------------------

We can't get the full Jekyll doc build running until we have Jekyll installed on all workers, but the extra code to just test unidoc isn't that much:

{code}
diff --git a/dev/run-tests.py b/dev/run-tests.py
index 04035b3..46d6b8a 100755
--- a/dev/run-tests.py
+++ b/dev/run-tests.py
@@ -344,6 +344,19 @@ def build_spark_sbt(hadoop_version):
     exec_sbt(profiles_and_goals)


+def build_spark_unidoc_sbt(hadoop_version):
+    set_title_and_block("Building Unidoc API Documentation", "BLOCK_DOCUMENTATION")
+    # Enable all of the profiles for the build:
+    build_profiles = get_hadoop_profiles(hadoop_version) + modules.root.build_profile_flags
+    sbt_goals = ["unidoc"]
+    profiles_and_goals = build_profiles + sbt_goals
+
+    print("[info] Building Spark unidoc (w/Hive 1.2.1) using SBT with these arguments: ",
+          " ".join(profiles_and_goals))
+
+    exec_sbt(profiles_and_goals)
+
+
 def build_spark_assembly_sbt(hadoop_version):
     # Enable all of the profiles for the build:
     build_profiles = get_hadoop_profiles(hadoop_version) + modules.root.build_profile_flags
@@ -576,6 +589,8 @@ def main():
         # Since we did not build assembly/package before running dev/mima, we need to
         # do it here because the tests still rely on it; see SPARK-13294 for details.
         build_spark_assembly_sbt(hadoop_version)
+        # Make sure that Java and Scala API documentation can be generated
+        build_spark_unidoc_sbt(hadoop_version)

     # run the test suites
     run_scala_tests(build_tool, hadoop_version, test_modules, excluded_tags)
{code}

On my laptop this added about 1.5 minutes of extra run time. One problem I noticed is that unidoc appears to process test sources as well; if we can exclude those from being processed in the first place, that might speed things up significantly.
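For excluding test sources, one possible sketch is to filter the source list that sbt-unidoc feeds to the doc tools. This assumes the sbt-unidoc plugin's {{ScalaUnidoc}}/{{JavaUnidoc}} configs and its {{unidocAllSources}} task key; the {{"/test/"}} path check and the placement in project/SparkBuild.scala are illustrative, not a verified patch:

{code}
// Sketch for project/SparkBuild.scala (sbt-unidoc settings): drop any source
// file whose path contains "/test/" before unidoc runs. unidocAllSources is
// a Seq[Seq[File]] (one Seq per aggregated project), hence the nested map.
unidocAllSources in (ScalaUnidoc, unidoc) := {
  (unidocAllSources in (ScalaUnidoc, unidoc)).value
    .map(_.filterNot(_.getCanonicalPath.contains("/test/")))
},
unidocAllSources in (JavaUnidoc, unidoc) := {
  (unidocAllSources in (JavaUnidoc, unidoc)).value
    .map(_.filterNot(_.getCanonicalPath.contains("/test/")))
}
{code}

Filtering at the {{unidocAllSources}} level means the test files are never handed to scaladoc/javadoc at all, rather than being generated and then discarded.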

It turns out that Java 8's strict Javadoc validation (doclint) can also be disabled, so we could consider that as well.
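If we decide the strict validation isn't worth it, the standard javadoc switch is {{-Xdoclint:none}}. A hedged sketch of wiring that through sbt (the exact key scoping for the unidoc task is an assumption on my part):

{code}
// Sketch: pass -Xdoclint:none to javadoc so Java 8's strict doc validation
// (doclint) does not fail the unidoc build. Scoping the option to the
// JavaUnidoc unidoc task is an assumption about how sbt-unidoc reads it.
javacOptions in (JavaUnidoc, unidoc) ++= Seq("-Xdoclint:none")
{code}

Note this silences the errors rather than fixing them, so broken links and malformed tags would ship in the published docs.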

The master builder and PR builder should both be running Java 8 right now. The dedicated doc builder jobs are still using Java 7 (for convoluted legacy reasons) but I'll push a conf change to fix that.

Assuming that we want to keep the stricter validation: [~hyukjin.kwon], could you help fix the current Javadoc breaks and include the above diff so that unidoc is tested as part of our dev/run-tests process? I'll be happy to help review and merge the fix.

> Test Java 8 unidoc build on Jenkins master builder
> --------------------------------------------------
>
>                 Key: SPARK-18692
>                 URL: https://issues.apache.org/jira/browse/SPARK-18692
>             Project: Spark
>          Issue Type: Test
>          Components: Build, Documentation
>            Reporter: Joseph K. Bradley
>              Labels: jenkins
>
> [SPARK-3359] fixed the unidoc build for Java 8, but it is easy to break.  It would be great to add this build to the Spark master builder on Jenkins to make it easier to identify PRs which break doc builds.



