Posted to issues@spark.apache.org by "Narine Kokhlikyan (JIRA)" <ji...@apache.org> on 2016/06/20 22:43:57 UTC
[jira] [Updated] (SPARK-16082) Refactor dapply's/dapplyCollect's documentation - remove duplicated comments
[ https://issues.apache.org/jira/browse/SPARK-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Narine Kokhlikyan updated SPARK-16082:
--------------------------------------
Description:
Currently, when we generate the R documentation for dapply and dapplyCollect, we see duplicated information, such as:
Arguments
``
x
A SparkDataFrame
func
A function to be applied to each partition of the SparkDataFrame. func should have only one parameter, to which a data.frame corresponds to each partition will be passed. The output of func should be a data.frame.
schema
The schema of the resulting SparkDataFrame after the function is applied. It must match the output of func.
x
A SparkDataFrame
func
A function to be applied to each partition of the SparkDataFrame. func should have only one parameter, to which a data.frame corresponds to each partition will be passed. The output of func should be a data.frame.
See Also
Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, createOrReplaceTempView, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, gapplyCollect, gapply, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, with, write.df, write.jdbc, write.json, write.parquet, write.text
Other SparkDataFrame functions: SparkDataFrame-class, [[, agg, arrange, as.data.frame, attach, cache, collect, colnames, coltypes, columns, count, createOrReplaceTempView, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, gapplyCollect, gapply, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, persist, printSchema, rename, repartition, sample, saveAsTable, selectExpr, select, showDF, show, str, take, unionAll, unpersist, withColumn, with, write.df, write.jdbc, write.json, write.parquet, write.text
``
This happens because the @rdname of dapply and dapplyCollect refer to the same file.
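For illustration, the pattern below is a simplified sketch of how this arises (the actual SparkR source differs; method signatures are elided). When two roxygen2 blocks carry the same @rdname, roxygen2 merges both blocks into a single .Rd file, so @param tags repeated in each block appear twice in the generated Arguments section:

```r
# Sketch (hypothetical simplification of the SparkR generics):
# both blocks use @rdname dapply, so roxygen2 merges them into one
# dapply.Rd and the repeated @param x / @param func entries duplicate.

#' @param x A SparkDataFrame
#' @param func A function to be applied to each partition of the SparkDataFrame.
#' @param schema The schema of the resulting SparkDataFrame.
#' @rdname dapply
setMethod("dapply", ...)

#' @param x A SparkDataFrame        # repeated -> duplicated in dapply.Rd
#' @param func A function ...       # repeated -> duplicated in dapply.Rd
#' @rdname dapply
setMethod("dapplyCollect", ...)
```

One possible refactoring, consistent with the ticket's intent, is to document each parameter in only one of the blocks; since both blocks share the same @rdname, the merged Rd file still documents every argument exactly once:

```r
#' @rdname dapply
setMethod("dapplyCollect", ...)  # x and func are documented once, in the dapply block
```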
was:
(identical to the description above, except that the final line read: "This happens because the readme of dapply and dapplyCollect refer to the same rd file.")
> Refactor dapply's/dapplyCollect's documentation - remove duplicated comments
> ----------------------------------------------------------------------------
>
> Key: SPARK-16082
> URL: https://issues.apache.org/jira/browse/SPARK-16082
> Project: Spark
> Issue Type: Bug
> Components: SparkR
> Reporter: Narine Kokhlikyan
> Priority: Minor
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org