Posted to dev@spark.apache.org by Yi Wu <yi...@databricks.com> on 2020/08/17 03:30:10 UTC

Re: [DISCUSS] Apache Spark 3.0.1 Release

Hi Ruifeng, thank you for your work. I have a backport PR for 3.0:
https://github.com/apache/spark/pull/29395. It is waiting for tests now.

Best,
Yi

On Wed, Aug 5, 2020 at 10:57 AM 郑瑞峰 <ru...@foxmail.com> wrote:

> Hi all,
> I am going to prepare the release of 3.0.1 RC1, with the help of Wenchen.
>
>
> ------------------ Original Message ------------------
> *From:* "Jason Moore" <Ja...@quantium.com.au.INVALID>;
> *Sent:* Thursday, 30 July 2020, 10:35 AM
> *To:* "dev"<de...@spark.apache.org>;
> *主题:* Re: [DISCUSS] Apache Spark 3.0.1 Release
>
> Hi all,
>
>
>
> Discussion around 3.0.1 seems to have trickled away. What was blocking
> the release process from kicking off? I can see some unresolved bugs raised
> against 3.0.0, but conversely there were quite a few critical correctness
> fixes waiting to be released.
>
>
>
> Cheers,
>
> Jason.
>
>
>
> *From: *Takeshi Yamamuro <li...@gmail.com>
> *Date: *Wednesday, 15 July 2020 at 9:00 am
> *To: *Shivaram Venkataraman <sh...@eecs.berkeley.edu>
> *Cc: *"dev@spark.apache.org" <de...@spark.apache.org>
> *Subject: *Re: [DISCUSS] Apache Spark 3.0.1 Release
>
>
>
> > Just wanted to check if there are any blockers that we are still waiting
> for to start the new release process.
>
> I don't see any ongoing blockers in my area.
>
> Thanks for the notification.
>
>
>
> Bests,
>
> Takeshi
>
>
>
> On Wed, Jul 15, 2020 at 4:03 AM Dongjoon Hyun <do...@gmail.com>
> wrote:
>
> Hi, Yi.
>
>
>
> Could you explain why you think that is a blocker? For the given example
> from the JIRA description,
>
>
>
> spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
>
> Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")
>
> checkAnswer(sql("SELECT key(a) AS k FROM t GROUP BY key(a)"), Row(1) :: Nil)
>
>
>
> Apache Spark 3.0.0 seems to work like the following.
>
>
>
> scala> spark.version
>
> res0: String = 3.0.0
>
>
>
> scala> spark.udf.register("key", udf((m: Map[String, String]) =>
> m.keys.head.toInt))
>
> res1: org.apache.spark.sql.expressions.UserDefinedFunction =
> SparkUserDefinedFunction($Lambda$1958/948653928@5d6bed7b,IntegerType,List(Some(class[value[0]:
> map<string,string>])),None,false,true)
>
>
>
> scala> Seq(Map("1" -> "one", "2" ->
> "two")).toDF("a").createOrReplaceTempView("t")
>
>
>
> scala> sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect
>
> res3: Array[org.apache.spark.sql.Row] = Array([1])
>
>
>
> Could you provide a reproducible example?
>
>
>
> Bests,
>
> Dongjoon.
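
For anyone who wants to run the snippet above outside Spark's own test suite, here is a minimal
self-contained sketch: checkAnswer is a helper from Spark's SQL test code, so this version collects
the result and asserts on it instead. The object name and the local[*] master are only illustrative.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object Spark32307Check {
  def main(args: Array[String]): Unit = {
    // Local session only for illustration; any Spark 3.0.x deployment behaves the same way.
    val spark = SparkSession.builder().master("local[*]").appName("SPARK-32307 check").getOrCreate()
    import spark.implicits._

    // Same UDF as in the JIRA description: take a map column and return its first key as an Int.
    spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))

    Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")

    // Group by the UDF result and collect; the expected answer is a single row containing 1.
    val result = spark.sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect()
    assert(result.map(_.getInt(0)).toSeq == Seq(1), s"unexpected result: ${result.mkString(", ")}")

    spark.stop()
  }
}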
>
>
>
>
>
> On Tue, Jul 14, 2020 at 10:04 AM Yi Wu <yi...@databricks.com> wrote:
>
> This is probably a blocker:
> https://issues.apache.org/jira/browse/SPARK-32307
>
>
>
> On Tue, Jul 14, 2020 at 11:13 PM Sean Owen <sr...@gmail.com> wrote:
>
> https://issues.apache.org/jira/browse/SPARK-32234 ?
>
> On Tue, Jul 14, 2020 at 9:57 AM Shivaram Venkataraman
> <sh...@eecs.berkeley.edu> wrote:
> >
> > Hi all
> >
> > Just wanted to check if there are any blockers that we are still waiting
> for to start the new release process.
> >
> > Thanks
> > Shivaram
> >
>
>
>
>
> --
>
> ---
> Takeshi Yamamuro
>