Posted to dev@spark.apache.org by shane knapp <sk...@berkeley.edu> on 2018/07/24 20:31:40 UTC

Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

revisiting this thread...

i pushed a small change to some R test code (
https://github.com/apache/spark/pull/21864), and the appveyor build timed
out after 90 minutes:

https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2440-master

to be honest, i don't have a lot of time to debug *why* this happened, or
how to go about triggering another build, but at the very least we should
up the timeout.
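The arithmetic behind this worry can be sketched quickly. Using the ballpark figures quoted later in this thread (build ~35-40 mins, R tests ~25-30 mins) against the 90-minute plan limit from the timeout error, a rough headroom check looks like this (a minimal sketch; nothing here queries AppVeyor itself):

```python
# Rough headroom check against AppVeyor's 90-minute plan limit.
# The durations are the ballpark figures quoted in this thread.
LIMIT_MIN = 90  # "maximum allowed time for your plan (90 minutes)"

def headroom(build_min: float, r_tests_min: float, limit_min: float = LIMIT_MIN) -> float:
    """Minutes left before a build + R-test run hits the plan limit."""
    return limit_min - (build_min + r_tests_min)

# A typical run (~40 min build, ~30 min R tests) leaves about 20 minutes,
# so a single slow dependency download can eat the entire margin.
print(headroom(40, 30))
```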

On Sun, May 13, 2018 at 7:38 PM, Hyukjin Kwon <gu...@gmail.com> wrote:

> Yup, I am not saying it's required, but it might be better since that's
> what the guide suggests, and at least I'm seeing rebases more frequently.
> Also, merge commits usually trigger the AppVeyor build if they include
> changes to the R code.
> It's fine to merge the commits, but it's better to rebase to save AppVeyor
> resources and prevent such confusion.
>
>
> 2018-05-14 10:05 GMT+08:00 Holden Karau <ho...@pigscanfly.ca>:
>
>> On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <gu...@gmail.com> wrote:
>>
>>> From a very quick look, I believe that's just an occasional network issue
>>> in AppVeyor. For example, in this case:
>>>   Downloading: https://repo.maven.apache.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
>>> This took 26ish mins, and downloading the subsequent jars also seems to
>>> have taken much longer than usual.
>>>
>>> FYI, the build usually takes 35 ~ 40 mins and the R tests 25 ~ 30 mins,
>>> so a run usually ends up around 1 hour 5 mins.
>>> I will take another look at reducing the time if the usual runtime
>>> reaches 1 hour and 30 mins (the current AppVeyor limit).
>>> I did this a few times before - https://github.com/apache/spark/pull/19722
>>> and https://github.com/apache/spark/pull/19816.
>>>
>>> The timeout was already increased from 1 hour to 1 hour and 30 mins. They
>>> still seem unwilling to increase it any further.
>>> I contacted them a few times to request this manually.
>>>
>>> Ideally, I believe we should just rebase rather than merge the
>>> commits in any case, as mentioned in the contribution guide.
>>>
>> I don’t recall this being a thing that we actually go that far in
>> encouraging. The guide says rebases are one of the ways folks can keep
>> their PRs up to date, but no actual preference is stated. I tend to see
>> PRs from different folks doing either rebases or merges, since we squash
>> commits anyway.
>>
>> I know that for some developers keeping their branch up to date with merge
>> commits tends to be less effort, and provided the diff is still clear and
>> the resulting merge is also clean, I don’t see an issue.
>>
>>> The test failure in the PR should be ignorable if it's not directly
>>> related to SparkR.
>>>
>>>
>>> Thanks.
>>>
>>>
>>>
>>> 2018-05-14 8:45 GMT+08:00 Ilan Filonenko <if...@cornell.edu>:
>>>
>>>> Hi dev,
>>>>
>>>> I recently updated an on-going PR
>>>> [https://github.com/apache/spark/pull/21092] with a merge that included
>>>> a lot of commits from master, and I got the following error:
>>>>
>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>
>>>> due to:
>>>>
>>>> *Build execution time has reached the maximum allowed time for your
>>>> plan (90 minutes).*
>>>>
>>>> seen here:
>>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master
>>>>
>>>> As this is the first time I am seeing this, I am wondering whether it
>>>> is related to the large merge and, if so, whether the timeout can be
>>>> increased.
>>>>
>>>> Thanks!
>>>>
>>>> Best,
>>>> Ilan Filonenko
>>>>
>>>
>>> --
>> Twitter: https://twitter.com/holdenkarau
>>
>
>


-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu
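The rebase-versus-merge point in the quoted exchange above can be shown concretely. Below is a minimal, runnable sketch in a throwaway repository (branch and file names are made up for illustration): the feature branch is replayed on top of the updated master, so no merge commit is created and history stays linear.

```python
import subprocess
import tempfile
from pathlib import Path

repo = tempfile.mkdtemp()

def git(*args: str) -> str:
    """Run a git command in the throwaway repo and return its stdout."""
    out = subprocess.run(("git",) + args, cwd=repo, check=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

git("init", "-q")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "dev")
Path(repo, "base.txt").write_text("base")
git("add", "base.txt"); git("commit", "-qm", "base")
git("branch", "-M", "master")           # normalize the branch name
git("checkout", "-qb", "my-feature")    # start the PR branch
Path(repo, "feature.txt").write_text("feature")
git("add", "feature.txt"); git("commit", "-qm", "feature work")
git("checkout", "-q", "master")         # meanwhile, master moves on
Path(repo, "upstream.txt").write_text("upstream")
git("add", "upstream.txt"); git("commit", "-qm", "upstream change")
git("checkout", "-q", "my-feature")
git("rebase", "-q", "master")           # rebase instead of merge
print(git("log", "--oneline"))
```

After the rebase, `git log` shows a linear history with no merge commit; a merge of master into the branch would instead have added one, which (per the discussion above) can also trigger an extra AppVeyor run when R files are touched.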

Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Posted by Hyukjin Kwon <gu...@gmail.com>.
Eh, I believe we have been using this since we decided to adopt it
in SPARK-17200. (You probably mean Travis CI(?))

It's still two clicks of the mouse to close and reopen :-) but yeah, I get
that it could be a bother; to be clear, it's a workaround.
As far as I know, we should open an INFRA JIRA to request that committers
be allowed to retrigger AppVeyor CI builds via the Web UI, but I am not yet
sure if that's possible and haven't checked.
Even in that case, however, I believe PR authors (contributors) would still
have to close and reopen manually to retrigger the tests, if I am not
mistaken.
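For reference, the close-and-reopen workaround described above can also be scripted against GitHub's REST API (a PATCH on the pull request with `state` set to `"closed"`, then `"open"`). A minimal sketch; the requests are only constructed here, not sent, and the token is a placeholder:

```python
import json
import urllib.request

def pr_state_request(owner: str, repo: str, number: int,
                     state: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a PATCH request that sets a PR's state.

    Sending the "closed" request and then the "open" one has the same
    effect as closing and reopening in the web UI, which retriggers the
    AppVeyor check on the PR.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}"
    return urllib.request.Request(
        url,
        data=json.dumps({"state": state}).encode(),
        method="PATCH",
        headers={
            "Accept": "application/vnd.github.v3+json",
            "Authorization": f"token {token}",
        },
    )

# To actually retrigger, send both requests with urllib.request.urlopen():
#   urlopen(pr_state_request("apache", "spark", 21864, "closed", token))
#   urlopen(pr_state_request("apache", "spark", 21864, "open", token))
req = pr_state_request("apache", "spark", 21864, "closed", "<token>")
print(req.get_method(), req.full_url)
```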


On Wed, Jul 25, 2018 at 9:44 AM, shane knapp <sk...@berkeley.edu> wrote:

> out of curiosity:  why are we using appveyor again?
>
> closing and reopening PRs solely to retrigger builds seems...  cumbersome.
>
> shane
>
> On Tue, Jul 24, 2018 at 6:09 PM, Hyukjin Kwon <gu...@gmail.com> wrote:
>
>> Looks like we are getting close indeed ...
>> Fortunately(?), it still looks a bit unusual given the history (
>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/history)
>>
>> As far as I know, a simple workaround is just to close and reopen the PR,
>> which retriggers the build. I believe this approach is already commonly
>> used in other projects too.
>>
>> Just FWIW, I talked about this here (
>> https://github.com/apache/spark/pull/20146#issuecomment-406132543) too,
>> with possible solutions to handle this.
>>
>>
>>
>>
>> On Wed, Jul 25, 2018 at 4:32 AM, shane knapp <sk...@berkeley.edu> wrote:
>>
>>> revisiting this thread...
>>>
>>> i pushed a small change to some R test code (
>>> https://github.com/apache/spark/pull/21864), and the appveyor build
>>> timed out after 90 minutes:
>>>
>>>
>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2440-master
>>>
>>> to be honest, i don't have a lot of time to debug *why* this happened,
>>> or how to go about triggering another build, but at the very least we
>>> should up the timeout.
>>>
>>> On Sun, May 13, 2018 at 7:38 PM, Hyukjin Kwon <gu...@gmail.com>
>>> wrote:
>>>
>>>> Yup, I am not saying it's required, but it might be better since that's
>>>> what the guide suggests, and at least I'm seeing rebases more
>>>> frequently.
>>>> Also, merge commits usually trigger the AppVeyor build if they include
>>>> changes to the R code.
>>>> It's fine to merge the commits, but it's better to rebase to save
>>>> AppVeyor resources and prevent such confusion.
>>>>
>>>>
>>>> 2018-05-14 10:05 GMT+08:00 Holden Karau <ho...@pigscanfly.ca>:
>>>>
>>>>> On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <gu...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> From a very quick look, I believe that's just an occasional network
>>>>>> issue in AppVeyor. For example, in this case:
>>>>>>   Downloading:
>>>>>> https://repo.maven.apache.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
>>>>>> This took 26ish mins, and downloading the subsequent jars also seems
>>>>>> to have taken much longer than usual.
>>>>>>
>>>>>> FYI, the build usually takes 35 ~ 40 mins and the R tests 25 ~ 30
>>>>>> mins, so a run usually ends up around 1 hour 5 mins.
>>>>>> I will take another look at reducing the time if the usual runtime
>>>>>> reaches 1 hour and 30 mins (the current AppVeyor limit).
>>>>>> I did this a few times before -
>>>>>> https://github.com/apache/spark/pull/19722 and
>>>>>> https://github.com/apache/spark/pull/19816.
>>>>>>
>>>>>> The timeout was already increased from 1 hour to 1 hour and 30 mins.
>>>>>> They still seem unwilling to increase it any further.
>>>>>> I contacted them a few times to request this manually.
>>>>>>
>>>>>> Ideally, I believe we should just rebase rather than merge the
>>>>>> commits in any case, as mentioned in the contribution guide.
>>>>>>
>>>>> I don’t recall this being a thing that we actually go that far in
>>>>> encouraging. The guide says rebases are one of the ways folks can keep
>>>>> their PRs up to date, but no actual preference is stated. I tend to see
>>>>> PRs from different folks doing either rebases or merges, since we
>>>>> squash commits anyway.
>>>>>
>>>>> I know that for some developers keeping their branch up to date with
>>>>> merge commits tends to be less effort, and provided the diff is still
>>>>> clear and the resulting merge is also clean, I don’t see an issue.
>>>>>
>>>>>> The test failure in the PR should be ignorable if it's not directly
>>>>>> related to SparkR.
>>>>>>
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2018-05-14 8:45 GMT+08:00 Ilan Filonenko <if...@cornell.edu>:
>>>>>>
>>>>>>> Hi dev,
>>>>>>>
>>>>>>> I recently updated an on-going PR
>>>>>>> [https://github.com/apache/spark/pull/21092] with a merge that
>>>>>>> included a lot of commits from master, and I got the following
>>>>>>> error:
>>>>>>>
>>>>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>>>>
>>>>>>> due to:
>>>>>>>
>>>>>>> *Build execution time has reached the maximum allowed time for your
>>>>>>> plan (90 minutes).*
>>>>>>>
>>>>>>> seen here:
>>>>>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master
>>>>>>>
>>>>>>> As this is the first time I am seeing this, I am wondering whether
>>>>>>> it is related to the large merge and, if so, whether the timeout can
>>>>>>> be increased.
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Best,
>>>>>>> Ilan Filonenko
>>>>>>>
>>>>>>
>>>>>> --
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Shane Knapp
>>> UC Berkeley EECS Research / RISELab Staff Technical Lead
>>> https://rise.cs.berkeley.edu
>>>
>>
>
>
> --
> Shane Knapp
> UC Berkeley EECS Research / RISELab Staff Technical Lead
> https://rise.cs.berkeley.edu
>

Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Posted by shane knapp <sk...@berkeley.edu>.
out of curiosity:  why are we using appveyor again?

closing and reopening PRs solely to retrigger builds seems...  cumbersome.

shane

On Tue, Jul 24, 2018 at 6:09 PM, Hyukjin Kwon <gu...@gmail.com> wrote:

> Looks like we are getting close indeed ...
> Fortunately(?), it still looks a bit unusual given the history (
> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/history)
>
> As far as I know, a simple workaround is just to close and reopen the PR,
> which retriggers the build. I believe this approach is already commonly
> used in other projects too.
>
> Just FWIW, I talked about this here (
> https://github.com/apache/spark/pull/20146#issuecomment-406132543) too,
> with possible solutions to handle this.
>
>
>
>
> On Wed, Jul 25, 2018 at 4:32 AM, shane knapp <sk...@berkeley.edu> wrote:
>
>> revisiting this thread...
>>
>> i pushed a small change to some R test code (
>> https://github.com/apache/spark/pull/21864), and the appveyor build timed
>> out after 90 minutes:
>>
>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2440-master
>>
>> to be honest, i don't have a lot of time to debug *why* this happened, or
>> how to go about triggering another build, but at the very least we should
>> up the timeout.
>>
>> On Sun, May 13, 2018 at 7:38 PM, Hyukjin Kwon <gu...@gmail.com>
>> wrote:
>>
>>> Yup, I am not saying it's required, but it might be better since that's
>>> what the guide suggests, and at least I'm seeing rebases more
>>> frequently.
>>> Also, merge commits usually trigger the AppVeyor build if they include
>>> changes to the R code.
>>> It's fine to merge the commits, but it's better to rebase to save
>>> AppVeyor resources and prevent such confusion.
>>>
>>>
>>> 2018-05-14 10:05 GMT+08:00 Holden Karau <ho...@pigscanfly.ca>:
>>>
>>>> On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <gu...@gmail.com>
>>>> wrote:
>>>>
>>>>> From a very quick look, I believe that's just an occasional network
>>>>> issue in AppVeyor. For example, in this case:
>>>>>   Downloading:
>>>>> https://repo.maven.apache.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
>>>>> This took 26ish mins, and downloading the subsequent jars also seems
>>>>> to have taken much longer than usual.
>>>>>
>>>>> FYI, the build usually takes 35 ~ 40 mins and the R tests 25 ~ 30
>>>>> mins, so a run usually ends up around 1 hour 5 mins.
>>>>> I will take another look at reducing the time if the usual runtime
>>>>> reaches 1 hour and 30 mins (the current AppVeyor limit).
>>>>> I did this a few times before -
>>>>> https://github.com/apache/spark/pull/19722 and
>>>>> https://github.com/apache/spark/pull/19816.
>>>>>
>>>>> The timeout was already increased from 1 hour to 1 hour and 30 mins.
>>>>> They still seem unwilling to increase it any further.
>>>>> I contacted them a few times to request this manually.
>>>>>
>>>>> Ideally, I believe we should just rebase rather than merge the
>>>>> commits in any case, as mentioned in the contribution guide.
>>>>>
>>>> I don’t recall this being a thing that we actually go that far in
>>>> encouraging. The guide says rebases are one of the ways folks can keep
>>>> their PRs up to date, but no actual preference is stated. I tend to see
>>>> PRs from different folks doing either rebases or merges, since we
>>>> squash commits anyway.
>>>>
>>>> I know that for some developers keeping their branch up to date with
>>>> merge commits tends to be less effort, and provided the diff is still
>>>> clear and the resulting merge is also clean, I don’t see an issue.
>>>>
>>>>> The test failure in the PR should be ignorable if it's not directly
>>>>> related to SparkR.
>>>>>
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>>
>>>>> 2018-05-14 8:45 GMT+08:00 Ilan Filonenko <if...@cornell.edu>:
>>>>>
>>>>>> Hi dev,
>>>>>>
>>>>>> I recently updated an on-going PR
>>>>>> [https://github.com/apache/spark/pull/21092] with a merge that
>>>>>> included a lot of commits from master, and I got the following error:
>>>>>>
>>>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>>>
>>>>>> due to:
>>>>>>
>>>>>> *Build execution time has reached the maximum allowed time for your
>>>>>> plan (90 minutes).*
>>>>>>
>>>>>> seen here:
>>>>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master
>>>>>>
>>>>>> As this is the first time I am seeing this, I am wondering whether it
>>>>>> is related to the large merge and, if so, whether the timeout can be
>>>>>> increased.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> Best,
>>>>>> Ilan Filonenko
>>>>>>
>>>>>
>>>>> --
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>
>>>
>>
>>
>> --
>> Shane Knapp
>> UC Berkeley EECS Research / RISELab Staff Technical Lead
>> https://rise.cs.berkeley.edu
>>
>


-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu

Re: Build timeout -- continuous-integration/appveyor/pr — AppVeyor build failed

Posted by Hyukjin Kwon <gu...@gmail.com>.
Looks like we are getting close indeed ...
Fortunately(?), it still looks a bit unusual given the history (
https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/history)

As far as I know, a simple workaround is just to close and reopen the PR,
which retriggers the build. I believe this approach is already commonly
used in other projects too.

Just FWIW, I talked about this here (
https://github.com/apache/spark/pull/20146#issuecomment-406132543) too,
with possible solutions to handle this.




On Wed, Jul 25, 2018 at 4:32 AM, shane knapp <sk...@berkeley.edu> wrote:

> revisiting this thread...
>
> i pushed a small change to some R test code (
> https://github.com/apache/spark/pull/21864), and the appveyor build timed
> out after 90 minutes:
>
>
> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2440-master
>
> to be honest, i don't have a lot of time to debug *why* this happened, or
> how to go about triggering another build, but at the very least we should
> up the timeout.
>
> On Sun, May 13, 2018 at 7:38 PM, Hyukjin Kwon <gu...@gmail.com> wrote:
>
>> Yup, I am not saying it's required, but it might be better since that's
>> what the guide suggests, and at least I'm seeing rebases more
>> frequently.
>> Also, merge commits usually trigger the AppVeyor build if they include
>> changes to the R code.
>> It's fine to merge the commits, but it's better to rebase to save
>> AppVeyor resources and prevent such confusion.
>>
>>
>> 2018-05-14 10:05 GMT+08:00 Holden Karau <ho...@pigscanfly.ca>:
>>
>>> On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <gu...@gmail.com>
>>> wrote:
>>>
>>>> From a very quick look, I believe that's just an occasional network
>>>> issue in AppVeyor. For example, in this case:
>>>>   Downloading:
>>>> https://repo.maven.apache.org/maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
>>>> This took 26ish mins, and downloading the subsequent jars also seems
>>>> to have taken much longer than usual.
>>>>
>>>> FYI, the build usually takes 35 ~ 40 mins and the R tests 25 ~ 30
>>>> mins, so a run usually ends up around 1 hour 5 mins.
>>>> I will take another look at reducing the time if the usual runtime
>>>> reaches 1 hour and 30 mins (the current AppVeyor limit).
>>>> I did this a few times before -
>>>> https://github.com/apache/spark/pull/19722 and
>>>> https://github.com/apache/spark/pull/19816.
>>>>
>>>> The timeout was already increased from 1 hour to 1 hour and 30 mins.
>>>> They still seem unwilling to increase it any further.
>>>> I contacted them a few times to request this manually.
>>>>
>>>> Ideally, I believe we should just rebase rather than merge the
>>>> commits in any case, as mentioned in the contribution guide.
>>>>
>>> I don’t recall this being a thing that we actually go that far in
>>> encouraging. The guide says rebases are one of the ways folks can keep
>>> their PRs up to date, but no actual preference is stated. I tend to see
>>> PRs from different folks doing either rebases or merges, since we
>>> squash commits anyway.
>>>
>>> I know that for some developers keeping their branch up to date with
>>> merge commits tends to be less effort, and provided the diff is still
>>> clear and the resulting merge is also clean, I don’t see an issue.
>>>
>>>> The test failure in the PR should be ignorable if it's not directly
>>>> related to SparkR.
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>>
>>>> 2018-05-14 8:45 GMT+08:00 Ilan Filonenko <if...@cornell.edu>:
>>>>
>>>>> Hi dev,
>>>>>
>>>>> I recently updated an on-going PR [
>>>>> https://github.com/apache/spark/pull/21092] with a merge that included
>>>>> a lot of commits from master, and I got the following error:
>>>>>
>>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>>
>>>>> due to:
>>>>>
>>>>> *Build execution time has reached the maximum allowed time for your
>>>>> plan (90 minutes).*
>>>>>
>>>>> seen here:
>>>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master
>>>>>
>>>>> As this is the first time I am seeing this, I am wondering whether it
>>>>> is related to the large merge and, if so, whether the timeout can be
>>>>> increased.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Best,
>>>>> Ilan Filonenko
>>>>>
>>>>
>>>> --
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>
>>
>
>
> --
> Shane Knapp
> UC Berkeley EECS Research / RISELab Staff Technical Lead
> https://rise.cs.berkeley.edu
>