Posted to dev@cassandra.apache.org by Radim Kolar <hs...@sendmail.cz> on 2011/12/20 14:42:17 UTC

major version release schedule

http://www.mail-archive.com/dev@cassandra.apache.org/msg01549.html

I read it, but things are different now because the magic 1.0 is out. If 
you deploy 1.0 in production, you really do not want to retest your 
application against a new version every 4 months, and it's unlikely you 
will get a migration approved by management unless you can present clear 
benefits. Compression was a nice new feature of 1.0, but a lot of IT 
managers rejected it as "too risky" for now.

While you can test an application quite easily, testing cluster 
stability is much harder, because it is usually not possible to fully 
replicate production workload and data volume in a test environment, and 
migrating back is difficult because Cassandra currently has no tool for 
a fast sstable downgrade (1.0 -> 0.8).
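The downgrade gap is concrete: each release writes sstables in a format version that older code cannot read, so rolling back means rewriting data. As a rough sketch only (the filename pattern and version tokens below are simplified assumptions, not the exact on-disk naming scheme), a script could flag the files a rollback target couldn't open:

```python
import re

# Simplified sstable filename pattern, assumed for illustration:
#   <keyspace>-<columnfamily>-<format-version>-<generation>-Data.db
# e.g. "Keyspace1-Standard1-hc-1-Data.db" carries format token "hc".
SSTABLE_RE = re.compile(
    r"^(?P<ks>[^-]+)-(?P<cf>[^-]+)-(?P<version>[a-z]+)-(?P<gen>\d+)-Data\.db$"
)

def sstable_version(filename):
    """Return the format-version token embedded in an sstable Data file
    name, or None if the name doesn't match the assumed pattern."""
    m = SSTABLE_RE.match(filename)
    return m.group("version") if m else None

def incompatible_with(filenames, max_readable_version):
    """List files whose format token sorts after the newest token the
    rollback target can read (newer formats are unreadable by old code)."""
    flagged = []
    for name in filenames:
        version = sstable_version(name)
        if version is not None and version > max_readable_version:
            flagged.append(name)
    return flagged
```

With tokens that sort lexically (say "g" for a 0.8-era format and "hc" for a 1.0-era one, both illustrative), `incompatible_with(files, "g")` would flag every 1.0-format file, i.e. everything a hypothetical downgrade tool would have to rewrite.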

For production use, a longer time between major releases is better. I 
would double the time between major releases, maybe not for 1.1/1.2, but 
later for sure. Take a look at the PostgreSQL project: they release one 
major version per year, they support four major versions with bugfixes, 
and older PostgreSQL versions are still common in production.
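The PostgreSQL comparison has simple arithmetic behind it: the bugfix window a user sees is the number of concurrently supported majors divided by the release cadence. A quick sketch (the Cassandra-side numbers are assumptions derived from the 4-month cycle mentioned above):

```python
def support_window_years(majors_per_year, supported_majors):
    """Years of bugfix support a major release receives under a policy of
    maintaining `supported_majors` branches at `majors_per_year` cadence."""
    return supported_majors / majors_per_year

# PostgreSQL policy as cited: 1 major/year, 4 supported majors -> 4-year window.
postgres_window = support_window_years(1, 4)

# Hypothetical Cassandra numbers: ~3 majors/year (the 4-month cycle) and,
# say, 2 maintained branches -> well under a year of bugfix support.
cassandra_window = support_window_years(3, 2)
```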

Did you asked people running mission critical workloads about their 
opinion? Another possibility is to use ISV like Datastax to provide long 
term support.

Re: major version release schedule

Posted by Peter Schuller <pe...@infidyne.com>.
Here is another thing to consider: There is considerable cost involved
in running/developing on old branches as the divergence between the
version you're running and trunk increases.

For those actively doing development, such divergence actually causes
extra work and slows down development.

A more reasonable approach IMO is to make sure that important
*bugfixes* are backported to branches that are sufficiently old to
satisfy the criteria of the OP. But that is orthogonal to how often
new releases happen.

The OP compares with PostgreSQL, and they're in a similar position.
You can run on a fairly old version and still get critical bug fixes,
meaning that if you don't actually need the new version there is no
one telling you that you must upgrade.

It seems to me that what matters are mostly two things:

(1) When you *do* need/want to upgrade, the upgrade path you care about
needs to be stable and working, and the version you're upgrading to
should be stable.
(2) Critical fixes still need to be maintained for the version you're
running (else you are in fact kind of forced to upgrade).

-- 
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)

Re: major version release schedule

Posted by Tatu Saloranta <ts...@gmail.com>.
On Tue, Dec 20, 2011 at 6:16 AM, Jonathan Ellis <jb...@gmail.com> wrote:
> Nobody's forcing you to upgrade.  If you want twice as much time
> between upgrading, just wait for 1.2.  In the meantime, people who
> need the features in 1.1 also get those early (no, running trunk in
> production isn't a serious option).  I don't see any real benefit for
> you in forcing your preference on everyone, and I see a big negative
> for some.
>
> It's also worth noting that waiting for 2x as many features for freeze
> will result in MORE than 2x as much complexity for tracking down
> regressions.  Given the limited testing we get during freeze, I think
> that's a pretty strong argument in favor of more-frequent, smaller
> releases.

+1. I really don't see why anyone would feel forced to upgrade just
because a new version is available.

-+ Tatu +-

Re: major version release schedule

Posted by Drew Kutcharian <dr...@venarc.com>.
I think there are a couple of different ideas at play here:

1) Time to release

2) Quality of the release

IMO, the issue that affects most people is the quality of the release. So when someone says we should slow down the release cycle, I think what they mean is that we should spend more time improving the quality of the releases. If there were a process that could ensure the quality of each release, especially the newly added features, I don't think anyone would complain about having quick releases.

-- Drew


On Dec 20, 2011, at 4:15 PM, Peter Schuller wrote:

>> Until recently we were working hard to reach a set of goals that
>> culminated in a 1.0 release.  I'm not sure we've had a formal
>> discussion on it, but just talking to people, there seems to be
>> consensus around the idea that we're now shifting our goals and
>> priorities around some (usability, stability, etc).  If that's the
>> case, I think we should at least be open to reevaluating our release
>> process and schedule accordingly (whether that means lengthening,
>> shortening, and/or simply shifting the barrier-to-entry for stable
>> updates).
> 
> Personally I am all for added stability, quality, and testing. But I
> don't see how a decreased release frequency will cause more stability.
> It may be that decreased release frequency is the necessary *result*
> of more stability, but I don't think the causality points in the other
> direction unless developers ship things early to get it into the
> release.
> 
> But also keep in mind: If we reach a point where major users of
> Cassandra need to run on significantly divergent versions of Cassandra
> because the release is just too old, the "normal" mainstream release
> will end up getting even less testing.
> 
> -- 
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)


Re: major version release schedule

Posted by Peter Schuller <pe...@infidyne.com>.
> Until recently we were working hard to reach a set of goals that
> culminated in a 1.0 release.  I'm not sure we've had a formal
> discussion on it, but just talking to people, there seems to be
> consensus around the idea that we're now shifting our goals and
> priorities around some (usability, stability, etc).  If that's the
> case, I think we should at least be open to reevaluating our release
> process and schedule accordingly (whether that means lengthening,
> shortening, and/or simply shifting the barrier-to-entry for stable
> updates).

Personally I am all for added stability, quality, and testing. But I
don't see how a decreased release frequency will cause more stability.
It may be that decreased release frequency is the necessary *result*
of more stability, but I don't think the causality points in the other
direction unless developers ship things early to get it into the
release.

But also keep in mind: If we reach a point where major users of
Cassandra need to run on significantly divergent versions of Cassandra
because the release is just too old, the "normal" mainstream release
will end up getting even less testing.

-- 
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)

Re: major version release schedule

Posted by Eric Evans <ee...@acunu.com>.
On Tue, Dec 20, 2011 at 8:16 AM, Jonathan Ellis <jb...@gmail.com> wrote:
> Nobody's forcing you to upgrade.  If you want twice as much time
> between upgrading, just wait for 1.2.  In the meantime, people who
> need the features in 1.1 also get those early (no, running trunk in
> production isn't a serious option).  I don't see any real benefit for
> you in forcing your preference on everyone, and I see a big negative
> for some.
>
> It's also worth noting that waiting for 2x as many features for freeze
> will result in MORE than 2x as much complexity for tracking down
> regressions.  Given the limited testing we get during freeze, I think
> that's a pretty strong argument in favor of more-frequent, smaller
> releases.

Until recently we were working hard to reach a set of goals that
culminated in a 1.0 release.  I'm not sure we've had a formal
discussion on it, but just talking to people, there seems to be
consensus around the idea that we're now shifting our goals and
priorities around some (usability, stability, etc).  If that's the
case, I think we should at least be open to reevaluating our release
process and schedule accordingly (whether that means lengthening,
shortening, and/or simply shifting the barrier-to-entry for stable
updates).

> On Tue, Dec 20, 2011 at 7:42 AM, Radim Kolar <hs...@sendmail.cz> wrote:
>> http://www.mail-archive.com/dev@cassandra.apache.org/msg01549.html
>>
>> I read it, but things are different now because the magic 1.0 is out. If you
>> deploy 1.0 in production, you really do not want to retest your application
>> against a new version every 4 months, and it's unlikely you will get a
>> migration approved by management unless you can present clear benefits.
>> Compression was a nice new feature of 1.0, but a lot of IT managers
>> rejected it as "too risky" for now.

-- 
Eric Evans
Acunu | http://www.acunu.com | @acunu

Re: major version release schedule

Posted by Radim Kolar <hs...@sendmail.cz>.
> Nobody's forcing you to upgrade.  If you want twice as much time
> between upgrading, just wait for 1.2.
Currently the 1.0 branch is still less stable than 0.8; I still get OOMs 
on some nodes. Adding the 1.1 feature set on top will make it even less stable.
> It's also worth noting that waiting for 2x as many features for freeze
> will result in MORE than 2x as much complexity for tracking down
> regressions.
Then make releases from two branches, stable and dev. That is common 
practice in a lot of software projects.

>  I really don't see why anyone would feel forced to upgrade just because a new version is available.
You are less likely to get bugfixes once a new branch comes out. Also, client libraries need some time to catch up with a new release.



Re: major version release schedule

Posted by Jonathan Ellis <jb...@gmail.com>.
Nobody's forcing you to upgrade.  If you want twice as much time
between upgrading, just wait for 1.2.  In the meantime, people who
need the features in 1.1 also get those early (no, running trunk in
production isn't a serious option).  I don't see any real benefit for
you in forcing your preference on everyone, and I see a big negative
for some.

It's also worth noting that waiting for 2x as many features for freeze
will result in MORE than 2x as much complexity for tracking down
regressions.  Given the limited testing we get during freeze, I think
that's a pretty strong argument in favor of more-frequent, smaller
releases.

On Tue, Dec 20, 2011 at 7:42 AM, Radim Kolar <hs...@sendmail.cz> wrote:
> http://www.mail-archive.com/dev@cassandra.apache.org/msg01549.html
>
> I read it, but things are different now because the magic 1.0 is out. If you
> deploy 1.0 in production, you really do not want to retest your application
> against a new version every 4 months, and it's unlikely you will get a
> migration approved by management unless you can present clear benefits.
> Compression was a nice new feature of 1.0, but a lot of IT managers
> rejected it as "too risky" for now.
>
> While you can test an application quite easily, testing cluster stability is
> much harder, because it is usually not possible to fully replicate production
> workload and data volume in a test environment, and migrating back is
> difficult because Cassandra currently has no tool for a fast sstable
> downgrade (1.0 -> 0.8).
>
> For production use, a longer time between major releases is better. I would
> double the time between major releases, maybe not for 1.1/1.2, but later for
> sure. Take a look at the PostgreSQL project: they release one major version
> per year, they support four major versions with bugfixes, and older
> PostgreSQL versions are still common in production.
>
> Did you ask people running mission-critical workloads for their opinion?
> Another possibility is to use an ISV like DataStax to provide long-term
> support.



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com