Posted to dev@accumulo.apache.org by John Vines <vi...@apache.org> on 2013/05/07 17:10:58 UTC

Re: Releasing 1.5

I would also like to point out that HBase is putting out separate releases
for hadoop1 and hadoop2
(http://www.apache.org/dyn/closer.cgi/hbase/hbase-0.95.0). They also
support both via Maven; however, they implemented a compatibility module
(https://issues.apache.org/jira/browse/HBASE-6405) which brings the schism
down to a single jar that needs to be swapped out. That may be something
we want to consider for 1.6.
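
To make that concrete, a compatibility-module split would mean a downstream
POM pulls in a stable compat API plus exactly one Hadoop-specific
implementation jar. Module names here are hypothetical, just mirroring
HBase's layout for the sake of a sketch:

    <!-- stable compatibility API, the same for every Hadoop line -->
    <dependency>
      <groupId>org.apache.accumulo</groupId>
      <artifactId>accumulo-hadoop-compat</artifactId>
      <version>1.6.0</version>
    </dependency>
    <!-- the one jar that gets interchanged per Hadoop line -->
    <dependency>
      <groupId>org.apache.accumulo</groupId>
      <artifactId>accumulo-hadoop2-compat</artifactId>
      <version>1.6.0</version>
    </dependency>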

The reason I care about this is that I'm building things on top of
Accumulo, but against multiple versions of Hadoop. I want to be able to
easily build against different flavors of Accumulo 1.5 without having to
wipe my local repo, reinstall Accumulo built against my target version of
Hadoop, and so on. It would be so much more convenient to just switch my
Accumulo version from 1.5 to 1.5-hadoop2 and be done with it.
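
For illustration, the difference on my end would then be roughly this in a
downstream POM, assuming we published either a -hadoop2 version suffix or a
hadoop2 classifier (coordinates below are only a sketch):

    <dependency>
      <groupId>org.apache.accumulo</groupId>
      <artifactId>accumulo-core</artifactId>
      <version>1.5.0</version>
      <!-- flip this one line to target hadoop2 instead of rebuilding and
           reinstalling Accumulo into my local repo -->
      <classifier>hadoop2</classifier>
    </dependency>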


On Tue, Apr 30, 2013 at 12:32 AM, John Vines <vi...@apache.org> wrote:

> I've always been an advocate of sticking to vanilla compatibility, while
> maintaining the ability to be compatible with other versions. Hadoop 2-ish
> things are the first case where we are beginning to see broken run-time
> compatibility due to some API changes. While the fragmented state of Hadoop
> creates a larger set of jars, even just hadoop1 vs. hadoop2 is enough to
> break things. I think priority number one should be compile-time
> compatibility with everything, followed by attempts at full runtime
> compatibility. Obviously a single binary can't be runtime-compatible with
> everything, but we can get close with identical source and separately
> compiled artifacts, and I think that may be something we have to do. If
> we're putting in the legwork to know how to successfully run against
> hadoop_variant_8271, we may as well provide a compiled unit for it as well.
>
>
> On Tue, Apr 30, 2013 at 12:01 AM, Josh Elser <jo...@gmail.com> wrote:
>
>> Funny enough, I got hit by these shenanigans last night when I was trying
>> to run trunk against CDH3 locally. After working through jars that were
>> marked as "provided" and weren't, and then running into
>> https://issues.apache.org/jira/browse/ACCUMULO-837,
>> I threw in the towel and called it a night.
>>
>> I think one thing we can all agree upon is that the "fragmented" state of
>> Hadoop distributions is a pain to work around; however, we do have very
>> broad coverage of that variance just on our committer list. Considering
>> Benson's comments on the subject of "supporting" non-Apache Hadoop
>> variants, I would think that it's in our best interest to provide some
>> level of warm-fuzzy in terms of support. I'm worried about making people
>> chase their tails just to get Accumulo up and running on their flavor of
>> choice.
>>
>> As far as what we distribute, I'm still of the mindset that support for
>> building Accumulo against other versions of Hadoop can be satisfied by
>> instructions on how to do so. Thus, I would say that Accumulo's default
>> dependency should continue to track Apache Hadoop's stable release as it currently
>> does (maybe revisiting classifiers for 1.6?). I would say we can revisit
>> the subject of the src jars we publish when/if a flavor breaks Accumulo's
>> compilation.
>>
>> Thoughts?
>>
>>
>> On 4/26/2013 4:35 PM, John Vines wrote:
>>
>>> I had issues running a hadoop2-compiled version of Accumulo against
>>> CDH4, though I can't remember the specifics.
>>>
>>>
>>> When I said specialized packaging, I was thinking of a naming convention
>>> to distinguish hadoop1 vs. hadoop2 (vs. vendor-specific Hadoop) compiled
>>> jars.
>>>
>>>
>>> On Fri, Apr 26, 2013 at 4:19 PM, Billie Rinaldi
>>> <billie.rinaldi@gmail.com> wrote:
>>>
>>>> I'm not sure we are talking about actual vendor-specific code.  We are
>>>> deciding whether or not to create additional release tarballs that have
>>>> been compiled against various vendors' Hadoop-compatible file systems.
>>>> Assuming that we determine there is nothing prohibiting us from doing
>>>> this, I think it would simply be up to the release manager (i.e. anyone
>>>> who assembles a release and calls a vote for it).  If someone cares
>>>> enough about a particular distribution to build and create an extra
>>>> tarball, they can.  However, I don't think this is common for Apache
>>>> projects -- additional packaging is usually left to supporting
>>>> companies.  I haven't even noticed any releases yet that come in
>>>> Hadoop 1 and Hadoop 2 flavors.
>>>>
>>>> I haven't heard (until now) that Accumulo compiled against an
>>>> appropriate version of Apache Hadoop will not work with CDH, but John
>>>> says that's the case.  John, have you tried this?  Also, what is the
>>>> "specialized packaging" you referred to?
>>>>
>>>>
>>>> On Fri, Apr 26, 2013 at 12:32 PM, David Medinets
>>>> <da...@gmail.com> wrote:
>>>>
>>>>> Does it make sense to put vendor-specific stuff under a
>>>>> contribs/vendors directory? Doing so would certainly indicate that we
>>>>> are vendor-agnostic. And give vendors an obvious place to contribute.
>>>>>
>>>>>
>>
>

Re: Releasing 1.5

Posted by David Medinets <da...@gmail.com>.
How many people are working full-time on HBase development?


Re: Releasing 1.5

Posted by John Vines <vi...@apache.org>.
I am more than content with that assessment.


Re: Releasing 1.5

Posted by Christopher <ct...@apache.org>.
I would love to deploy additional artifacts using classifiers for
hadoop2. We may be able to support that for the jar artifacts in
Maven, with some minor profile tweaks to the POM. (Apache
infrastructure actually allows you to deploy many artifacts to a
staging repo, before closing that staging repo... so it's not
impossible to stage all the hadoop1 stuff, then stage some additional
stuff). I'll try that for RC2 (is there already a ticket open for
this?). However, the assemble module already uses classifiers because
multiple DEBs/RPMs are built in a single module (not following Maven
conventions), so it's going to take some additional project
refactoring in 1.6 before we could put out different
RPMs/DEBs/tarballs for hadoop2. I'm going to go out on a limb here and
say that the Maven artifacts for hadoop2 would be good enough for 1.5.
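
Roughly the kind of minor profile tweak I have in mind -- just an untested
sketch, with the property name, classifier, and hadoop version as
placeholders:

    <profile>
      <id>hadoop-2</id>
      <activation>
        <property>
          <name>hadoop.profile</name>
          <value>2</value>
        </property>
      </activation>
      <properties>
        <!-- placeholder for whatever hadoop2 version we settle on -->
        <hadoop.version>2.0.4-alpha</hadoop.version>
      </properties>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
              <!-- attach the hadoop2 build under a classifier so it can be
                   staged alongside the hadoop1 artifacts -->
              <classifier>hadoop2</classifier>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>

The idea being that a second deploy pass with that profile active stages the
hadoop2-classified jars next to the hadoop1 ones in the same staging repo.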

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii

