Posted to dev@lucenenet.apache.org by Scott Lombard <lo...@gmail.com> on 2011/03/14 15:11:41 UTC

[Lucene.Net] Procedure for Commiting

I wanted to get a final agreement on how we want to handle commits to the
repository.  There have been discussions about this topic in a couple of
different threads.  Patches, branches, and "just go for it" have all been
discussed, and different people have different ideas.  I just want to know
what the group thinks is the right way to handle commits in our project.

In response to Digy: for what I am doing, providing a patch for each Lucene
change is probably going to be more confusing than helpful.  If patches are
what is wanted, then I would advocate creating a branch.  I would commit to
the branch and provide a patch that can be merged back into the trunk when I
am ready.  I still feel tracking the incremental changes has value.
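
Roughly, the branch workflow I have in mind would look something like the
following.  The branch name and URLs here are only placeholders to show the
shape of it, not paths that exist yet:

    # create a working branch off trunk
    svn copy https://svn.apache.org/repos/asf/incubator/lucene.net/trunk \
        https://svn.apache.org/repos/asf/incubator/lucene.net/branches/LUCENENET-399-work \
        -m "[LUCENENET-399] create working branch"

    # check out the branch and commit incremental changes to it as they are ready
    svn checkout https://svn.apache.org/repos/asf/incubator/lucene.net/branches/LUCENENET-399-work
    svn commit -m "[LUCENENET-399] port LUCENE 2283 byte[] pool changes"

    # when done, generate a patch against trunk for review/merge
    svn diff https://svn.apache.org/repos/asf/incubator/lucene.net/trunk \
             https://svn.apache.org/repos/asf/incubator/lucene.net/branches/LUCENENET-399-work \
             > LUCENENET-399.patch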

Scott

> -----Original Message-----
> From: digy digy [mailto:digydigy@gmail.com]
> Sent: Saturday, March 12, 2011 3:44 AM
> To: lucene-net-dev@lucene.apache.org
> Subject: Re: [Lucene.Net] svn commit: r1080881 - in
> /incubator/lucene.net/trunk/C#/src/Lucene.Net: Index/DocumentsWriter.cs
> Index/StoredFieldsWriter.cs Index/TermVectorsTermsWriter.cs
> Index/TermVectorsTermsWriterPerField.cs Store/RAMFile.cs
> Store/RAMOutputStr
> 
> It would be better to attach the patches to the issue before committing.
> So others can track what is going on.
> 
> DIGY
> 
> On Sat, Mar 12, 2011 at 9:20 AM, <sl...@apache.org> wrote:
> 
> > Author: slombard
> > Date: Sat Mar 12 07:20:44 2011
> > New Revision: 1080881
> >
> > URL: http://svn.apache.org/viewvc?rev=1080881&view=rev
> > Log:
> > [LUCENENET-399] (trunk) 2.9.3 - change LUCENE 2283: use shared byte[]
> pool
> > to buffer pending stored fields & term vectors during indexing; fixes
> > excessive memory usage for mixed tiny & big docs with many threads
> >
> > Modified:
> >    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs
> >    incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/StoredFieldsWriter.cs
> >    incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriter.cs
> >    incubator/
> >
> lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriterPerField.cs
> >    incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMFile.cs
> >    incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMOutputStream.cs
> >
> > Modified: incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Index/DocumentsWriter.cs?rev=1080881&r1=1080880&r2=1080881&view=diff
> >
> >
> ==========================================================================
> ====
> > ---
> incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs(orig
> inal)
> > +++
> incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.csSat
> Mar 12 07:20:44 2011
> > @@ -19,15 +19,16 @@ using System;
> >
> >  using Analyzer = Lucene.Net.Analysis.Analyzer;
> >  using Document = Lucene.Net.Documents.Document;
> > -using AlreadyClosedException = Lucene.Net.Store.AlreadyClosedException;
> > -using Directory = Lucene.Net.Store.Directory;
> > -using ArrayUtil = Lucene.Net.Util.ArrayUtil;
> > -using Constants = Lucene.Net.Util.Constants;
> >  using IndexSearcher = Lucene.Net.Search.IndexSearcher;
> >  using Query = Lucene.Net.Search.Query;
> >  using Scorer = Lucene.Net.Search.Scorer;
> >  using Similarity = Lucene.Net.Search.Similarity;
> >  using Weight = Lucene.Net.Search.Weight;
> > +using AlreadyClosedException = Lucene.Net.Store.AlreadyClosedException;
> > +using Directory = Lucene.Net.Store.Directory;
> > +using RAMFile = Lucene.Net.Store.RAMFile;
> > +using ArrayUtil = Lucene.Net.Util.ArrayUtil;
> > +using Constants = Lucene.Net.Util.Constants;
> >
> >  namespace Lucene.Net.Index
> >  {
> > @@ -104,7 +105,7 @@ namespace Lucene.Net.Index
> >                {
> >
> >                        internal override DocConsumer
> > GetChain(DocumentsWriter documentsWriter)
> > -                       {
> > +            {
> >                                /*
> >                                This is the current indexing chain:
> >
> > @@ -145,7 +146,8 @@ namespace Lucene.Net.Index
> >                        freeLevel = (long)
> > (IndexWriter.DEFAULT_RAM_BUFFER_SIZE_MB * 1024 * 1024 * 0.95);
> >                        maxBufferedDocs =
> > IndexWriter.DEFAULT_MAX_BUFFERED_DOCS;
> >                        skipDocWriter = new SkipDocWriter();
> > -                       byteBlockAllocator = new
> ByteBlockAllocator(this);
> > +                       byteBlockAllocator = new
> ByteBlockAllocator(this,
> > BYTE_BLOCK_SIZE);
> > +            perDocAllocator = new ByteBlockAllocator(this,
> > PER_DOC_BLOCK_SIZE);
> >                        waitQueue = new WaitQueue(this);
> >                }
> >
> > @@ -220,6 +222,59 @@ namespace Lucene.Net.Index
> >                        }
> >                }
> >
> > +        //Create and return a new DocWriterBuffer.
> > +        internal PerDocBuffer newPerDocBuffer()
> > +        {
> > +            return new PerDocBuffer(perDocAllocator);
> > +        }
> > +
> > +        /// <summary>RAMFile buffer for DocWriters.</summary>
> > +        internal class PerDocBuffer:RAMFile
> > +        {
> > +            public PerDocBuffer(ByteBlockAllocator perDocAllocator)
> > +                       {
> > +                               InitBlock(perDocAllocator);
> > +                       }
> > +            private void InitBlock(ByteBlockAllocator perDocAllocator)
> > +                       {
> > +                this.perDocAllocator = perDocAllocator;
> > +                       }
> > +            private ByteBlockAllocator perDocAllocator;
> > +
> > +            /// <summary>
> > +            ///  Allocate bytes used from shared pool.
> > +            /// </summary>
> > +            /// <param name="size">Size of new buffer.  Fixed at <see
> > cref="PER_DOC_BLOCK_SIZE"/>.</param>
> > +            /// <returns></returns>
> > +            protected internal byte[] newBuffer(int size)
> > +            {
> > +                System.Diagnostics.Debug.Assert(size ==
> > PER_DOC_BLOCK_SIZE);
> > +                return perDocAllocator.GetByteBlock(false);
> > +            }
> > +
> > +            //Recycle the bytes used.
> > +            internal void recycle()
> > +            {
> > +                lock(this)
> > +                {
> > +                    if (buffers.Count > 0)
> > +                    {
> > +                        SetLength(0);
> > +
> > +                        // Recycle the blocks
> > +                        int blockCount = buffers.Count;
> > +                        byte[][] blocks = new byte[blockCount][];
> > +                        buffers.CopyTo(blocks);
> > +                        perDocAllocator.RecycleByteBlocks(blocks, 0,
> > blockCount);
> > +                        buffers.Clear();
> > +                        sizeInBytes = 0;
> > +
> > +                        System.Diagnostics.Debug.Assert(NumBuffers() ==
> > 0);
> > +                    }
> > +                }
> > +            }
> > +        }
> > +
> >                /// <summary> The IndexingChain must define the {@link
> > #GetChain(DocumentsWriter)} method
> >                /// which returns the DocConsumer that the
> DocumentsWriter
> > calls to process the
> >                /// documents.
> > @@ -486,7 +541,7 @@ namespace Lucene.Net.Index
> >                internal void  Message(System.String message)
> >                {
> >                        if (infoStream != null)
> > -                               writer.Message("DW: " + message);
> > +                writer.Message("DW: " + message);
> >                }
> >
> >         internal System.Collections.Generic.IList<string> openFiles =
> new
> > System.Collections.Generic.List<string>();
> > @@ -1530,13 +1585,14 @@ namespace Lucene.Net.Index
> >
> >                internal class ByteBlockAllocator:ByteBlockPool.Allocator
> >                {
> > -                       public ByteBlockAllocator(DocumentsWriter
> > enclosingInstance)
> > +            public ByteBlockAllocator(DocumentsWriter
> enclosingInstance,
> > int blockSize)
> >                        {
> > -                               InitBlock(enclosingInstance);
> > +                               InitBlock(enclosingInstance, blockSize);
> >                        }
> > -                       private void  InitBlock(DocumentsWriter
> > enclosingInstance)
> > +            private void InitBlock(DocumentsWriter enclosingInstance,
> int
> > blockSize)
> >                        {
> >                                this.enclosingInstance =
> enclosingInstance;
> > +                this.blockSize = blockSize;
> >                        }
> >                        private DocumentsWriter enclosingInstance;
> >                        public DocumentsWriter Enclosing_Instance
> > @@ -1545,11 +1601,12 @@ namespace Lucene.Net.Index
> >                                {
> >                                        return enclosingInstance;
> >                                }
> > -
> >                        }
> > -
> > +
> > +            int blockSize;
> > +
> >                        internal System.Collections.ArrayList
> freeByteBlocks
> > = new System.Collections.ArrayList();
> > -
> > +
> >                        /* Allocate another byte[] from the shared pool
> */
> >                        public /*internal*/ override byte[]
> > GetByteBlock(bool trackAllocations)
> >                        {
> > @@ -1565,8 +1622,8 @@ namespace Lucene.Net.Index
> >                                                // things that don't
> track
> > allocations (term
> >                                                // vectors) and things
> that
> > do (freq/prox
> >                                                // postings).
> > -
> > Enclosing_Instance.numBytesAlloc +=
> > Lucene.Net.Index.DocumentsWriter.BYTE_BLOCK_SIZE;
> > -                                               b = new
> > byte[Lucene.Net.Index.DocumentsWriter.BYTE_BLOCK_SIZE];
> > +                        Enclosing_Instance.numBytesAlloc += blockSize;
> > +                        b = new byte[blockSize];
> >                                        }
> >                                        else
> >                                        {
> > @@ -1576,7 +1633,7 @@ namespace Lucene.Net.Index
> >                                                b = (byte[]) tempObject;
> >                                        }
> >                                        if (trackAllocations)
> > -
> > Enclosing_Instance.numBytesUsed +=
> > Lucene.Net.Index.DocumentsWriter.BYTE_BLOCK_SIZE;
> > +                        Enclosing_Instance.numBytesUsed += blockSize;
> >
> >  System.Diagnostics.Debug.Assert(Enclosing_Instance.numBytesUsed <=
> > Enclosing_Instance.numBytesAlloc);
> >                                        return b;
> >                                }
> > @@ -1656,13 +1713,20 @@ namespace Lucene.Net.Index
> >                {
> >                        lock (this)
> >                        {
> > -                               for (int i = start; i < end; i++)
> > -                                       freeIntBlocks.Add(blocks[i]);
> > +                for (int i = start; i < end; i++)
> > +                {
> > +                    freeIntBlocks.Add(blocks[i]);
> > +                }
> >                        }
> >                }
> >
> >                internal ByteBlockAllocator byteBlockAllocator;
> > -
> > +
> > +        internal const int PER_DOC_BLOCK_SIZE = 1024;
> > +
> > +        internal ByteBlockAllocator perDocAllocator;
> > +
> > +
> >                /* Initial chunk size of the shared char[] blocks used to
> >                store term text */
> >                internal const int CHAR_BLOCK_SHIFT = 14;
> > @@ -1708,7 +1772,9 @@ namespace Lucene.Net.Index
> >                        lock (this)
> >                        {
> >                                for (int i = 0; i < numBlocks; i++)
> > +                               {
> >                                        freeCharBlocks.Add(blocks[i]);
> > +                               }
> >                        }
> >                }
> >
> > @@ -1716,18 +1782,20 @@ namespace Lucene.Net.Index
> >                {
> >                        return System.String.Format(nf, "{0:f}", new
> > System.Object[] { (v / 1024F / 1024F) });
> >                }
> > -
> > -               /* We have three pools of RAM: Postings, byte blocks
> > -               * (holds freq/prox posting data) and char blocks (holds
> > -               * characters in the term).  Different docs require
> > -               * varying amount of storage from these three classes.
> > -               * For example, docs with many unique single-occurrence
> > -               * short terms will use up the Postings RAM and hardly
> any
> > -               * of the other two.  Whereas docs with very large terms
> > -               * will use alot of char blocks RAM and relatively less
> of
> > -               * the other two.  This method just frees allocations
> from
> > -               * the pools once we are over-budget, which balances the
> > -               * pools to match the current docs. */
> > +
> > +        /* We have four pools of RAM: Postings, byte blocks
> > +         * (holds freq/prox posting data), char blocks (holds
> > +         * characters in the term) and per-doc buffers (stored
> fields/term
> > vectors).
> > +         * Different docs require varying amount of storage from
> > +         * these four classes.
> > +         *
> > +         * For example, docs with many unique single-occurrence
> > +         * short terms will use up the Postings RAM and hardly any
> > +         * of the other two.  Whereas docs with very large terms
> > +         * will use alot of char blocks RAM and relatively less of
> > +         * the other two.  This method just frees allocations from
> > +         * the pools once we are over-budget, which balances the
> > +         * pools to match the current docs. */
> >                internal void  BalanceRAM()
> >                {
> >
> > @@ -1740,7 +1808,14 @@ namespace Lucene.Net.Index
> >                        {
> >
> >                                if (infoStream != null)
> > -                                       Message("  RAM: now balance
> > allocations: usedMB=" + ToMB(numBytesUsed) + " vs trigger=" +
> > ToMB(flushTrigger) + " allocMB=" + ToMB(numBytesAlloc) + " deletesMB=" +
> > ToMB(deletesRAMUsed) + " vs trigger=" + ToMB(freeTrigger) + "
> > byteBlockFree=" + ToMB(byteBlockAllocator.freeByteBlocks.Count *
> > BYTE_BLOCK_SIZE) + " charBlockFree=" + ToMB(freeCharBlocks.Count *
> > CHAR_BLOCK_SIZE * CHAR_NUM_BYTE));
> > +                    Message("  RAM: now balance allocations: usedMB=" +
> > ToMB(numBytesUsed) +
> > +                        " vs trigger=" + ToMB(flushTrigger) +
> > +                        " allocMB=" + ToMB(numBytesAlloc) +
> > +                        " deletesMB=" + ToMB(deletesRAMUsed) +
> > +                        " vs trigger=" + ToMB(freeTrigger) +
> > +                        " byteBlockFree=" +
> > ToMB(byteBlockAllocator.freeByteBlocks.Count * BYTE_BLOCK_SIZE) +
> > +                        " perDocFree=" +
> > ToMB(perDocAllocator.freeByteBlocks.Count * PER_DOC_BLOCK_SIZE) +
> > +                        " charBlockFree=" + ToMB(freeCharBlocks.Count *
> > CHAR_BLOCK_SIZE * CHAR_NUM_BYTE));
> >
> >                                long startBytesAlloc = numBytesAlloc +
> > deletesRAMUsed;
> >
> > @@ -1757,7 +1832,11 @@ namespace Lucene.Net.Index
> >
> >                                        lock (this)
> >                                        {
> > -                                               if (0 ==
> > byteBlockAllocator.freeByteBlocks.Count && 0 == freeCharBlocks.Count &&
> 0 ==
> > freeIntBlocks.Count && !any)
> > +                        if (0 == perDocAllocator.freeByteBlocks.Count
> > +                            && 0 ==
> > byteBlockAllocator.freeByteBlocks.Count
> > +                            && 0 == freeCharBlocks.Count
> > +                            && 0 == freeIntBlocks.Count
> > +                            && !any)
> >                                                {
> >                                                        // Nothing else
> to
> > free -- must flush now.
> >                                                        bufferIsFull =
> > numBytesUsed + deletesRAMUsed > flushTrigger;
> > @@ -1772,26 +1851,41 @@ namespace Lucene.Net.Index
> >                                                        break;
> >                                                }
> >
> > -                                               if ((0 == iter % 4) &&
> > byteBlockAllocator.freeByteBlocks.Count > 0)
> > +                                               if ((0 == iter % 5) &&
> > byteBlockAllocator.freeByteBlocks.Count > 0)
> >                                                {
> >
> >
> byteBlockAllocator.freeByteBlocks.RemoveAt(byteBlockAllocator.freeByteBloc
> ks.Count
> > - 1);
> >                                                        numBytesAlloc -=
> > BYTE_BLOCK_SIZE;
> >                                                }
> >
> > -                                               if ((1 == iter % 4) &&
> > freeCharBlocks.Count > 0)
> > +                                               if ((1 == iter % 5) &&
> > freeCharBlocks.Count > 0)
> >                                                {
> >
> >  freeCharBlocks.RemoveAt(freeCharBlocks.Count - 1);
> >                                                        numBytesAlloc -=
> > CHAR_BLOCK_SIZE * CHAR_NUM_BYTE;
> >                                                }
> >
> > -                                               if ((2 == iter % 4) &&
> > freeIntBlocks.Count > 0)
> > +                                               if ((2 == iter % 5) &&
> > freeIntBlocks.Count > 0)
> >                                                {
> >
> >  freeIntBlocks.RemoveAt(freeIntBlocks.Count - 1);
> >                                                        numBytesAlloc -=
> > INT_BLOCK_SIZE * INT_NUM_BYTE;
> >                                                }
> > +
> > +                        if ((3 == iter % 5) &&
> > perDocAllocator.freeByteBlocks.Count > 0)
> > +                        {
> > +                            // Remove upwards of 32 blocks (each block
> is
> > 1K)
> > +                            for (int i = 0; i < 32; ++i)
> > +                            {
> > +                                perDocAllocator.freeByteBlocks.RemoveAt
> > (perDocAllocator.freeByteBlocks.Count - 1);
> > +                                numBytesAlloc -= PER_DOC_BLOCK_SIZE;
> > +                                if
> (perDocAllocator.freeByteBlocks.Count
> > == 0)
> > +                                {
> > +                                    break;
> > +                                }
> > +                            }
> > +                        }
> > +
> >                                        }
> >
> > -                                       if ((3 == iter % 4) && any)
> > +                                       if ((4 == iter % 5) && any)
> >                                        // Ask consumer to free any
> recycled
> > state
> >                                                any = consumer.FreeRAM();
> >
> >
> > Modified: incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/StoredFieldsWriter.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Index/StoredFieldsWriter.cs?rev=1080881&r1=1080880&r2=1080881&view=diff
> >
> >
> ==========================================================================
> ====
> > --- incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/StoredFieldsWriter.cs
> (original)
> > +++ incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/StoredFieldsWriter.cs Sat Mar
> 12
> > 07:20:44 2011
> > @@ -222,6 +222,8 @@ namespace Lucene.Net.Index
> >                        private void  InitBlock(StoredFieldsWriter
> > enclosingInstance)
> >                        {
> >                                this.enclosingInstance =
> enclosingInstance;
> > +                buffer = enclosingInstance.docWriter.newPerDocBuffer();
> > +                fdt = new RAMOutputStream(buffer);
> >                        }
> >                        private StoredFieldsWriter enclosingInstance;
> >                        public StoredFieldsWriter Enclosing_Instance
> > @@ -233,14 +235,14 @@ namespace Lucene.Net.Index
> >
> >                        }
> >
> > -                       // TODO: use something more memory efficient;
> for
> > small
> > -                       // docs the 1024 buffer size of RAMOutputStream
> > wastes alot
> > -                       internal RAMOutputStream fdt = new
> > RAMOutputStream();
> > +                       internal DocumentsWriter.PerDocBuffer buffer;
> > +                       internal RAMOutputStream fdt;
> >                        internal int numStoredFields;
> >
> >                        internal void  Reset()
> >                        {
> >                                fdt.Reset();
> > +                               buffer.recycle();
> >                                numStoredFields = 0;
> >                        }
> >
> > @@ -252,7 +254,7 @@ namespace Lucene.Net.Index
> >
> >                        public override long SizeInBytes()
> >                        {
> > -                               return fdt.SizeInBytes();
> > +                               return buffer.GetSizeInBytes();
> >                        }
> >
> >                        public override void  Finish()
> >
> > Modified: incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriter.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Index/TermVectorsTermsWriter.cs?rev=1080881&r1=1080880&r2=1080881&view=d
> iff
> >
> >
> ==========================================================================
> ====
> > --- incubator/
> >
> lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriter.cs(origina
> l)
> > +++ incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriter.cs Sat
> Mar
> > 12 07:20:44 2011
> > @@ -231,8 +231,8 @@ namespace Lucene.Net.Index
> >                                                tvd.WriteVLong(pos -
> > lastPos);
> >                                                lastPos = pos;
> >                                        }
> > -                                       perDoc.tvf.WriteTo(tvf);
> > -                                       perDoc.tvf.Reset();
> > +                                       perDoc.perDocTvf.WriteTo(tvf);
> > +                                       perDoc.perDocTvf.Reset();
> >                                        perDoc.numVectorFields = 0;
> >                                }
> >
> > @@ -308,6 +308,8 @@ namespace Lucene.Net.Index
> >                        private void  InitBlock(TermVectorsTermsWriter
> > enclosingInstance)
> >                        {
> >                                this.enclosingInstance =
> enclosingInstance;
> > +                this.buffer =
> > enclosingInstance.docWriter.newPerDocBuffer();
> > +                this.perDocTvf = new RAMOutputStream(this.buffer);
> >                        }
> >                        private TermVectorsTermsWriter enclosingInstance;
> >                        public TermVectorsTermsWriter Enclosing_Instance
> > @@ -319,9 +321,9 @@ namespace Lucene.Net.Index
> >
> >                        }
> >
> > -                       // TODO: use something more memory efficient;
> for
> > small
> > -                       // docs the 1024 buffer size of RAMOutputStream
> > wastes alot
> > -                       internal RAMOutputStream tvf = new
> > RAMOutputStream();
> > +                       internal DocumentsWriter.PerDocBuffer buffer;
> > +                       internal RAMOutputStream perDocTvf;
> > +
> >                        internal int numVectorFields;
> >
> >                        internal int[] fieldNumbers = new int[1];
> > @@ -329,7 +331,8 @@ namespace Lucene.Net.Index
> >
> >                        internal void  Reset()
> >                        {
> > -                               tvf.Reset();
> > +                               perDocTvf.Reset();
> > +                               buffer.recycle();
> >                                numVectorFields = 0;
> >                        }
> >
> > @@ -347,13 +350,13 @@ namespace Lucene.Net.Index
> >                                        fieldPointers =
> > ArrayUtil.Grow(fieldPointers);
> >                                }
> >                                fieldNumbers[numVectorFields] =
> fieldNumber;
> > -                               fieldPointers[numVectorFields] =
> > tvf.GetFilePointer();
> > +                               fieldPointers[numVectorFields] =
> > perDocTvf.GetFilePointer();
> >                                numVectorFields++;
> >                        }
> >
> >                        public override long SizeInBytes()
> >                        {
> > -                               return tvf.SizeInBytes();
> > +                               return buffer.GetSizeInBytes();
> >                        }
> >
> >                        public override void  Finish()
> >
> > Modified: incubator/
> >
> lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriterPerField.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Index/TermVectorsTermsWriterPerField.cs?rev=1080881&r1=1080880&r2=108088
> 1&view=diff
> >
> >
> ==========================================================================
> ====
> > --- incubator/
> >
> lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriterPerField.cs
> (original)
> > +++ incubator/
> >
> lucene.net/trunk/C#/src/Lucene.Net/Index/TermVectorsTermsWriterPerField.cs
> Sat Mar 12 07:20:44 2011
> > @@ -81,8 +81,8 @@ namespace Lucene.Net.Index
> >                                        perThread.doc =
> > termsWriter.GetPerDoc();
> >                                        perThread.doc.docID =
> > docState.docID;
> >
> >  System.Diagnostics.Debug.Assert(perThread.doc.numVectorFields == 0);
> > -
> System.Diagnostics.Debug.Assert(0
> > == perThread.doc.tvf.Length());
> > -
> System.Diagnostics.Debug.Assert(0
> > == perThread.doc.tvf.GetFilePointer());
> > +
> System.Diagnostics.Debug.Assert(0
> > == perThread.doc.perDocTvf.Length());
> > +                    System.Diagnostics.Debug.Assert(0 ==
> > perThread.doc.perDocTvf.GetFilePointer());
> >                                }
> >                                else
> >                                {
> > @@ -125,8 +125,8 @@ namespace Lucene.Net.Index
> >
> >                        if (numPostings > maxNumPostings)
> >                                maxNumPostings = numPostings;
> > -
> > -                       IndexOutput tvf = perThread.doc.tvf;
> > +
> > +            IndexOutput tvf = perThread.doc.perDocTvf;
> >
> >                        // This is called once, after inverting all
> > occurences
> >                        // of a given field in the doc.  At this point we
> > flush
> >
> > Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMFile.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Store/RAMFile.cs?rev=1080881&r1=1080880&r2=1080881&view=diff
> >
> >
> ==========================================================================
> ====
> > ---
> incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMFile.cs(original)
> > +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMFile.cs Sat
> Mar
> > 12 07:20:44 2011
> > @@ -19,23 +19,24 @@ using System;
> >
> >  namespace Lucene.Net.Store
> >  {
> > -
> > +
> > +    /** For Lucene internal use */
> >        [Serializable]
> >        public class RAMFile
> >        {
> >
> >                private const long serialVersionUID = 1L;
> >
> > -               private System.Collections.ArrayList buffers = new
> > System.Collections.ArrayList();
> > +               protected System.Collections.ArrayList buffers = new
> > System.Collections.ArrayList();
> >                internal long length;
> >                internal RAMDirectory directory;
> > -               internal long sizeInBytes;
> > +               protected internal long sizeInBytes;
> >
> >                // This is publicly modifiable via Directory.touchFile(),
> so
> > direct access not supported
> >                private long lastModified = (DateTime.Now.Ticks /
> > TimeSpan.TicksPerMillisecond);
> >
> >                // File used as buffer, in no RAMDirectory
> > -               public /*internal*/ RAMFile()
> > +        protected internal RAMFile()
> >                {
> >                }
> >
> > @@ -45,15 +46,15 @@ namespace Lucene.Net.Store
> >                }
> >
> >                // For non-stream access from thread that might be
> > concurrent with writing
> > -               public /*internal*/ virtual long GetLength()
> > +               public virtual long GetLength()
> >                {
> >                        lock (this)
> >                        {
> >                                return length;
> >                        }
> >                }
> > -
> > -               public /*internal*/ virtual void  SetLength(long length)
> > +
> > +        public /*internal*/ virtual void SetLength(long length)
> >                {
> >                        lock (this)
> >                        {
> > @@ -62,7 +63,7 @@ namespace Lucene.Net.Store
> >                }
> >
> >                // For non-stream access from thread that might be
> > concurrent with writing
> > -               internal virtual long GetLastModified()
> > +               public virtual long GetLastModified()
> >                {
> >                        lock (this)
> >                        {
> > @@ -70,7 +71,7 @@ namespace Lucene.Net.Store
> >                        }
> >                }
> >
> > -               internal virtual void  SetLastModified(long
> lastModified)
> > +               protected internal virtual void  SetLastModified(long
> > lastModified)
> >                {
> >                        lock (this)
> >                        {
> > @@ -78,7 +79,7 @@ namespace Lucene.Net.Store
> >                        }
> >                }
> >
> > -               internal byte[] AddBuffer(int size)
> > +               protected internal byte[] AddBuffer(int size)
> >                {
> >             byte[] buffer = NewBuffer(size);
> >             lock (this)
> > @@ -97,16 +98,16 @@ namespace Lucene.Net.Store
> >
> >             return buffer;
> >                }
> > -
> > -               public /*internal*/ byte[] GetBuffer(int index)
> > +
> > +        public /*internal*/ byte[] GetBuffer(int index)
> >                {
> >                        lock (this)
> >                        {
> >                                return (byte[]) buffers[index];
> >                        }
> >                }
> > -
> > -               public /*internal*/ int NumBuffers()
> > +
> > +        public /*internal*/ int NumBuffers()
> >                {
> >                        lock (this)
> >                        {
> > @@ -127,14 +128,11 @@ namespace Lucene.Net.Store
> >                }
> >
> >
> > -               public /*internal*/ virtual long GetSizeInBytes()
> > +               public virtual long GetSizeInBytes()
> >                {
> >             lock (this)
> >             {
> > -                lock (directory)
> > -                {
> > -                    return sizeInBytes;
> > -                }
> > +                return sizeInBytes;
> >             }
> >                }
> >
> >
> > Modified: incubator/
> > lucene.net/trunk/C#/src/Lucene.Net/Store/RAMOutputStream.cs
> > URL:
> >
> http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Ne
> t/Store/RAMOutputStream.cs?rev=1080881&r1=1080880&r2=1080881&view=diff
> >
> >
> ==========================================================================
> ====
> > ---
> incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMOutputStream.cs(orig
> inal)
> > +++
> incubator/lucene.net/trunk/C#/src/Lucene.Net/Store/RAMOutputStream.csSat
> Mar 12 07:20:44 2011
> > @@ -23,7 +23,7 @@ namespace Lucene.Net.Store
> >        /// <summary> A memory-resident {@link IndexOutput}
> implementation.
> >        ///
> >        /// </summary>
> > -       /// <version>  $Id: RAMOutputStream.java 691694 2008-09-03
> > 17:34:29Z mikemccand $
> > +    /// <version>  $Id: RAMOutputStream.java 941125 2010-05-05
> 00:44:15Z
> > mikemccand $
> >        /// </version>
> >
> >        public class RAMOutputStream:IndexOutput
> > @@ -44,7 +44,7 @@ namespace Lucene.Net.Store
> >                {
> >                }
> >
> > -               public /*internal*/ RAMOutputStream(RAMFile f)
> > +               public RAMOutputStream(RAMFile f)
> >                {
> >                        file = f;
> >
> > @@ -75,19 +75,14 @@ namespace Lucene.Net.Store
> >                        }
> >                }
> >
> > -               /// <summary>Resets this to an empty buffer. </summary>
> > +               /// <summary>Resets this to an empty file. </summary>
> >                public virtual void  Reset()
> > -               {
> > -                       try
> > -                       {
> > -                               Seek(0);
> > -                       }
> > -                       catch (System.IO.IOException e)
> > -                       {
> > -                               // should never happen
> > -                               throw new
> > System.SystemException(e.ToString());
> > -                       }
> > -
> > +        {
> > +            currentBuffer = null;
> > +            currentBufferIndex = -1;
> > +            bufferPosition = 0;
> > +            bufferStart = 0;
> > +            bufferLength = 0;
> >                        file.SetLength(0);
> >                }
> >
> >
> >
> >


Re: [Lucene.Net] Procedure for Commiting

Posted by Stefan Bodewig <bo...@apache.org>.
On 2011-03-15, Scott Lombard wrote:

> The only problem I found with the JIRA commit log is that if you also
> reference a Java Lucene issue, the commit will be referenced in their JIRA
> as well.  I have been using a space instead of a dash; for example,
> LUCENE-222 would become LUCENE 222.

Another problem pops up if you mistype the ticket number and the commit
is associated with a different ticket.

You can change the log message when you realize this has happened
<http://subversion.apache.org/faq.html#change-log-msg> but I'm not sure
whether this helps with JIRA.
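
For the record, the svn side of that fix would be something along these
lines (the revision number is just the one from this thread, the repository
URL is illustrative, and the server has to allow revprop changes via its
pre-revprop-change hook):

    svn propset --revprop -r 1080881 svn:log \
        "[LUCENENET-399] corrected log message goes here" \
        https://svn.apache.org/repos/asf/incubator/lucene.net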

Stefan

RE: [Lucene.Net] Procedure for Commiting

Posted by Scott Lombard <lo...@gmail.com>.
The only problem I found with the JIRA commit log is that if you also
reference a Java Lucene issue, the commit will be referenced in their JIRA as
well.  I have been using a space instead of a dash; for example, LUCENE-222
would become LUCENE 222.
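
To make that concrete, the log message on r1080881 follows this style, with
the Java issue written with a space so that only our JIRA links the commit:

    [LUCENENET-399] (trunk) 2.9.3 - change LUCENE 2283: use shared byte[]
    pool to buffer pending stored fields & term vectors during indexing;
    fixes excessive memory usage for mixed tiny & big docs with many threads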

Scott

> -----Original Message-----
> From: Troy Howard [mailto:thoward37@gmail.com]
> Sent: Monday, March 14, 2011 5:16 PM
> To: lucene-net-dev@lucene.apache.org
> Subject: Re: [Lucene.Net] Procedure for Commiting
> 
> I like the CTR workflow, but ensure that all commits have an
> associated JIRA item in the commit log so that it's tracked in JIRA
> correctly via the commits tab.
> 
> I also recently asked Infra to set us up with ReviewBoard (see
> reviews.apache.org)... However this tool is focused on RTC style
> workflow. We could use a combination depending on the situation.
> 
> Thanks,
> Troy
> 
> 
> On Mon, Mar 14, 2011 at 1:24 PM, Digy <di...@gmail.com> wrote:
> > OK, you are right. I didn't know this tab. I must have missed this
> feature
> > with the new version of JIRA(even I don't recall surely whether it was
> > available in previous version or not :) ).
> >
> > DIGY
> >
> > -----Original Message-----
> > From: Lombard, Scott [mailto:slombard@KINGINDUSTRIES.COM]
> > Sent: Monday, March 14, 2011 8:44 PM
> > To: lucene-net-dev@lucene.apache.org
> > Subject: RE: [Lucene.Net] Procedure for Commiting
> >
> > Yes it does.  Under the issue you can find a tab Subversion Commits.
> >
> > Scott
> >
> >
> >> -----Original Message-----
> >> From: Michael Herndon [mailto:mherndon@wickedsoftware.net]
> >> Sent: Monday, March 14, 2011 2:38 PM
> >> To: lucene-net-dev@lucene.apache.org
> >> Cc: Digy
> >> Subject: Re: [Lucene.Net] Procedure for Commiting
> >>
> >> Is Jira not currently setup to track the commit messages with the
> >> referenced
> >> ticket number in them?
> >>
> >> On Mon, Mar 14, 2011 at 12:59 PM, Digy <di...@gmail.com> wrote:
> >>
> >> > I don't know what others think, but I find it more trackable to see
> the
> >> > issues and related patches in one place(JIRA)
> >> > (especially, while trying to understand what was done for a specific
> >> issue;
> >> > after many months)
> >> >
> >> > DIGY
> >> >
> >> >
> >> > -----Original Message-----
> >> > From: Stefan Bodewig [mailto:bodewig@apache.org]
> >> > Sent: Monday, March 14, 2011 5:41 PM
> >> > To: lucene-net-dev@lucene.apache.org
> >> > Subject: Re: [Lucene.Net] Procedure for Commiting
> >> >
> >> > On 2011-03-14, Scott Lombard wrote:
> >> >
> >> > > I wanted to get a final agreement on how we want to handle commits
> to
> >> the
> >> > > repository.  There have been discussions in a couple of different
> >> threads
> >> > > about this topic.  I know patches, branches and just go for it has
> >> been
> >> > > discussed and different people have different ideas.  I just wanted
> to
> >> > know
> >> > > what the group thinks is the way to handle commits in our project.
> >> >
> >> > Inside the ASF we have varying ideas about how to handle it.
> >> >
> >> > Many if not most projects use commit-then-review (CTR[1]) as their
> main
> >> > model where you just commit and your peers review it later (that's
> why
> >> > the commits mailing list exists).  This is probably the quickest way
> to
> >> > move forward but may lead to slipped-through problems.
> >> >
> >> > At the other extreme there are projects that require JIRA items for
> each
> >> > and every commit with automated pre-build CI checks that reject
> patches
> >> > attached to JIRA tickets if they break the
> >> > build/tests/coding-standards/whatever[2].  This is probably the
> "safe"
> >> > way but may keep people from contributing because the effort to get a
> >> > patch in seems too big.
> >> >
> >> > Other projects live in some sort of middle ground where some branch
> is
> >> > open for CTR and other branches (the "stable" branch) requires
> >> > review-then-commit (RTC[3]).  Many projects have written down
> policies
> >> > like Hadoop which I've already cited or Solr[4].
> >> >
> >> > I guess what I'm trying to say is there is no policy that would force
> >> > you to do it one way or the other, it is your decision.
> >> >
> >> > Stefan
> >> >
> >> > [1] http://www.apache.org/foundation/glossary.html#CommitThenReview
> >> >
> >> > [2] http://wiki.apache.org/hadoop/HowToContribute
> >> >
> >> > [3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit
> >> >
> >> > [4] http://wiki.apache.org/solr/CommitPolicy
> >> >
> >> >
> >
> >


Re: [Lucene.Net] Procedure for Commiting

Posted by Troy Howard <th...@gmail.com>.
I like the CTR workflow, but we should ensure that all commits have an
associated JIRA item in the commit log so that they're tracked in JIRA
correctly via the commits tab.

I also recently asked Infra to set us up with ReviewBoard (see
reviews.apache.org)... However, this tool is focused on an RTC-style
workflow. We could use a combination depending on the situation.

Thanks,
Troy


On Mon, Mar 14, 2011 at 1:24 PM, Digy <di...@gmail.com> wrote:
> OK, you are right. I didn't know this tab. I must have missed this feature
> with the new version of JIRA(even I don't recall surely whether it was
> available in previous version or not :) ).
>
> DIGY
>
> -----Original Message-----
> From: Lombard, Scott [mailto:slombard@KINGINDUSTRIES.COM]
> Sent: Monday, March 14, 2011 8:44 PM
> To: lucene-net-dev@lucene.apache.org
> Subject: RE: [Lucene.Net] Procedure for Commiting
>
> Yes it does.  Under the issue you can find a tab Subversion Commits.
>
> Scott
>
>
>> -----Original Message-----
>> From: Michael Herndon [mailto:mherndon@wickedsoftware.net]
>> Sent: Monday, March 14, 2011 2:38 PM
>> To: lucene-net-dev@lucene.apache.org
>> Cc: Digy
>> Subject: Re: [Lucene.Net] Procedure for Commiting
>>
>> Is Jira not currently setup to track the commit messages with the
>> referenced
>> ticket number in them?
>>
>> On Mon, Mar 14, 2011 at 12:59 PM, Digy <di...@gmail.com> wrote:
>>
>> > I don't know what others think, but I find it more trackable to see the
>> > issues and related patches in one place(JIRA)
>> > (especially, while trying to understand what was done for a specific
>> issue;
>> > after many months)
>> >
>> > DIGY
>> >
>> >
>> > -----Original Message-----
>> > From: Stefan Bodewig [mailto:bodewig@apache.org]
>> > Sent: Monday, March 14, 2011 5:41 PM
>> > To: lucene-net-dev@lucene.apache.org
>> > Subject: Re: [Lucene.Net] Procedure for Commiting
>> >
>> > On 2011-03-14, Scott Lombard wrote:
>> >
>> > > I wanted to get a final agreement on how we want to handle commits to
>> the
>> > > repository.  There have been discussions in a couple of different
>> threads
>> > > about this topic.  I know patches, branches and just go for it has
>> been
>> > > discussed and different people have different ideas.  I just wanted to
>> > know
>> > > what the group thinks is the way to handle commits in our project.
>> >
>> > Inside the ASF we have varying ideas about how to handle it.
>> >
>> > Many if not most projects use commit-then-review (CTR[1]) as their main
>> > model where you just commit and your peers review it later (that's why
>> > the commits mailing list exists).  This is probably the quickest way to
>> > move forward but may lead to slipped-through problems.
>> >
>> > At the other extreme there are projects that require JIRA items for each
>> > and every commit with automated pre-build CI checks that reject patches
>> > attached to JIRA tickets if they break the
>> > build/tests/coding-standards/whatever[2].  This is probably the "safe"
>> > way but may keep people from contributing because the effort to get a
>> > patch in seems too big.
>> >
>> > Other projects live in some sort of middle ground where some branch is
>> > open for CTR and other branches (the "stable" branch) requires
>> > review-then-commit (RTC[3]).  Many projects have written down policies
>> > like Hadoop which I've already cited or Solr[4].
>> >
>> > I guess what I'm trying to say is there is no policy that would force
>> > you to do it one way or the other, it is your decision.
>> >
>> > Stefan
>> >
>> > [1] http://www.apache.org/foundation/glossary.html#CommitThenReview
>> >
>> > [2] http://wiki.apache.org/hadoop/HowToContribute
>> >
>> > [3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit
>> >
>> > [4] http://wiki.apache.org/solr/CommitPolicy
>> >
>> >
>
>

RE: [Lucene.Net] Procedure for Commiting

Posted by Digy <di...@gmail.com>.
OK, you are right. I didn't know about this tab. I must have missed this
feature in the new version of JIRA (I don't even recall for sure whether it
was available in the previous version or not :) ).

DIGY

-----Original Message-----
From: Lombard, Scott [mailto:slombard@KINGINDUSTRIES.COM] 
Sent: Monday, March 14, 2011 8:44 PM
To: lucene-net-dev@lucene.apache.org
Subject: RE: [Lucene.Net] Procedure for Commiting

Yes it does.  Under the issue you can find a tab Subversion Commits.

Scott


> -----Original Message-----
> From: Michael Herndon [mailto:mherndon@wickedsoftware.net]
> Sent: Monday, March 14, 2011 2:38 PM
> To: lucene-net-dev@lucene.apache.org
> Cc: Digy
> Subject: Re: [Lucene.Net] Procedure for Commiting
>
> Is Jira not currently setup to track the commit messages with the
> referenced
> ticket number in them?
>
> On Mon, Mar 14, 2011 at 12:59 PM, Digy <di...@gmail.com> wrote:
>
> > I don't know what others think, but I find it more trackable to see the
> > issues and related patches in one place(JIRA)
> > (especially, while trying to understand what was done for a specific
> issue;
> > after many months)
> >
> > DIGY
> >
> >
> > -----Original Message-----
> > From: Stefan Bodewig [mailto:bodewig@apache.org]
> > Sent: Monday, March 14, 2011 5:41 PM
> > To: lucene-net-dev@lucene.apache.org
> > Subject: Re: [Lucene.Net] Procedure for Commiting
> >
> > On 2011-03-14, Scott Lombard wrote:
> >
> > > I wanted to get a final agreement on how we want to handle commits to
> the
> > > repository.  There have been discussions in a couple of different
> threads
> > > about this topic.  I know patches, branches and just go for it has
> been
> > > discussed and different people have different ideas.  I just wanted to
> > know
> > > what the group thinks is the way to handle commits in our project.
> >
> > Inside the ASF we have varying ideas about how to handle it.
> >
> > Many if not most projects use commit-then-review (CTR[1]) as their main
> > model where you just commit and your peers review it later (that's why
> > the commits mailing list exists).  This is probably the quickest way to
> > move forward but may lead to slipped-through problems.
> >
> > At the other extreme there are projects that require JIRA items for each
> > and every commit with automated pre-build CI checks that reject patches
> > attached to JIRA tickets if they break the
> > build/tests/coding-standards/whatever[2].  This is probably the "safe"
> > way but may keep people from contributing because the effort to get a
> > patch in seems too big.
> >
> > Other projects live in some sort of middle ground where some branch is
> > open for CTR and other branches (the "stable" branch) requires
> > review-then-commit (RTC[3]).  Many projects have written down policies
> > like Hadoop which I've already cited or Solr[4].
> >
> > I guess what I'm trying to say is there is no policy that would force
> > you to do it one way or the other, it is your decision.
> >
> > Stefan
> >
> > [1] http://www.apache.org/foundation/glossary.html#CommitThenReview
> >
> > [2] http://wiki.apache.org/hadoop/HowToContribute
> >
> > [3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit
> >
> > [4] http://wiki.apache.org/solr/CommitPolicy
> >
> >




RE: [Lucene.Net] Procedure for Commiting

Posted by "Lombard, Scott" <sl...@KINGINDUSTRIES.COM>.
Yes it does.  Under the issue you can find a Subversion Commits tab.

Scott


> -----Original Message-----
> From: Michael Herndon [mailto:mherndon@wickedsoftware.net]
> Sent: Monday, March 14, 2011 2:38 PM
> To: lucene-net-dev@lucene.apache.org
> Cc: Digy
> Subject: Re: [Lucene.Net] Procedure for Commiting
>
> Is Jira not currently setup to track the commit messages with the
> referenced
> ticket number in them?
>
> On Mon, Mar 14, 2011 at 12:59 PM, Digy <di...@gmail.com> wrote:
>
> > I don't know what others think, but I find it more trackable to see the
> > issues and related patches in one place(JIRA)
> > (especially, while trying to understand what was done for a specific
> issue;
> > after many months)
> >
> > DIGY
> >
> >
> > -----Original Message-----
> > From: Stefan Bodewig [mailto:bodewig@apache.org]
> > Sent: Monday, March 14, 2011 5:41 PM
> > To: lucene-net-dev@lucene.apache.org
> > Subject: Re: [Lucene.Net] Procedure for Commiting
> >
> > On 2011-03-14, Scott Lombard wrote:
> >
> > > I wanted to get a final agreement on how we want to handle commits to
> the
> > > repository.  There have been discussions in a couple of different
> threads
> > > about this topic.  I know patches, branches and just go for it has
> been
> > > discussed and different people have different ideas.  I just wanted to
> > know
> > > what the group thinks is the way to handle commits in our project.
> >
> > Inside the ASF we have varying ideas about how to handle it.
> >
> > Many if not most projects use commit-then-review (CTR[1]) as their main
> > model where you just commit and your peers review it later (that's why
> > the commits mailing list exists).  This is probably the quickest way to
> > move forward but may lead to slipped-through problems.
> >
> > At the other extreme there are projects that require JIRA items for each
> > and every commit with automated pre-build CI checks that reject patches
> > attached to JIRA tickets if they break the
> > build/tests/coding-standards/whatever[2].  This is probably the "safe"
> > way but may keep people from contributing because the effort to get a
> > patch in seems too big.
> >
> > Other projects live in some sort of middle ground where some branch is
> > open for CTR and other branches (the "stable" branch) requires
> > review-then-commit (RTC[3]).  Many projects have written down policies
> > like Hadoop which I've already cited or Solr[4].
> >
> > I guess what I'm trying to say is there is no policy that would force
> > you to do it one way or the other, it is your decision.
> >
> > Stefan
> >
> > [1] http://www.apache.org/foundation/glossary.html#CommitThenReview
> >
> > [2] http://wiki.apache.org/hadoop/HowToContribute
> >
> > [3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit
> >
> > [4] http://wiki.apache.org/solr/CommitPolicy
> >
> >


This message (and any associated files) is intended only for the
use of the individual or entity to which it is addressed and may
contain information that is confidential, subject to copyright or
constitutes a trade secret. If you are not the intended recipient
you are hereby notified that any dissemination, copying or
distribution of this message, or files associated with this message,
is strictly prohibited. If you have received this message in error,
please notify us immediately by replying to the message and deleting
it from your computer.  Thank you, King Industries, Inc.

Re: [Lucene.Net] Procedure for Commiting

Posted by Michael Herndon <mh...@wickedsoftware.net>.
Is JIRA not currently set up to track the commit messages with the referenced
ticket number in them?

On Mon, Mar 14, 2011 at 12:59 PM, Digy <di...@gmail.com> wrote:

> I don't know what others think, but I find it more trackable to see the
> issues and related patches in one place(JIRA)
> (especially, while trying to understand what was done for a specific issue;
> after many months)
>
> DIGY
>
>
> -----Original Message-----
> From: Stefan Bodewig [mailto:bodewig@apache.org]
> Sent: Monday, March 14, 2011 5:41 PM
> To: lucene-net-dev@lucene.apache.org
> Subject: Re: [Lucene.Net] Procedure for Commiting
>
> On 2011-03-14, Scott Lombard wrote:
>
> > I wanted to get a final agreement on how we want to handle commits to the
> > repository.  There have been discussions in a couple of different threads
> > about this topic.  I know patches, branches and just go for it has been
> > discussed and different people have different ideas.  I just wanted to
> know
> > what the group thinks is the way to handle commits in our project.
>
> Inside the ASF we have varying ideas about how to handle it.
>
> Many if not most projects use commit-then-review (CTR[1]) as their main
> model where you just commit and your peers review it later (that's why
> the commits mailing list exists).  This is probably the quickest way to
> move forward but may lead to slipped-through problems.
>
> At the other extreme there are projects that require JIRA items for each
> and every commit with automated pre-build CI checks that reject patches
> attached to JIRA tickets if they break the
> build/tests/coding-standards/whatever[2].  This is probably the "safe"
> way but may keep people from contributing because the effort to get a
> patch in seems too big.
>
> Other projects live in some sort of middle ground where some branch is
> open for CTR and other branches (the "stable" branch) requires
> review-then-commit (RTC[3]).  Many projects have written down policies
> like Hadoop which I've already cited or Solr[4].
>
> I guess what I'm trying to say is there is no policy that would force
> you to do it one way or the other, it is your decision.
>
> Stefan
>
> [1] http://www.apache.org/foundation/glossary.html#CommitThenReview
>
> [2] http://wiki.apache.org/hadoop/HowToContribute
>
> [3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit
>
> [4] http://wiki.apache.org/solr/CommitPolicy
>
>

RE: [Lucene.Net] Procedure for Commiting

Posted by Digy <di...@gmail.com>.
I don't know what others think, but I find it easier to track things when the
issues and related patches are in one place (JIRA), especially when trying to
understand, many months later, what was done for a specific issue.

DIGY


-----Original Message-----
From: Stefan Bodewig [mailto:bodewig@apache.org] 
Sent: Monday, March 14, 2011 5:41 PM
To: lucene-net-dev@lucene.apache.org
Subject: Re: [Lucene.Net] Procedure for Commiting

On 2011-03-14, Scott Lombard wrote:

> I wanted to get a final agreement on how we want to handle commits to the
> repository.  There have been discussions in a couple of different threads
> about this topic.  I know patches, branches and just go for it has been
> discussed and different people have different ideas.  I just wanted to
know
> what the group thinks is the way to handle commits in our project.

Inside the ASF we have varying ideas about how to handle it.

Many if not most projects use commit-then-review (CTR[1]) as their main
model where you just commit and your peers review it later (that's why
the commits mailing list exists).  This is probably the quickest way to
move forward but may lead to slipped-through problems.

At the other extreme there are projects that require JIRA items for each
and every commit with automated pre-build CI checks that reject patches
attached to JIRA tickets if they break the
build/tests/coding-standards/whatever[2].  This is probably the "safe"
way but may keep people from contributing because the effort to get a
patch in seems too big.

Other projects live in some sort of middle ground where some branch is
open for CTR and other branches (the "stable" branch) requires
review-then-commit (RTC[3]).  Many projects have written down policies
like Hadoop which I've already cited or Solr[4].

I guess what I'm trying to say is there is no policy that would force
you to do it one way or the other, it is your decision.

Stefan

[1] http://www.apache.org/foundation/glossary.html#CommitThenReview

[2] http://wiki.apache.org/hadoop/HowToContribute

[3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit

[4] http://wiki.apache.org/solr/CommitPolicy


Re: [Lucene.Net] Procedure for Commiting

Posted by Stefan Bodewig <bo...@apache.org>.
On 2011-03-14, Scott Lombard wrote:

> I wanted to get a final agreement on how we want to handle commits to the
> repository.  There have been discussions in a couple of different threads
> about this topic.  I know patches, branches and just go for it has been
> discussed and different people have different ideas.  I just wanted to know
> what the group thinks is the way to handle commits in our project.

Inside the ASF we have varying ideas about how to handle it.

Many if not most projects use commit-then-review (CTR[1]) as their main
model where you just commit and your peers review it later (that's why
the commits mailing list exists).  This is probably the quickest way to
move forward but may lead to slipped-through problems.

At the other extreme there are projects that require JIRA items for each
and every commit with automated pre-build CI checks that reject patches
attached to JIRA tickets if they break the
build/tests/coding-standards/whatever[2].  This is probably the "safe"
way but may keep people from contributing because the effort to get a
patch in seems too big.

Other projects live in some sort of middle ground where some branch is
open for CTR and other branches (the "stable" branch) requires
review-then-commit (RTC[3]).  Many projects have written down policies
like Hadoop which I've already cited or Solr[4].

I guess what I'm trying to say is there is no policy that would force
you to do it one way or the other, it is your decision.

Stefan

[1] http://www.apache.org/foundation/glossary.html#CommitThenReview

[2] http://wiki.apache.org/hadoop/HowToContribute

[3] http://www.apache.org/foundation/glossary.html#ReviewThenCommit

[4] http://wiki.apache.org/solr/CommitPolicy