Posted to user@couchdb.apache.org by faust 1111 <fa...@gmail.com> on 2010/03/28 00:56:39 UTC

Validate uniqueness field

Hi

How can I implement validation of uniqueness?
User
  email: unique
  login:  unique

Thanks!

Re: Validate uniqueness field

Posted by J Chris Anderson <jc...@gmail.com>.
On Mar 28, 2010, at 8:30 AM, faust 1111 wrote:

>> If you need multi-document transactions, multiple constraints and
>> don't need to be distributed, don't need to shard, and don't need
>> offline replication, it's not clear that CouchDB is a good fit.
> 
> 
> I want use Couch for Web App Media portal
> is couch good for me
> i don`t need to be distributed, shard
> 

Yes you can use Couch for this. It may require some rethinking of your domain model. The upside is once you learn the constraints CouchDB puts on you, you'll have a deeper understanding of how a lot of the big shops do it. (No foreign-key constraints, no joins, as many cacheable resources as possible). Another upside is that your data will be offline-capable. Even if you don't see the value now, you'll see it eventually, and no other database offers offline mode.

> 
> i have Users, media content(audio, video, photo) tags categories
> marks, and simple social network
> many large files store in couch.
> 

Couch will handle large files gracefully.
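
For illustration, a minimal sketch of storing a large file as a standalone attachment over the HTTP API (Python with the requests library assumed; the database name, doc id and filename are placeholders):

# Hypothetical sketch of CouchDB's standalone attachment API.
import requests

COUCH = "http://localhost:5984"

# Create the metadata document first (PUT with a chosen _id).
doc = {"type": "video", "title": "Some movie"}
resp = requests.put(f"{COUCH}/media/movie-42", json=doc)
rev = resp.json()["rev"]

# Attach the binary; CouchDB streams it and the document itself stays small.
with open("movie.mp4", "rb") as f:
    requests.put(
        f"{COUCH}/media/movie-42/movie.mp4",
        params={"rev": rev},
        data=f,
        headers={"Content-Type": "video/mp4"},
    )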

> I have no experience with couch but i think
> it give me tool for manipulate data in simple way.
> schema-less + Map/Reduce query - i think its cool.
> 

If you try hard to build it in a similar manner as you'd build it with a relational database, you'll continually run into roadblocks. If you are happy from the outset to think in terms of documents instead, your software will be much more couchy, and take advantage of the simplicity Couch has to offer.

There are some queries that require more work with Couch, but other operations are much simpler.

If your current project seems too hard to move to Couch, you might find it more relaxing to build something simpler first, and then try bigger projects when you are more comfortable with the document model.

Chris

> 
> 
> 
> 
> 
> 2010/3/28 Robert Newson <ro...@gmail.com>:
>> "not every application requires to be extremely distributed" -- for
>> everything else, there's CouchDB. :)
>> 
>> I completely agree that not every application needs to be distributed.
>> For applications that are relational in shape, it makes sense to use a
>> relational database solution. Conversely, for something very
>> relational, it makes no sense to use a non-relational database
>> solution.
>> 
>> If you need multi-document transactions, multiple constraints and
>> don't need to be distributed, don't need to shard, and don't need
>> offline replication, it's not clear that CouchDB is a good fit. On the
>> other hand, if your application is a natural fit for CouchDB, then
>> you'll also be able to scale up when needed.
>> 
>> Bulk update used to work in the way you are suggesting and this
>> behavior was removed because it cannot work the same way when you
>> replicate or shard. I suspect that trend will continue rather than
>> reverse.
>> 
>> B.
>> 
>> On Sun, Mar 28, 2010 at 2:03 PM, Alexander Uvarov
>> <al...@gmail.com> wrote:
>>> Not every document requires locking. Also, not every application requires to be extremely distributed. Developers can make a decision what kind of application is cooking, in other words, should it scale to thousands of nodes, or just single master with many readers would pretty good.
>>> The same about transactions, offline replication not ever required. An option to reject on conflict during bulk update would be really helpful (just use single master), but there are obvious problems with sharding coming :(.
>>> 
>>> On 28.03.2010, at 18:40, Robert Newson wrote:
>>> 
>>>> "I am wondering why not introduce locking in couchdb"
>>>> 
>>>> It's because locking doesn't scale. The locking strategy you outlined
>>>> works fine when your database runs on one machine, but fails when it
>>>> runs on two or more machines. A distributed lock, while possible,
>>>> would require all machines to lock, which requires them all to be up,
>>>> and, of course, ten machines are then no faster than one machine. Most
>>>> distributed locking protocols are blocking (like the usual 2PC
>>>> protocol), the non-blocking ones are either more overhead (3PC) or
>>>> more complex (Paxos).
>>>> 
>>>> CouchDB doesn't let you do with one machine that which won't work when
>>>> you have ten machines. It's quite deliberately not letting you do
>>>> something that won't scale.
>>>> 
>>>> B.
>>>> 
>>>> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
>>>> <al...@gmail.com> wrote:
>>>>> 
>>>>> On 28.03.2010, at 14:40, faust 1111 wrote:
>>>>> 
>>>>>> Its sounds like pair of crutches ;)
>>>>> 
>>>>> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>>>>> 
>>>>> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
>>>>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>>>>> 
>>>>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.
>>> 
>>> 
>> 


Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
> If you need multi-document transactions, multiple constraints and
> don't need to be distributed, don't need to shard, and don't need
> offline replication, it's not clear that CouchDB is a good fit.


I want to use Couch for a web app media portal.
Is Couch good for me?
I don't need to be distributed or sharded.


I have users, media content (audio, video, photos), tags, categories,
marks, and a simple social network.
Many large files would be stored in Couch.

I have no experience with Couch, but I think
it gives me tools to manipulate data in a simple way.
Schema-less + map/reduce queries: I think that's cool.






2010/3/28 Robert Newson <ro...@gmail.com>:
> "not every application requires to be extremely distributed" -- for
> everything else, there's CouchDB. :)
>
> I completely agree that not every application needs to be distributed.
> For applications that are relational in shape, it makes sense to use a
> relational database solution. Conversely, for something very
> relational, it makes no sense to use a non-relational database
> solution.
>
> If you need multi-document transactions, multiple constraints and
> don't need to be distributed, don't need to shard, and don't need
> offline replication, it's not clear that CouchDB is a good fit. On the
> other hand, if your application is a natural fit for CouchDB, then
> you'll also be able to scale up when needed.
>
> Bulk update used to work in the way you are suggesting and this
> behavior was removed because it cannot work the same way when you
> replicate or shard. I suspect that trend will continue rather than
> reverse.
>
> B.
>
> On Sun, Mar 28, 2010 at 2:03 PM, Alexander Uvarov
> <al...@gmail.com> wrote:
>> Not every document requires locking. Also, not every application requires to be extremely distributed. Developers can make a decision what kind of application is cooking, in other words, should it scale to thousands of nodes, or just single master with many readers would pretty good.
>> The same about transactions, offline replication not ever required. An option to reject on conflict during bulk update would be really helpful (just use single master), but there are obvious problems with sharding coming :(.
>>
>> On 28.03.2010, at 18:40, Robert Newson wrote:
>>
>>> "I am wondering why not introduce locking in couchdb"
>>>
>>> It's because locking doesn't scale. The locking strategy you outlined
>>> works fine when your database runs on one machine, but fails when it
>>> runs on two or more machines. A distributed lock, while possible,
>>> would require all machines to lock, which requires them all to be up,
>>> and, of course, ten machines are then no faster than one machine. Most
>>> distributed locking protocols are blocking (like the usual 2PC
>>> protocol), the non-blocking ones are either more overhead (3PC) or
>>> more complex (Paxos).
>>>
>>> CouchDB doesn't let you do with one machine that which won't work when
>>> you have ten machines. It's quite deliberately not letting you do
>>> something that won't scale.
>>>
>>> B.
>>>
>>> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
>>> <al...@gmail.com> wrote:
>>>>
>>>> On 28.03.2010, at 14:40, faust 1111 wrote:
>>>>
>>>>> Its sounds like pair of crutches ;)
>>>>
>>>> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>>>>
>>>> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
>>>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>>>>
>>>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.
>>
>>
>

Re: Validate uniqueness field

Posted by Jan Lehnardt <ja...@apache.org>.
On 28 Mar 2010, at 06:03, Alexander Uvarov wrote:

> Not every document requires locking. Also, not every application requires to be extremely distributed. Developers can make a decision what kind of application is cooking, in other words, should it scale to thousands of nodes, or just single master with many readers would pretty good.
> The same about transactions, offline replication not ever required. An option to reject on conflict during bulk update would be really helpful (just use single master), but there are obvious problems with sharding coming :(.

I side with Robert here. If you figure out that CouchDB doesn't
solve your problems well enough, it might not be the best
fit for your project.

I'm not telling you to go away, but inviting you to rethink
what primitives your app needs, rather than what you are
used to from other systems, and see if CouchDB still
works for you.

It may not and that is just fine :)

Cheers
Jan
--



> 
> On 28.03.2010, at 18:40, Robert Newson wrote:
> 
>> "I am wondering why not introduce locking in couchdb"
>> 
>> It's because locking doesn't scale. The locking strategy you outlined
>> works fine when your database runs on one machine, but fails when it
>> runs on two or more machines. A distributed lock, while possible,
>> would require all machines to lock, which requires them all to be up,
>> and, of course, ten machines are then no faster than one machine. Most
>> distributed locking protocols are blocking (like the usual 2PC
>> protocol), the non-blocking ones are either more overhead (3PC) or
>> more complex (Paxos).
>> 
>> CouchDB doesn't let you do with one machine that which won't work when
>> you have ten machines. It's quite deliberately not letting you do
>> something that won't scale.
>> 
>> B.
>> 
>> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
>> <al...@gmail.com> wrote:
>>> 
>>> On 28.03.2010, at 14:40, faust 1111 wrote:
>>> 
>>>> Its sounds like pair of crutches ;)
>>> 
>>> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>>> 
>>> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
>>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>>> 
>>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.
> 


Re: Validate uniqueness field

Posted by Robert Newson <ro...@gmail.com>.
"not every application requires to be extremely distributed" -- for
everything else, there's CouchDB. :)

I completely agree that not every application needs to be distributed.
For applications that are relational in shape, it makes sense to use a
relational database solution. Conversely, for something very
relational, it makes no sense to use a non-relational database
solution.

If you need multi-document transactions, multiple constraints and
don't need to be distributed, don't need to shard, and don't need
offline replication, it's not clear that CouchDB is a good fit. On the
other hand, if your application is a natural fit for CouchDB, then
you'll also be able to scale up when needed.

Bulk update used to work in the way you are suggesting and this
behavior was removed because it cannot work the same way when you
replicate or shard. I suspect that trend will continue rather than
reverse.
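
For illustration, a minimal sketch of the per-document semantics bulk updates have today (Python with the requests library assumed; database and doc names are placeholders):

# _bulk_docs is non-atomic: each doc succeeds or fails on its own, and a
# clash on an existing _id comes back as a per-row "conflict" entry rather
# than rolling back the whole batch.
import requests

COUCH = "http://localhost:5984"

docs = [
    {"_id": "email:john@example.net", "type": "email-reservation"},
    {"_id": "login:john", "type": "login-reservation"},
]

rows = requests.post(f"{COUCH}/users/_bulk_docs", json={"docs": docs}).json()
for row in rows:
    if row.get("error") == "conflict":
        print("already taken:", row["id"])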

B.

On Sun, Mar 28, 2010 at 2:03 PM, Alexander Uvarov
<al...@gmail.com> wrote:
> Not every document requires locking. Also, not every application requires to be extremely distributed. Developers can make a decision what kind of application is cooking, in other words, should it scale to thousands of nodes, or just single master with many readers would pretty good.
> The same about transactions, offline replication not ever required. An option to reject on conflict during bulk update would be really helpful (just use single master), but there are obvious problems with sharding coming :(.
>
> On 28.03.2010, at 18:40, Robert Newson wrote:
>
>> "I am wondering why not introduce locking in couchdb"
>>
>> It's because locking doesn't scale. The locking strategy you outlined
>> works fine when your database runs on one machine, but fails when it
>> runs on two or more machines. A distributed lock, while possible,
>> would require all machines to lock, which requires them all to be up,
>> and, of course, ten machines are then no faster than one machine. Most
>> distributed locking protocols are blocking (like the usual 2PC
>> protocol), the non-blocking ones are either more overhead (3PC) or
>> more complex (Paxos).
>>
>> CouchDB doesn't let you do with one machine that which won't work when
>> you have ten machines. It's quite deliberately not letting you do
>> something that won't scale.
>>
>> B.
>>
>> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
>> <al...@gmail.com> wrote:
>>>
>>> On 28.03.2010, at 14:40, faust 1111 wrote:
>>>
>>>> Its sounds like pair of crutches ;)
>>>
>>> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>>>
>>> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
>>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>>>
>>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.
>
>

Re: Validate uniqueness field

Posted by Alexander Uvarov <al...@gmail.com>.
Not every document requires locking. Also, not every application needs to be extremely distributed. Developers can decide what kind of application they are building; in other words, whether it should scale to thousands of nodes or whether a single master with many readers would be good enough.
The same goes for transactions; offline replication is not always required. An option to reject on conflict during bulk updates would be really helpful (just use a single master), but there are obvious problems with sharding coming :(.

On 28.03.2010, at 18:40, Robert Newson wrote:

> "I am wondering why not introduce locking in couchdb"
> 
> It's because locking doesn't scale. The locking strategy you outlined
> works fine when your database runs on one machine, but fails when it
> runs on two or more machines. A distributed lock, while possible,
> would require all machines to lock, which requires them all to be up,
> and, of course, ten machines are then no faster than one machine. Most
> distributed locking protocols are blocking (like the usual 2PC
> protocol), the non-blocking ones are either more overhead (3PC) or
> more complex (Paxos).
> 
> CouchDB doesn't let you do with one machine that which won't work when
> you have ten machines. It's quite deliberately not letting you do
> something that won't scale.
> 
> B.
> 
> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
> <al...@gmail.com> wrote:
>> 
>> On 28.03.2010, at 14:40, faust 1111 wrote:
>> 
>>> Its sounds like pair of crutches ;)
>> 
>> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>> 
>> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>> 
>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.


Re: Validate uniqueness field

Posted by Robert Newson <ro...@gmail.com>.
"I am wondering why not introduce locking in couchdb"

It's because locking doesn't scale. The locking strategy you outlined
works fine when your database runs on one machine, but fails when it
runs on two or more machines. A distributed lock, while possible,
would require all machines to lock, which requires them all to be up,
and, of course, ten machines are then no faster than one machine. Most
distributed locking protocols are blocking (like the usual 2PC
protocol); the non-blocking ones either have more overhead (3PC) or
are more complex (Paxos).

CouchDB doesn't let you do with one machine that which won't work when
you have ten machines. It's quite deliberately not letting you do
something that won't scale.

B.

On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
<al...@gmail.com> wrote:
>
> On 28.03.2010, at 14:40, faust 1111 wrote:
>
>> Its sounds like pair of crutches ;)
>
> Agree with you. Lack of uniqueness, lack of transactions makes couch completely useless for most cases. Solutions like multiple docs with _id as unique key, along with "inventory tickets" sounds insane.
>
> I invented simple solution with Redis. Just an idea. You can use Redis setnx, msetnx operations to lock desired documents, or just lock "User" by giving this string as key in Redis to lock whole User type. Then just try to lock "User", create your User document, unlock. If there is already a lock, wait and try again. But deadlocks are possible when process that owned the lock is dead and no one can release a lock.
> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
>
> I am wondering why not introduce locking in couchdb. Couchdb is designed to be extremely fast, but there are also real world problems. Awesome technology, I am crying that such restrictions taking it away.

Re: Validate uniqueness field

Posted by Alexander Uvarov <al...@gmail.com>.
On 28.03.2010, at 14:40, faust 1111 wrote:

> Its sounds like pair of crutches ;)

I agree with you. Lack of uniqueness and lack of transactions make Couch completely useless for most cases. Solutions like multiple docs with _id as a unique key, along with "inventory tickets", sound insane.

I came up with a simple solution using Redis. Just an idea: you can use the Redis SETNX and MSETNX operations to lock desired documents, or just lock "User" by using that string as a key in Redis to lock the whole User type. Then just try to lock "User", create your User document, and unlock. If there is already a lock, wait and try again. But deadlocks are possible when the process that owned the lock dies and no one can release the lock.
Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
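
A rough sketch of that idea, assuming the redis-py client; save_user() is a placeholder for the actual CouchDB write, and the expiry is one way to soften the deadlock problem mentioned above:

# Hypothetical Redis-based lock around a user creation.
import time
import uuid
import redis

r = redis.Redis()

def create_user(user_doc):
    token = uuid.uuid4().hex
    # SET key value NX EX 5 -> only succeeds if "lock:User" is not held yet.
    while not r.set("lock:User", token, nx=True, ex=5):
        time.sleep(0.05)          # someone else holds the lock; wait and retry
    try:
        # check uniqueness (e.g. via a view) and write the doc while locked
        save_user(user_doc)       # placeholder for the actual CouchDB call
    finally:
        # release only if we still own the lock (it may have expired)
        if r.get("lock:User") == token.encode():
            r.delete("lock:User")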

I am wondering why locking is not introduced in CouchDB. CouchDB is designed to be extremely fast, but there are also real-world problems. Awesome technology; it saddens me that such restrictions take it away.

Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
It sounds like a pair of crutches ;)

2010/3/28 Jan Lehnardt <ja...@apache.org>:
> The only solution to enforce uniqueness on a field is using the _id field of a document. If you need two fields to be unique in a database, you'll need to use two documents for that.
>
> In addition, in the distributed case, the only way to ensure uniqueness is eventually, after replication, through conflicts that show up if two nodes created the same "unique" id.
>
> Cheers
> Jan
> --
>
> On 27 Mar 2010, at 20:41, faust 1111 wrote:
>
>> Why too documents?
>> But i have one issue User
>>  i need only one document .
>>
>> i am interesting, how couch people do in real projects.
>> when they need  two unique fields in document.
>>
>>
>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>>> You need to have two documents with a unique ID each.
>>>
>>> Cheers
>>> Jan
>>> --
>>>
>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>>
>>>> but what if i have two unique fields
>>>>  login
>>>>  email
>>>>
>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>>
>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> In what way i cat implement validation of uniqueness?
>>>>>> User
>>>>>>  email: unique
>>>>>>  login:  unique
>>>>>>
>>>>>
>>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>>
>>>>> {
>>>>> "_id" : "user:unique",
>>>>>   ...
>>>>> }
>>>>>
>>>>> Chris
>>>
>>>
>
>

Re: Validate uniqueness field

Posted by Adam Petty <ad...@gmail.com>.
In other words, this isn't really a database concern, is it?

The database doesn't care whether these fields are unique; only your
application does, and only in certain cases.

If your app needs certain fields to be unique, create a method to update
a unique doc that stores all unique IDs already used, or multiple
such docs.

A touch more code, but it most likely builds in scalability from the get-go.

--IMHO

On Thu, Apr 1, 2010 at 2:51 PM, faust 1111 <fa...@gmail.com> wrote:
> But in bulk updates its impossible to track uniqueness.
> only when i update single doc i can before save delete old
> uniq_doc(with _id = uniq_field_value) and create new.
>
> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>> The only solution to enforce uniqueness on a field is using the _id field of a document. If you need two fields to be unique in a database, you'll need to use two documents for that.
>>
>> In addition, in the distributed case, the only way to ensure uniqueness is eventually, after replication, through conflicts that show up if two nodes created the same "unique" id.
>>
>> Cheers
>> Jan
>> --
>>
>> On 27 Mar 2010, at 20:41, faust 1111 wrote:
>>
>>> Why too documents?
>>> But i have one issue User
>>>  i need only one document .
>>>
>>> i am interesting, how couch people do in real projects.
>>> when they need  two unique fields in document.
>>>
>>>
>>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>>>> You need to have two documents with a unique ID each.
>>>>
>>>> Cheers
>>>> Jan
>>>> --
>>>>
>>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>>>
>>>>> but what if i have two unique fields
>>>>>  login
>>>>>  email
>>>>>
>>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>>>
>>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> In what way i cat implement validation of uniqueness?
>>>>>>> User
>>>>>>>  email: unique
>>>>>>>  login:  unique
>>>>>>>
>>>>>>
>>>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>>>
>>>>>> {
>>>>>> "_id" : "user:unique",
>>>>>>   ...
>>>>>> }
>>>>>>
>>>>>> Chris
>>>>
>>>>
>>
>>
>

Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
But in bulk updates it's impossible to track uniqueness.
Only when I update a single doc can I, before saving, delete the old
uniq_doc (with _id = uniq_field_value) and create a new one.

2010/3/28 Jan Lehnardt <ja...@apache.org>:
> The only solution to enforce uniqueness on a field is using the _id field of a document. If you need two fields to be unique in a database, you'll need to use two documents for that.
>
> In addition, in the distributed case, the only way to ensure uniqueness is eventually, after replication, through conflicts that show up if two nodes created the same "unique" id.
>
> Cheers
> Jan
> --
>
> On 27 Mar 2010, at 20:41, faust 1111 wrote:
>
>> Why too documents?
>> But i have one issue User
>>  i need only one document .
>>
>> i am interesting, how couch people do in real projects.
>> when they need  two unique fields in document.
>>
>>
>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>>> You need to have two documents with a unique ID each.
>>>
>>> Cheers
>>> Jan
>>> --
>>>
>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>>
>>>> but what if i have two unique fields
>>>>  login
>>>>  email
>>>>
>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>>
>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> In what way i cat implement validation of uniqueness?
>>>>>> User
>>>>>>  email: unique
>>>>>>  login:  unique
>>>>>>
>>>>>
>>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>>
>>>>> {
>>>>> "_id" : "user:unique",
>>>>>   ...
>>>>> }
>>>>>
>>>>> Chris
>>>
>>>
>
>

Re: Validate uniqueness field

Posted by Jan Lehnardt <ja...@apache.org>.
The only solution to enforce uniqueness on a field is using the _id field of a document. If you need two fields to be unique in a database, you'll need to use two documents for that.

In addition, in the distributed case, the only way to ensure uniqueness is eventually, after replication, through conflicts that show up if two nodes created the same "unique" id.
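
A minimal sketch of that two-document pattern (Python with the requests library assumed; the database name, id scheme, and the missing cleanup on partial failure are simplifications):

# Hypothetical "one reservation doc per unique field" pattern.
import requests

COUCH = "http://localhost:5984"

def register(login, email):
    # PUT to a fixed _id is the uniqueness check CouchDB gives us;
    # it answers 409 Conflict if that id already exists.
    for reservation_id in (f"login:{login}", f"email:{email}"):
        resp = requests.put(f"{COUCH}/app/{reservation_id}", json={"type": "reservation"})
        if resp.status_code == 409:
            raise ValueError(f"{reservation_id} is already taken")
    # Both reservations succeeded, so create the user document itself.
    requests.post(f"{COUCH}/app", json={"type": "user", "login": login, "email": email})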

Cheers
Jan
--

On 27 Mar 2010, at 20:41, faust 1111 wrote:

> Why too documents?
> But i have one issue User
>  i need only one document .
> 
> i am interesting, how couch people do in real projects.
> when they need  two unique fields in document.
> 
> 
> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>> You need to have two documents with a unique ID each.
>> 
>> Cheers
>> Jan
>> --
>> 
>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>> 
>>> but what if i have two unique fields
>>>  login
>>>  email
>>> 
>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>> 
>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>> 
>>>>> Hi
>>>>> 
>>>>> In what way i cat implement validation of uniqueness?
>>>>> User
>>>>>  email: unique
>>>>>  login:  unique
>>>>> 
>>>> 
>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>> 
>>>> {
>>>> "_id" : "user:unique",
>>>>   ...
>>>> }
>>>> 
>>>> Chris
>> 
>> 


Re: Validate uniqueness field

Posted by Daniel Itaboraí <it...@gmail.com>.
Thanks.

I'm such a noobie!

Daniel

On Mon, Mar 29, 2010 at 7:39 PM, Patrick Barnes <mr...@gmail.com> wrote:

> To nitpick slightly - you won't get a value of zero, you'll just find that
> there is no entry for the given key. ;-)
>
> -Patrick
>
> On 30/03/2010 3:06 AM, "Daniel Itaboraí" <it...@gmail.com> wrote:
>
> Distributed uniqueness is a hard problem, but since you intend to use it
> only on a single node, perhaps you should create a view for each set of
> fields that you intend to be unique in your documents.You would emit the
> unique combination of values as the key and the document id as the value.
> For the reduce function, you should just count the number of documents that
> hold that particular key.
>
> Prior to each PUT or POST on a document, you should query the view to make
> sure that no other document has used that specific combination of values
> (the reduced value should be 0). Also, after each PUT or POST, you´d have
> to
> query it again to see if the uniqueness still holds (the reduced value
> should be 1).
>
> If the probability of two or more writers issuing operations that might
> violate the uniqueness constraint is low enough, you might be able to get
> away with it. This, of course, has a race condition (aside from also being
> slow, ugly and discouraged). You would have to periodically handle
> uniqueness violations either automatically or manually.
>
> I think you should ask yourself first what´s the worst that could happen
> when a violation of a uniqueness constraint happens. Sometimes you can just
> work around it.
>
>
> regards,
> Daniel Itaboraí
>
>
> On Sun, Mar 28, 2010 at 12:41 AM, faust 1111 <fa...@gmail.com> wrote:
>
> > Why too documents?
> > Bu...
>

Re: Validate uniqueness field

Posted by Patrick Barnes <mr...@gmail.com>.
To nitpick slightly - you won't get a value of zero, you'll just find that
there is no entry for the given key. ;-)

-Patrick

On 30/03/2010 3:06 AM, "Daniel Itaboraí" <it...@gmail.com> wrote:

Distributed uniqueness is a hard problem, but since you intend to use it
only on a single node, perhaps you should create a view for each set of
fields that you intend to be unique in your documents.You would emit the
unique combination of values as the key and the document id as the value.
For the reduce function, you should just count the number of documents that
hold that particular key.

Prior to each PUT or POST on a document, you should query the view to make
sure that no other document has used that specific combination of values
(the reduced value should be 0). Also, after each PUT or POST, you´d have to
query it again to see if the uniqueness still holds (the reduced value
should be 1).

If the probability of two or more writers issuing operations that might
violate the uniqueness constraint is low enough, you might be able to get
away with it. This, of course, has a race condition (aside from also being
slow, ugly and discouraged). You would have to periodically handle
uniqueness violations either automatically or manually.

I think you should ask yourself first what´s the worst that could happen
when a violation of a uniqueness constraint happens. Sometimes you can just
work around it.


regards,
Daniel Itaboraí


On Sun, Mar 28, 2010 at 12:41 AM, faust 1111 <fa...@gmail.com> wrote:

> Why too documents?
> Bu...

Re: Validate uniqueness field

Posted by Alexander Uvarov <al...@gmail.com>.
For a single master I have a working example of locks implemented as a pluggable HTTP handler. It works similarly to the rewrite handler.

## Examples

Create a lock with 3 seconds lifetime:

POST http://localhost:5984/DB/_locks?scope=Person&timeout=3000

    201 - { "ok": true, "id": "3adaefg" }
    409 - { "ok": false, "message": "Already exist" }

## Create document

POST http://localhost:5984/DB/_locks/_db/?lock=3adaefg

    { "type": "Person", "name": "John Doe", "login": "John", "email": "john@example.net" }
    {"ok":true,"id":"5ca03037c83797a2457d13efba000c10","rev":"1-ac5e2cfb85fb3ddfc21d91334021b649"}

If the lock does not exist, you'll get:

    { "ok": false, "message": "Does not exist" } | { "ok": false, "message": "Expired" }

Note that the lock will be released after each POST, PUT or DELETE request (any request is possible, but GETs do not make sense; just don't point requests at the _locks handler).

Validate uniqueness:

  * Lock with scope "Person",
  * Ensure that there is no such person with fields intended to be unique (check email and login by using views),
  * Create document.

WTF? Convention is key. You can use any token as a scope. For example, if you want to update a Person, don't lock unless the email or login changed.
One lock, one operation. For multiple documents use the Bulk API. Each create/update/delete will release the lock.
Each lock has a timeout, so you should not be slow, otherwise your update will be rejected. Define a number of scopes for the application and follow the conventions.

~300 LOC. I'll push it to GitHub if someone is interested.
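
A hypothetical client-side flow against the _locks handler described above; since the handler is the author's own unpublished code, the URLs and response shapes are taken from his examples, not from stock CouchDB:

# Sketch of the lock-then-write flow against the custom _locks handler.
import requests

COUCH = "http://localhost:5984"

# 1. take a short-lived lock on the "Person" scope
resp = requests.post(f"{COUCH}/DB/_locks", params={"scope": "Person", "timeout": 3000})
if resp.status_code == 409:
    raise RuntimeError("scope already locked, try again later")
lock_id = resp.json()["id"]

# 2. check email/login via views here, then create the document through the
#    lock-aware endpoint; the handler releases the lock after this write
person = {"type": "Person", "name": "John Doe", "login": "John", "email": "john@example.net"}
requests.post(f"{COUCH}/DB/_locks/_db/", params={"lock": lock_id}, json=person)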

On 29.03.2010, at 22:06, Daniel Itaboraí wrote:

> Distributed uniqueness is a hard problem, but since you intend to use it
> only on a single node, perhaps you should create a view for each set of
> fields that you intend to be unique in your documents.You would emit the
> unique combination of values as the key and the document id as the value.
> For the reduce function, you should just count the number of documents that
> hold that particular key.
> 
> Prior to each PUT or POST on a document, you should query the view to make
> sure that no other document has used that specific combination of values
> (the reduced value should be 0). Also, after each PUT or POST, you´d have to
> query it again to see if the uniqueness still holds (the reduced value
> should be 1).
> 
> If the probability of two or more writers issuing operations that might
> violate the uniqueness constraint is low enough, you might be able to get
> away with it. This, of course, has a race condition (aside from also being
> slow, ugly and discouraged). You would have to periodically handle
> uniqueness violations either automatically or manually.
> 
> I think you should ask yourself first what´s the worst that could happen
> when a violation of a uniqueness constraint happens. Sometimes you can just
> work around it.
> 
> 
> regards,
> Daniel Itaboraí
> 
> On Sun, Mar 28, 2010 at 12:41 AM, faust 1111 <fa...@gmail.com> wrote:


Re: Validate uniqueness field

Posted by Daniel Itaboraí <it...@gmail.com>.
Distributed uniqueness is a hard problem, but since you intend to use it
only on a single node, perhaps you should create a view for each set of
fields that you intend to be unique in your documents. You would emit the
unique combination of values as the key and the document ID as the value.
For the reduce function, you should just count the number of documents that
hold that particular key.
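
A minimal sketch of such a design document (names are placeholders; the map and reduce are JavaScript stored as strings):

# Hypothetical design doc for a per-field uniqueness view.
import requests

design_doc = {
    "_id": "_design/uniques",
    "views": {
        "by_email": {
            # emit the value meant to be unique as the key
            "map": "function(doc) { if (doc.type === 'user') emit(doc.email, doc._id); }",
            # built-in reduce counts the docs sharing that key
            "reduce": "_count",
        }
    },
}
requests.put("http://localhost:5984/users/_design/uniques", json=design_doc)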

Prior to each PUT or POST on a document, you should query the view to make
sure that no other document has used that specific combination of values
(the reduced value should be 0). Also, after each PUT or POST, you'd have to
query it again to see if the uniqueness still holds (the reduced value
should be 1).
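
A sketch of the pre-flight and post-write checks against that view; as noted later in the thread, an absent row (rather than a zero) means the value is unused:

# Hypothetical check around a write; view and database names as above.
import requests

COUCH = "http://localhost:5984"
view = f"{COUCH}/users/_design/uniques/_view/by_email"

def email_count(email):
    # group=true reduces per key; no row at all means the value is unused
    rows = requests.get(view, params={"key": f'"{email}"', "group": "true"}).json()["rows"]
    return rows[0]["value"] if rows else 0

if email_count("john@example.net") == 0:
    requests.post(f"{COUCH}/users", json={"type": "user", "email": "john@example.net"})
    # re-check afterwards; a count above 1 means a concurrent writer won the race
    if email_count("john@example.net") > 1:
        print("uniqueness violated, needs manual or automatic cleanup")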

If the probability of two or more writers issuing operations that might
violate the uniqueness constraint is low enough, you might be able to get
away with it. This, of course, has a race condition (aside from also being
slow, ugly and discouraged). You would have to periodically handle
uniqueness violations either automatically or manually.

I think you should ask yourself first what's the worst that could happen
when a violation of a uniqueness constraint happens. Sometimes you can just
work around it.


regards,
Daniel Itaboraí

On Sun, Mar 28, 2010 at 12:41 AM, faust 1111 <fa...@gmail.com> wrote:

> Why too documents?
> But i have one issue User
>  i need only one document .
>
> i am interesting, how couch people do in real projects.
> when they need  two unique fields in document.
>
>
> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
> > You need to have two documents with a unique ID each.
> >
> > Cheers
> > Jan
> > --
> >
> > On 27 Mar 2010, at 17:12, faust 1111 wrote:
> >
> >> but what if i have two unique fields
> >>  login
> >>  email
> >>
> >> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
> >>>
> >>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
> >>>
> >>>> Hi
> >>>>
> >>>> In what way i cat implement validation of uniqueness?
> >>>> User
> >>>>  email: unique
> >>>>  login:  unique
> >>>>
> >>>
> >>> You can only have 1 unique field per database. you implement it by
> using it as a docid, like
> >>>
> >>> {
> >>> "_id" : "user:unique",
> >>>   ...
> >>> }
> >>>
> >>> Chris
> >
> >
>

Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
Why two documents?
But I have one entity, User;
  I need only one document.

I am interested in how Couch people do it in real projects
when they need two unique fields in a document.


2010/3/28 Jan Lehnardt <ja...@apache.org>:
> You need to have two documents with a unique ID each.
>
> Cheers
> Jan
> --
>
> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>
>> but what if i have two unique fields
>>  login
>>  email
>>
>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>
>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>
>>>> Hi
>>>>
>>>> In what way i cat implement validation of uniqueness?
>>>> User
>>>>  email: unique
>>>>  login:  unique
>>>>
>>>
>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>
>>> {
>>> "_id" : "user:unique",
>>>   ...
>>> }
>>>
>>> Chris
>
>

Re: Validate uniqueness field

Posted by Freddy Bowen <fr...@gmail.com>.
Yes, exactly.  In my app that's desirable.  I missed faust451's specific
use-case.

FB

On Tue, Mar 30, 2010 at 11:18 AM, Markus Jelsma <ma...@buyways.nl> wrote:

> but that would not ensure uniqueness in some scenario's:
>
> sha1(email_X + user_X) != sha1(email_X + user_Y)
>
> using, separate documents is currently the only feasible method to ensure
> this
> kind of uniqueness.

Re: Validate uniqueness field

Posted by Markus Jelsma <ma...@buyways.nl>.
but that would not ensure uniqueness in some scenarios:

sha1(email_X + user_X) != sha1(email_X + user_Y)

using separate documents is currently the only feasible method to ensure this
kind of uniqueness.



On Tuesday 30 March 2010 17:14:27 Freddy Bowen wrote:
> I use a canonical sha1 hash of a JSON object within the doc as the _id to
>  ensure uniqueness in my app.
> 
> FB
> 
> 
> On Tue, Mar 30, 2010 at 11:05 AM, Markus Jelsma <ma...@buyways.nl> wrote:
> Document ID's _must_ be a simple string. It would be nice though, to have
> complex ID's just as we can have complex keys for our views.
> 
> On Tuesday 30 March 2010 16:49:06 Andrew Melo wrote:
> > On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
> > > Yes its only one way.
> > > Why couch don't implement uniqueness in simple way?
> >
> > It does implement it in a simple way. You get a unique field. Couch
> > makes sure that only one document has that field at a same time.
> >
> > Actually, I was just thinking, and someone else can correct me if I'm
> > wrong, but you may be able to do ['email','username'] as the _id.
> >
> > -Melo
> >
> > > 2010/3/30 Andrew Melo <an...@gmail.com>:
> > >> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
> > >>> It's very frightful for me implement uniqueness in this way
> > >>> create doc for each uniq field and keep it in actual state when i
> > >>> update docs.
> > >>>
> > >>> may be better check uniqueness only in application layer?
> > >>> now i don't think about distribute.
> > >>
> > >> If you don't do it the way Jan suggested, you may end up with a race
> > >> condition, even if you don't distribute the database (i.e. if two
> > >> people register (nearly) simultaneously with the same username/email).
> > >> You would then have to support backing out/correcting any errors
> > >> manually in your application level code. If you're comfortable with
> > >> that, feel free, but Jan's way is much cleaner, in the long run.
> > >>
> > >> best,
> > >> Andrew
> > >>
> > >>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
> > >>>> You need to have two documents with a unique ID each.
> > >>>>
> > >>>> Cheers
> > >>>> Jan
> > >>>> --
> > >>>>
> > >>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
> > >>>>> but what if i have two unique fields
> > >>>>>  login
> > >>>>>  email
> > >>>>>
> > >>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
> > >>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
> > >>>>>>> Hi
> > >>>>>>>
> > >>>>>>> In what way i cat implement validation of uniqueness?
> > >>>>>>> User
> > >>>>>>>  email: unique
> > >>>>>>>  login:  unique
> > >>>>>>
> > >>>>>> You can only have 1 unique field per database. you implement it by
> > >>>>>> using it as a docid, like
> > >>>>>>
> > >>>>>> {
> > >>>>>> "_id" : "user:unique",
> > >>>>>>   ...
> > >>>>>> }
> > >>>>>>
> > >>>>>> Chris
> > >>
> > >> --
> > >> --
> > >> Andrew Melo
> 
> Markus Jelsma - Technisch Architect - Buyways BV
> http://www.linkedin.com/in/markus17
> 050-8536620 / 06-50258350
> 

Markus Jelsma - Technisch Architect - Buyways BV
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Validate uniqueness field

Posted by Markus Jelsma <ma...@buyways.nl>.
Perhaps I sent the e-mail a bit too fast, because issues would surface if Couch
allowed complex IDs.

For example, would the entire value of the key be considered to be unique, or 
would the individual elements? And what about the order of the values inside 
the key?


["user_1", "email_1"]  vs.   ["email_1", "user_1"]

or

["user_1", "email_2"]  vs.   ["email_2", "user_1"]


The problem is that the answer to this question, I believe, depends on
your use case. Also, the current source might not be that easily patched for
such a feature, if it would be a good idea at all to support such a feature.




On Tuesday 30 March 2010 17:05:15 Markus Jelsma wrote:
> Document ID's _must_ be a simple string. It would be nice though, to have
> complex ID's just as we can have complex keys for our views.
> 
> On Tuesday 30 March 2010 16:49:06 Andrew Melo wrote:
> > On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
> > > Yes its only one way.
> > > Why couch don't implement uniqueness in simple way?
> >
> > It does implement it in a simple way. You get a unique field. Couch
> > makes sure that only one document has that field at a same time.
> >
> > Actually, I was just thinking, and someone else can correct me if I'm
> > wrong, but you may be able to do ['email','username'] as the _id.
> >
> > -Melo
> >
> > > 2010/3/30 Andrew Melo <an...@gmail.com>:
> > >> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
> > >>> It's very frightful for me implement uniqueness in this way
> > >>> create doc for each uniq field and keep it in actual state when i
> > >>> update docs.
> > >>>
> > >>> may be better check uniqueness only in application layer?
> > >>> now i don't think about distribute.
> > >>
> > >> If you don't do it the way Jan suggested, you may end up with a race
> > >> condition, even if you don't distribute the database (i.e. if two
> > >> people register (nearly) simultaneously with the same username/email).
> > >> You would then have to support backing out/correcting any errors
> > >> manually in your application level code. If you're comfortable with
> > >> that, feel free, but Jan's way is much cleaner, in the long run.
> > >>
> > >> best,
> > >> Andrew
> > >>
> > >>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
> > >>>> You need to have two documents with a unique ID each.
> > >>>>
> > >>>> Cheers
> > >>>> Jan
> > >>>> --
> > >>>>
> > >>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
> > >>>>> but what if i have two unique fields
> > >>>>>  login
> > >>>>>  email
> > >>>>>
> > >>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
> > >>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
> > >>>>>>> Hi
> > >>>>>>>
> > >>>>>>> In what way i cat implement validation of uniqueness?
> > >>>>>>> User
> > >>>>>>>  email: unique
> > >>>>>>>  login:  unique
> > >>>>>>
> > >>>>>> You can only have 1 unique field per database. you implement it by
> > >>>>>> using it as a docid, like
> > >>>>>>
> > >>>>>> {
> > >>>>>> "_id" : "user:unique",
> > >>>>>>   ...
> > >>>>>> }
> > >>>>>>
> > >>>>>> Chris
> > >>
> > >> --
> > >> --
> > >> Andrew Melo
> 
> Markus Jelsma - Technisch Architect - Buyways BV
> http://www.linkedin.com/in/markus17
> 050-8536620 / 06-50258350
> 

Markus Jelsma - Technisch Architect - Buyways BV
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Validate uniqueness field

Posted by Freddy Bowen <fr...@gmail.com>.
I use a canonical sha1 hash of a JSON object within the doc as the _id to
ensure uniqueness in my app.
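
For illustration, one way such a canonical hash could be built (a sketch; which fields go into the hashed object is up to the application):

# Hypothetical canonical sha1 _id for a doc.
import hashlib
import json

def canonical_id(obj):
    # sort_keys + fixed separators make the JSON serialization canonical,
    # so the same object always hashes to the same _id
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(blob.encode("utf-8")).hexdigest()

doc_key = {"login": "john", "email": "doe@example.net"}
doc = {"_id": canonical_id(doc_key), **doc_key, "type": "user"}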

FB


On Tue, Mar 30, 2010 at 11:05 AM, Markus Jelsma <ma...@buyways.nl> wrote:

> Document ID's _must_ be a simple string. It would be nice though, to have
> complex ID's just as we can have complex keys for our views.
>
>
> On Tuesday 30 March 2010 16:49:06 Andrew Melo wrote:
> > On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
> > > Yes its only one way.
> > > Why couch don't implement uniqueness in simple way?
> >
> > It does implement it in a simple way. You get a unique field. Couch
> > makes sure that only one document has that field at a same time.
> >
> > Actually, I was just thinking, and someone else can correct me if I'm
> > wrong, but you may be able to do ['email','username'] as the _id.
> >
> > -Melo
> >
> > > 2010/3/30 Andrew Melo <an...@gmail.com>:
> > >> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com>
> wrote:
> > >>> It's very frightful for me implement uniqueness in this way
> > >>> create doc for each uniq field and keep it in actual state when i
> > >>> update docs.
> > >>>
> > >>> may be better check uniqueness only in application layer?
> > >>> now i don't think about distribute.
> > >>
> > >> If you don't do it the way Jan suggested, you may end up with a race
> > >> condition, even if you don't distribute the database (i.e. if two
> > >> people register (nearly) simultaneously with the same username/email).
> > >> You would then have to support backing out/correcting any errors
> > >> manually in your application level code. If you're comfortable with
> > >> that, feel free, but Jan's way is much cleaner, in the long run.
> > >>
> > >> best,
> > >> Andrew
> > >>
> > >>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
> > >>>> You need to have two documents with a unique ID each.
> > >>>>
> > >>>> Cheers
> > >>>> Jan
> > >>>> --
> > >>>>
> > >>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
> > >>>>> but what if i have two unique fields
> > >>>>>  login
> > >>>>>  email
> > >>>>>
> > >>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
> > >>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
> > >>>>>>> Hi
> > >>>>>>>
> > >>>>>>> In what way i cat implement validation of uniqueness?
> > >>>>>>> User
> > >>>>>>>  email: unique
> > >>>>>>>  login:  unique
> > >>>>>>
> > >>>>>> You can only have 1 unique field per database. you implement it by
> > >>>>>> using it as a docid, like
> > >>>>>>
> > >>>>>> {
> > >>>>>> "_id" : "user:unique",
> > >>>>>>   ...
> > >>>>>> }
> > >>>>>>
> > >>>>>> Chris
> > >>
> > >> --
> > >> --
> > >> Andrew Melo
> >
>
> Markus Jelsma - Technisch Architect - Buyways BV
> http://www.linkedin.com/in/markus17
> 050-8536620 / 06-50258350
>
>

Re: Validate uniqueness field

Posted by Markus Jelsma <ma...@buyways.nl>.
Document IDs _must_ be a simple string. It would be nice, though, to have
complex IDs just as we can have complex keys for our views.


On Tuesday 30 March 2010 16:49:06 Andrew Melo wrote:
> On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
> > Yes its only one way.
> > Why couch don't implement uniqueness in simple way?
> 
> It does implement it in a simple way. You get a unique field. Couch
> makes sure that only one document has that field at a same time.
> 
> Actually, I was just thinking, and someone else can correct me if I'm
> wrong, but you may be able to do ['email','username'] as the _id.
> 
> -Melo
> 
> > 2010/3/30 Andrew Melo <an...@gmail.com>:
> >> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
> >>> It's very frightful for me implement uniqueness in this way
> >>> create doc for each uniq field and keep it in actual state when i
> >>> update docs.
> >>>
> >>> may be better check uniqueness only in application layer?
> >>> now i don't think about distribute.
> >>
> >> If you don't do it the way Jan suggested, you may end up with a race
> >> condition, even if you don't distribute the database (i.e. if two
> >> people register (nearly) simultaneously with the same username/email).
> >> You would then have to support backing out/correcting any errors
> >> manually in your application level code. If you're comfortable with
> >> that, feel free, but Jan's way is much cleaner, in the long run.
> >>
> >> best,
> >> Andrew
> >>
> >>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
> >>>> You need to have two documents with a unique ID each.
> >>>>
> >>>> Cheers
> >>>> Jan
> >>>> --
> >>>>
> >>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
> >>>>> but what if i have two unique fields
> >>>>>  login
> >>>>>  email
> >>>>>
> >>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
> >>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> In what way i cat implement validation of uniqueness?
> >>>>>>> User
> >>>>>>>  email: unique
> >>>>>>>  login:  unique
> >>>>>>
> >>>>>> You can only have 1 unique field per database. you implement it by
> >>>>>> using it as a docid, like
> >>>>>>
> >>>>>> {
> >>>>>> "_id" : "user:unique",
> >>>>>>   ...
> >>>>>> }
> >>>>>>
> >>>>>> Chris
> >>
> >> --
> >> --
> >> Andrew Melo
> 

Markus Jelsma - Technisch Architect - Buyways BV
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Validate uniqueness field

Posted by Alexander Uvarov <al...@gmail.com>.
On 30.03.2010, at 20:49, Andrew Melo wrote:

> On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
>> Yes its only one way.
>> Why couch don't implement uniqueness in simple way?
> 
> It does implement it in a simple way. You get a unique field. Couch
> makes sure that only one document has that field at a same time.
> 
> Actually, I was just thinking, and someone else can correct me if I'm
> wrong, but you may be able to do ['email','username'] as the _id.
> 
> -Melo
> 

How would composite keys make login and email unique?
["john", "doe@example.net"]
["bob", "doe@example.net"]
Both keys are different, but the emails are not unique.

And how would you update the value when the email or login changes? Just impossible.

Re: Validate uniqueness field

Posted by Simon Metson <si...@googlemail.com>.
Hi,

> Actually, I was just thinking, and someone else can correct me if I'm
> wrong, but you may be able to do ['email','username'] as the _id.

Even if you can't, you can hash/concatenate the two to make a unique
string. You could then store the two fields in the doc (for easy
access in views) and have a validation function to stop people editing
them. We do something similar (though the ID is made of run, lumi and
md5(dataset), and we're yet to add the validation function); it works
well for our problem.
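
A sketch of the kind of validation function mentioned here, as a design doc whose validate_doc_update rejects changes to login or email (names are placeholders; the function body is JavaScript stored as a string):

# Hypothetical design doc making login/email immutable after creation.
import requests

validation = """
function(newDoc, oldDoc, userCtx) {
  if (oldDoc && (newDoc.login !== oldDoc.login || newDoc.email !== oldDoc.email)) {
    throw({forbidden: 'login and email are immutable'});
  }
}
"""
design_doc = {"_id": "_design/immutable", "validate_doc_update": validation}
requests.put("http://localhost:5984/users/_design/immutable", json=design_doc)
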
Cheers
Simon

Re: Validate uniqueness field

Posted by Andrew Melo <an...@gmail.com>.
On Tue, Mar 30, 2010 at 9:44 AM, faust 1111 <fa...@gmail.com> wrote:
> Yes its only one way.
> Why couch don't implement uniqueness in simple way?

It does implement it in a simple way. You get one unique field, the _id. Couch
makes sure that only one document has a given value of that field at a time.

Actually, I was just thinking, and someone else can correct me if I'm
wrong, but you may be able to do ['email','username'] as the _id.

-Melo

> 2010/3/30 Andrew Melo <an...@gmail.com>:
>> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
>>> It's very frightful for me implement uniqueness in this way
>>> create doc for each uniq field and keep it in actual state when i update docs.
>>>
>>> may be better check uniqueness only in application layer?
>>> now i don't think about distribute.
>>
>> If you don't do it the way Jan suggested, you may end up with a race
>> condition, even if you don't distribute the database (i.e. if two
>> people register (nearly) simultaneously with the same username/email).
>> You would then have to support backing out/correcting any errors
>> manually in your application level code. If you're comfortable with
>> that, feel free, but Jan's way is much cleaner, in the long run.
>>
>> best,
>> Andrew
>>
>>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>>>> You need to have two documents with a unique ID each.
>>>>
>>>> Cheers
>>>> Jan
>>>> --
>>>>
>>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>>>
>>>>> but what if i have two unique fields
>>>>>  login
>>>>>  email
>>>>>
>>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>>>
>>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> In what way i cat implement validation of uniqueness?
>>>>>>> User
>>>>>>>  email: unique
>>>>>>>  login:  unique
>>>>>>>
>>>>>>
>>>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>>>
>>>>>> {
>>>>>> "_id" : "user:unique",
>>>>>>   ...
>>>>>> }
>>>>>>
>>>>>> Chris
>>>>
>>>>
>>>
>>
>>
>>
>> --
>> --
>> Andrew Melo
>>
>



-- 
--
Andrew Melo

Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
Yes, it's the only way.
Why doesn't Couch implement uniqueness in a simple way?

2010/3/30 Andrew Melo <an...@gmail.com>:
> On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
>> It's very frightful for me implement uniqueness in this way
>> create doc for each uniq field and keep it in actual state when i update docs.
>>
>> may be better check uniqueness only in application layer?
>> now i don't think about distribute.
>
> If you don't do it the way Jan suggested, you may end up with a race
> condition, even if you don't distribute the database (i.e. if two
> people register (nearly) simultaneously with the same username/email).
> You would then have to support backing out/correcting any errors
> manually in your application level code. If you're comfortable with
> that, feel free, but Jan's way is much cleaner, in the long run.
>
> best,
> Andrew
>
>> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>>> You need to have two documents with a unique ID each.
>>>
>>> Cheers
>>> Jan
>>> --
>>>
>>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>>
>>>> but what if i have two unique fields
>>>>  login
>>>>  email
>>>>
>>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>>
>>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> In what way i cat implement validation of uniqueness?
>>>>>> User
>>>>>>  email: unique
>>>>>>  login:  unique
>>>>>>
>>>>>
>>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>>
>>>>> {
>>>>> "_id" : "user:unique",
>>>>>   ...
>>>>> }
>>>>>
>>>>> Chris
>>>
>>>
>>
>
>
>
> --
> --
> Andrew Melo
>

Re: Validate uniqueness field

Posted by Andrew Melo <an...@gmail.com>.
On Tue, Mar 30, 2010 at 9:13 AM, faust 1111 <fa...@gmail.com> wrote:
> It's very frightful for me implement uniqueness in this way
> create doc for each uniq field and keep it in actual state when i update docs.
>
> may be better check uniqueness only in application layer?
> now i don't think about distribute.

If you don't do it the way Jan suggested, you may end up with a race
condition, even if you don't distribute the database (i.e. if two
people register (nearly) simultaneously with the same username/email).
You would then have to support backing out/correcting any errors
manually in your application level code. If you're comfortable with
that, feel free, but Jan's way is much cleaner, in the long run.

best,
Andrew

> 2010/3/28 Jan Lehnardt <ja...@apache.org>:
>> You need to have two documents with a unique ID each.
>>
>> Cheers
>> Jan
>> --
>>
>> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>>
>>> but what if i have two unique fields
>>>  login
>>>  email
>>>
>>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>>
>>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> In what way i cat implement validation of uniqueness?
>>>>> User
>>>>>  email: unique
>>>>>  login:  unique
>>>>>
>>>>
>>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>>
>>>> {
>>>> "_id" : "user:unique",
>>>>   ...
>>>> }
>>>>
>>>> Chris
>>
>>
>



-- 
--
Andrew Melo

Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
It's very scary for me to implement uniqueness in this way:
creating a doc for each unique field and keeping it up to date whenever I update docs.

Maybe it's better to check uniqueness only in the application layer?
For now I'm not thinking about distribution.

2010/3/28 Jan Lehnardt <ja...@apache.org>:
> You need to have two documents with a unique ID each.
>
> Cheers
> Jan
> --
>
> On 27 Mar 2010, at 17:12, faust 1111 wrote:
>
>> but what if i have two unique fields
>>  login
>>  email
>>
>> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>>>
>>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>>>
>>>> Hi
>>>>
>>>> In what way i cat implement validation of uniqueness?
>>>> User
>>>>  email: unique
>>>>  login:  unique
>>>>
>>>
>>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>>>
>>> {
>>> "_id" : "user:unique",
>>>   ...
>>> }
>>>
>>> Chris
>
>

Re: Validate uniqueness field

Posted by Jan Lehnardt <ja...@apache.org>.
You need to have two documents with a unique ID each.

Cheers
Jan
--

On 27 Mar 2010, at 17:12, faust 1111 wrote:

> but what if i have two unique fields
>  login
>  email
> 
> 2010/3/28 J Chris Anderson <jc...@gmail.com>:
>> 
>> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>> 
>>> Hi
>>> 
>>> In what way i cat implement validation of uniqueness?
>>> User
>>>  email: unique
>>>  login:  unique
>>> 
>> 
>> You can only have 1 unique field per database. you implement it by using it as a docid, like
>> 
>> {
>> "_id" : "user:unique",
>>   ...
>> }
>> 
>> Chris


Re: Validate uniqueness field

Posted by faust 1111 <fa...@gmail.com>.
But what if I have two unique fields?
  login
  email

2010/3/28 J Chris Anderson <jc...@gmail.com>:
>
> On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:
>
>> Hi
>>
>> In what way i cat implement validation of uniqueness?
>> User
>>  email: unique
>>  login:  unique
>>
>
> You can only have 1 unique field per database. you implement it by using it as a docid, like
>
> {
> "_id" : "user:unique",
>   ...
> }
>
> Chris

Re: Validate uniqueness field

Posted by J Chris Anderson <jc...@gmail.com>.
On Mar 27, 2010, at 4:56 PM, faust 1111 wrote:

> Hi
> 
> In what way i cat implement validation of uniqueness?
> User
>  email: unique
>  login:  unique
> 

You can only have one unique field per database. You implement it by using it as the docid, like

{
"_id" : "user:unique",
   ...
}

Chris
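
A minimal sketch of that pattern over the HTTP API, assuming Python's requests library; the database name and id scheme are placeholders:

# Hypothetical docid-based uniqueness check: a second PUT to the same _id
# (without a _rev) fails with 409 Conflict, which is the uniqueness check.
import requests

COUCH = "http://localhost:5984"

resp = requests.put(f"{COUCH}/users/user:john@example.net",
                    json={"type": "user", "login": "john"})
print(resp.status_code)   # 201 the first time

resp = requests.put(f"{COUCH}/users/user:john@example.net",
                    json={"type": "user", "login": "someone-else"})
print(resp.status_code)   # 409 - that id is already taken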