Posted to dev@nifi.apache.org by Mike Thomsen <mi...@gmail.com> on 2017/08/04 15:56:30 UTC

Interested in adding two new features to GetMongo

1. Add the ability to run aggregations to GetMongo.
2. Add the ability to get the query from a flowfile.

I know you can't do #2 right now, but I was wondering if there's an
existing way to invoke the aggregation pipeline that I'm missing before I
dive into that.
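
For reference, this is roughly how the aggregation pipeline gets invoked with the
plain MongoDB Java driver. It's only a sketch; the connection details, collection
name, and field names are placeholders, not anything from GetMongo itself:

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;

import org.bson.Document;

import java.util.Arrays;
import java.util.List;

public class AggregationSketch {
    public static void main(String[] args) {
        // Placeholder connection details, purely for illustration.
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoCollection<Document> collection =
                    client.getDatabase("exampleDb").getCollection("exampleCollection");

            // A simple two-stage pipeline: filter documents, then group and count.
            List<Document> pipeline = Arrays.asList(
                    new Document("$match", new Document("status", "active")),
                    new Document("$group", new Document("_id", "$userId")
                            .append("count", new Document("$sum", 1))));

            // aggregate() runs the pipeline server-side and returns an iterable of results.
            for (Document result : collection.aggregate(pipeline)) {
                System.out.println(result.toJson());
            }
        } finally {
            client.close();
        }
    }
}

As far as I can tell GetMongo only issues find() queries today, so if there's no
hidden way to reach aggregate() I'll go ahead and add it.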

Thanks,

Mike

Re: Interested in adding two new features to GetMongo

Posted by Mike Thomsen <mi...@gmail.com>.
We've done a lot with Solr and ElasticSearch, so that was the impetus
behind that change. It seemed like a real waste to have to throw another
processor in there just to turn the date fields (and a few other types in
some cases) into something sane that those processors could handle.

On Fri, Aug 11, 2017 at 8:20 AM, Mike Thomsen <mi...@gmail.com>
wrote:

> I recently added this new feature based on some needs that came up with a
> client:
>
> https://github.com/apache/nifi/pull/2063
>
> It uses Jackson to serialize the Mongo document instead of the Mongo
> Document class's toJson(), which produces "extended JSON." From what I could
> tell with our client's data and other testing, it works just fine. The
> difference is mainly like this:
>
> Default:
>
> {
>     "someTimeStamp": {
>         "$date": 123456....
>     }
> }
>
> If you select "clean JSON":
>
> {
>     "someTimeStamp": "2017-08-11T08:18:15Z"
> }
>
> It should do the same for doubles and longs, which have their own weird
> representation that makes cleanup necessary before any other JSON-based
> processor can use the data.
>
> On Thu, Aug 10, 2017 at 10:01 PM, Joe Witt <jo...@gmail.com> wrote:
>
>> Team,
>>
>> Is there anyone else familiar with Mongo that could discuss this with
>> Mike?
>>
>> Thanks
>> Joe
>>
>> On Fri, Aug 4, 2017 at 8:56 AM, Mike Thomsen <mi...@gmail.com>
>> wrote:
>> > 1. Add the ability to run aggregations to GetMongo.
>> > 2. Add the ability to get the query from a flowfile.
>> >
>> > I know you can't do #2 right now, but was wondering if there was an
>> > existing way to invoke the aggregation pipeline that I'm missing before
>> I
>> > dive into that.
>> >
>> > Thanks,
>> >
>> > Mike
>>
>
>

Re: Interested in adding two new features to GetMongo

Posted by Mike Thomsen <mi...@gmail.com>.
I recently added this new feature based on some needs that came up with a
client:

https://github.com/apache/nifi/pull/2063

It uses Jackson to serialize the Mongo document instead of the Mongo
Document class's toJson(), which produces "extended JSON." From what I could
tell with our client's data and other testing, it works just fine. The
difference is mainly like this:

Default:

{
    "someTimeStamp": {
        "$date": 123456....
    }
}

If you select "clean JSON":

{
    "someTimeStamp": "2017-08-11T08:18:15Z"
}

It should do the same for doubles and longs, which have their own weird
representation that makes cleanup necessary before any other JSON-based
processor can use the data.
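
To make that concrete, here's a rough standalone sketch of the idea. It's not the
exact code from the PR; the field name, date format, and class name are just for
illustration:

import com.fasterxml.jackson.databind.ObjectMapper;

import org.bson.Document;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class CleanJsonSketch {
    public static void main(String[] args) throws Exception {
        Document doc = new Document("someTimeStamp", new Date());

        // Default behavior: the driver's toJson() emits MongoDB extended JSON,
        // e.g. {"someTimeStamp": {"$date": 1502439495000}}.
        System.out.println(doc.toJson());

        // "Clean JSON": serialize the same Document with Jackson so dates
        // (and plain longs/doubles) come out as ordinary JSON values.
        SimpleDateFormat iso8601 = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        iso8601.setTimeZone(TimeZone.getTimeZone("UTC"));

        ObjectMapper mapper = new ObjectMapper();
        mapper.setDateFormat(iso8601);

        // Document is just a Map<String, Object>, so Jackson walks it directly,
        // producing something like {"someTimeStamp": "2017-08-11T08:18:15Z"}.
        System.out.println(mapper.writeValueAsString(doc));
    }
}

The point is that Jackson treats the Document like any other Map, so downstream
JSON-based processors see plain values instead of the extended JSON wrappers.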

On Thu, Aug 10, 2017 at 10:01 PM, Joe Witt <jo...@gmail.com> wrote:

> Team,
>
> Is there anyone else familiar with Mongo that could discuss this with Mike?
>
> Thanks
> Joe
>
> On Fri, Aug 4, 2017 at 8:56 AM, Mike Thomsen <mi...@gmail.com>
> wrote:
> > 1. Add the ability to run aggregations to GetMongo.
> > 2. Add the ability to get the query from a flowfile.
> >
> > I know you can't do #2 right now, but was wondering if there was an
> > existing way to invoke the aggregation pipeline that I'm missing before I
> > dive into that.
> >
> > Thanks,
> >
> > Mike
>

Re: Interested in adding two new features to GetMongo

Posted by Joe Witt <jo...@gmail.com>.
Team,

Is there anyone else familiar with Mongo that could discuss this with Mike?

Thanks
Joe

On Fri, Aug 4, 2017 at 8:56 AM, Mike Thomsen <mi...@gmail.com> wrote:
> 1. Add the ability to run aggregations to GetMongo.
> 2. Add the ability to get the query from a flowfile.
>
> I know you can't do #2 right now, but was wondering if there was an
> existing way to invoke the aggregation pipeline that I'm missing before I
> dive into that.
>
> Thanks,
>
> Mike