Posted to dev@ignite.apache.org by liyuj <18...@163.com> on 2019/09/18 14:46:59 UTC

Re: [SparkDataFrame] Query Optimization. Prototype

Hi,

Can anyone explain how difficult it is to implement ROWNUM?

This is a very common requirement.

On 2018/1/23 at 4:05 PM, Serge Puchnin wrote:
> yes, the Cast function is supported by both Ignite and H2.
>
> I've updated the documentation for the following system functions:
> CASEWHEN, CAST, CONVERT, TABLE
>
> https://apacheignite-sql.readme.io/docs/system-functions
>
> And to my mind, the following functions aren't applicable to Ignite:
> ARRAY_GET, ARRAY_LENGTH, ARRAY_CONTAINS, CSVREAD, CSVWRITE, DATABASE,
> DATABASE_PATH, DISK_SPACE_USED, FILE_READ, FILE_WRITE, LINK_SCHEMA,
> MEMORY_FREE, MEMORY_USED, LOCK_MODE, LOCK_TIMEOUT, READONLY, CURRVAL,
> AUTOCOMMIT, CANCEL_SESSION, IDENTITY, NEXTVAL, ROWNUM, SCHEMA,
> SCOPE_IDENTITY, SESSION_ID, SET, TRANSACTION_ID, TRUNCATE_VALUE, USER,
> H2VERSION
>
> Also, an issue was created to review the current documentation:
> https://issues.apache.org/jira/browse/IGNITE-7496
>
> --
> BR,
> Serge
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
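The CAST behaviour documented above is standard SQL. As a quick illustration of the semantics (string-to-integer parsing, truncation toward zero for floats), here is a minimal sketch; SQLite is used purely as a convenient stand-in engine, and Ignite's H2-based parser may differ in details:

```python
import sqlite3

# Demonstrate standard-SQL CAST semantics. SQLite stands in for the
# actual Ignite engine here; the SQL itself is portable.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("SELECT CAST('42' AS INTEGER), CAST(3.9 AS INTEGER)")
print(cur.fetchone())  # → (42, 3): string parsed, float truncated toward zero
```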


Re: [SparkDataFrame] Query Optimization. Prototype

Posted by Nikolay Izhikov <ni...@apache.org>.
Hello, liyuj

> I want to know why

I suppose we are talking about the Ignite SQL engine, not about the Spark integration module (as written in the message subject).

The simple answer: because it is not implemented.

You can file a ticket and we will see what we can do.
You can contribute the support of ROWNUM if you want.
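For anyone who needs the behaviour today: ROWNUM itself is not implemented, but the common top-N use case is covered by LIMIT/OFFSET and the standard ROW_NUMBER() window function. A minimal sketch follows, using SQLite only to demonstrate the SQL; whether ROW_NUMBER() is available in a given Ignite version is an assumption to verify:

```python
import sqlite3

# Emulate Oracle-style "WHERE ROWNUM <= 2" over an ordered result
# using ROW_NUMBER() and LIMIT. SQLite (3.25+) stands in for the
# actual engine; the SQL shown is standard.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE person(name TEXT, age INTEGER)")
cur.executemany("INSERT INTO person VALUES (?, ?)",
                [("alice", 30), ("bob", 25), ("carol", 35)])

cur.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY age) AS rn, name
    FROM person
    ORDER BY age
    LIMIT 2
""")
print(cur.fetchall())  # → [(1, 'bob'), (2, 'alice')]
```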


On Thu, 19/09/2019 at 08:20 +0800, liyuj wrote:
> Hi Nikolay,
> 
> Because in the discussion below there is a list stating that ROWNUM 
> is not applicable to Ignite, and I want to know why.

Re: [SparkDataFrame] Query Optimization. Prototype

Posted by liyuj <18...@163.com>.
Hi Nikolay,

Because in the discussion below there is a list stating that ROWNUM 
is not applicable to Ignite, and I want to know why.

On 2019/9/18 at 11:14 PM, Nikolay Izhikov wrote:
> Hello, liyuj.
>
> Please, clarify.
>
> Do you want to contribute this to Ignite?
> What explanation do you expect?


Re: [SparkDataFrame] Query Optimization. Prototype

Posted by Nikolay Izhikov <ni...@apache.org>.
Hello, liyuj.

Please, clarify.

Do you want to contribute this to Ignite?
What explanation do you expect?

On Wed, 18/09/2019 at 22:46 +0800, liyuj wrote:
> Hi,
> 
> Can anyone explain how difficult it is to implement ROWNUM?
> 
> This is a very common requirement.