Posted to dev@calcite.apache.org by "yanjing.wang (Jira)" <ji...@apache.org> on 2021/01/12 02:24:00 UTC

[jira] [Created] (CALCITE-4463) Why doesn't the dialect.unparseOffsetFetch method apply to the SqlOrderBy tree node?

yanjing.wang created CALCITE-4463:
-------------------------------------

             Summary: Why doesn't the dialect.unparseOffsetFetch method apply to the SqlOrderBy tree node?
                 Key: CALCITE-4463
                 URL: https://issues.apache.org/jira/browse/CALCITE-4463
             Project: Calcite
          Issue Type: Bug
          Components: core
    Affects Versions: 1.26.0
         Environment: jvm: open-jdk8, calcite: 1.26.0
            Reporter: yanjing.wang
         Attachments: image-2021-01-12-10-06-29-813.png

In the SqlOrderBy$Operator class, the unparse method hard-codes the OFFSET and FETCH keywords, which closes the door on rendering my SQL in the LIMIT x OFFSET y style.

Why doesn't it invoke dialect.unparseOffsetFetch the way SqlSelectOperator does?

 

!image-2021-01-12-10-06-29-813.png!
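For context, SqlDialect exposes an unparseOffsetFetch hook, and SparkSqlDialect already overrides it to render LIMIT. A rough sketch of that override, from my reading of the 1.26 sources (the exact shape in the code base may differ slightly):
{code:java}
// Approximate shape of the override in SparkSqlDialect (Calcite 1.26):
// OFFSET/FETCH is rewritten to the LIMIT x OFFSET y form via a
// SqlDialect helper method.
@Override public void unparseOffsetFetch(SqlWriter writer,
    SqlNode offset, SqlNode fetch) {
  unparseFetchUsingLimit(writer, offset, fetch);
}
{code}
So the dialect side is ready; the problem is that SqlOrderBy's unparse never calls it.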

 

My unparsing code:
{code:java}
import org.apache.calcite.avatica.util.Casing;
import org.apache.calcite.config.NullCollation;
import org.apache.calcite.sql.SqlDialect;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.dialect.SparkSqlDialect;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.parser.babel.SqlBabelParserImpl;
import org.apache.calcite.sql.util.SqlString;
import org.apache.calcite.sql.validate.SqlConformanceEnum;

String sql = "select concat(a.id,'-',b.id) , a.name from xxx.bb limit 5";

// Target dialect: Spark, with backtick identifiers and LOW null collation.
SqlDialect SPARK = new SparkSqlDialect(SqlDialect.EMPTY_CONTEXT
        .withDatabaseProduct(SqlDialect.DatabaseProduct.SPARK)
        .withIdentifierQuoteString("`").withNullCollation(NullCollation.LOW)
        .withLiteralQuoteString("'").withLiteralEscapedQuoteString("''")
        .withUnquotedCasing(Casing.UNCHANGED).withQuotedCasing(Casing.UNCHANGED));

// Parse with the Babel parser in lenient mode so the LIMIT clause is accepted.
SqlParser.Config parserConfig =
        SqlParser.config()
                .withParserFactory(SqlBabelParserImpl.FACTORY)
                .withConformance(SqlConformanceEnum.LENIENT);

SqlParser sqlParser = SqlParser.create(sql, parserConfig);
try {
    SqlNode sqlNode = sqlParser.parseQuery();
    SqlString sqlString = sqlNode.toSqlString(SPARK);
    System.out.println(sqlString);
} catch (SqlParseException e) {
    e.printStackTrace();
}{code}
Result:
{code:java}
SELECT `CONCAT`(`A`.`ID`, '-', `B`.`ID`), `A`.`NAME` FROM `XXX`.`BB` FETCH NEXT 5 ROWS ONLY
{code}
What should I do if I want to transform SQL from some other dialect to Spark, given that Spark doesn't support "FETCH NEXT 5000 ROWS ONLY"?
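One workaround I can think of (only a sketch under my assumptions, not a confirmed fix): when the parser wraps the top-level LIMIT in a SqlOrderBy, move the ORDER BY / OFFSET / FETCH down onto the inner SqlSelect before unparsing, since SqlSelectOperator does delegate to dialect.unparseOffsetFetch:
{code:java}
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlOrderBy;
import org.apache.calcite.sql.SqlSelect;

// Sketch of a possible workaround: unwrap the top-level SqlOrderBy and push
// its clauses onto the inner SqlSelect, whose unparse goes through
// dialect.unparseOffsetFetch (and hence SparkSqlDialect's LIMIT rewrite).
SqlNode sqlNode = sqlParser.parseQuery();
if (sqlNode instanceof SqlOrderBy) {
    SqlOrderBy orderBy = (SqlOrderBy) sqlNode;
    if (orderBy.query instanceof SqlSelect) {
        SqlSelect select = (SqlSelect) orderBy.query;
        select.setOrderBy(orderBy.orderList);
        select.setOffset(orderBy.offset);
        select.setFetch(orderBy.fetch);
        sqlNode = select;   // unparse the SELECT directly
    }
}
System.out.println(sqlNode.toSqlString(SPARK));
// With the Spark dialect above, this should print "... LIMIT 5" instead of
// "... FETCH NEXT 5 ROWS ONLY".
{code}
Is that the intended approach, or should SqlOrderBy$Operator.unparse itself delegate to the dialect?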

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)