Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/29 08:46:21 UTC

[GitHub] [spark] beliefer opened a new pull request, #37704: [WIP][Do not merge] Support Decimal128Type with Int128

beliefer opened a new pull request, #37704:
URL: https://github.com/apache/spark/pull/37704

   ### What changes were proposed in this pull request?
   Extend Catalyst's type system by a new decimal type: Decimal128Type with Int128.
   
   
   ### Why are the changes needed?
   Spark SQL today supports the Decimal data type. The implementation of Spark Decimal holds either a BigDecimal or a Long value. Spark Decimal provides operators such as +, -, *, /, and %. These operators rely heavily on the computational power of BigDecimal or Long itself. For ease of understanding, take + as an example; its implementation is shown below.
   ```scala
     def + (that: Decimal): Decimal = {
       if (decimalVal.eq(null) && that.decimalVal.eq(null) && scale == that.scale) {
         Decimal(longVal + that.longVal, Math.max(precision, that.precision) + 1, scale)
       } else {
         Decimal(toBigDecimal.bigDecimal.add(that.toBigDecimal.bigDecimal))
       }
     }
   ```
   We can see that the + of Long is called if Spark Decimal holds a Long value; otherwise, the add of BigDecimal is called if it holds a BigDecimal value. The other operators of Spark Decimal work in a similar way. Furthermore, the code shown above calls Decimal.apply to construct a new Spark Decimal instance. As we know, the add operator of BigDecimal constructs a new BigDecimal instance. So, when we call the + operator on a Spark Decimal that holds a Long value, Spark constructs one new Decimal instance; otherwise, Spark constructs both a new BigDecimal instance and a new Decimal instance.
   Through rough analysis, we know:
   1. The computational power of Spark Decimal depends largely on BigDecimal.
   2. The calculation operators of Spark Decimal create many new Decimal instances and may create many new BigDecimal instances.
   If a large table has a column 'colA of type Decimal, executing SUM('colA) will create a large number of Spark Decimal instances and BigDecimal instances, which leads to frequent garbage collection.
   Int128 is a high-performance data type, roughly 1x to 6x more efficient than BigDecimal for typical operations. It uses finite (128-bit) precision and can handle values up to decimal(38, x). Int128 is implemented with just two Long values, representing the high and low 64 bits of a 128-bit integer respectively. It is lighter-weight than BigDecimal and reduces the cost of allocation and garbage collection.
   If Spark supported a new type that represents decimals with Int128, the calculation operators would perform better.
   We call this new decimal type Decimal128.
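   To make the two-Long representation concrete, here is a minimal sketch of 128-bit addition over (high, low) halves. The names and API are illustrative assumptions for this description only, not the actual Int128 implementation proposed in this PR:
   ```scala
   // Hypothetical sketch: a 128-bit integer as two Longs (high and low 64 bits).
   // Names are illustrative; they are not the Int128 API from this PR.
   final case class Int128(high: Long, low: Long) {
     def +(that: Int128): Int128 = {
       // Wrapping addition of the low halves.
       val newLow = low + that.low
       // A carry out of the low half occurred iff the unsigned sum wrapped,
       // i.e. the unsigned result is smaller than one of the unsigned inputs.
       val carry = if (java.lang.Long.compareUnsigned(newLow, low) < 0) 1L else 0L
       Int128(high + that.high + carry, newLow)
     }
   }
   ```
   Note that no heap allocation beyond the small result object is needed, which is the point of the comparison with BigDecimal above.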
   
   
   ### Does this PR introduce _any_ user-facing change?
   No. This PR adds a new decimal type backed by Int128; it is still in development.
   
   
   ### How was this patch tested?
   New test cases.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] github-actions[bot] commented on pull request #37704: [WIP][Do not merge] Support Decimal128Type with Int128

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on PR #37704:
URL: https://github.com/apache/spark/pull/37704#issuecomment-1345416445

   We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
   If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!



[GitHub] [spark] github-actions[bot] closed pull request #37704: [WIP][Do not merge] Support Decimal128Type with Int128

Posted by GitBox <gi...@apache.org>.
github-actions[bot] closed pull request #37704: [WIP][Do not merge] Support Decimal128Type with Int128
URL: https://github.com/apache/spark/pull/37704

