Posted to reviews@spark.apache.org by tigerquoll <gi...@git.apache.org> on 2018/09/04 22:55:20 UTC

[GitHub] spark issue #21308: [SPARK-24253][SQL] Add DeleteSupport mix-in for DataSour...

Github user tigerquoll commented on the issue:

    https://github.com/apache/spark/pull/21308
  
    I am assuming this API was intended to support the "drop partition" use case.  I'm arguing that adding and deleting partitions deal with a concept slightly higher-level than just a bunch of records that match a filter.  Backing this up is the fact that partitions are defined independently of any records they may or may not contain: you can add an empty partition and the underlying state of the system will still change.
    
    Also, as an end user I would be very upset if I meant to drop a partition but, because of a transcription error, accidentally kicked off a delete with a filter that didn't exactly match a partition definition and took a million times as long to execute.
    
    Partitions are an implementation optimisation that has leaked into higher-level APIs because it is an extremely useful and performant optimisation.  I am wondering whether we should represent partitions in this API as something slightly higher-level than just a filter definition.
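    To sketch the distinction I mean (this is a hypothetical toy model, not the actual DataSourceV2 interfaces from this PR — all names here are made up for illustration): a partition is tracked as metadata independent of its rows, so a partition-level drop changes state even when no records match, whereas a filter-based delete only ever touches records.

    ```scala
    // Hypothetical sketch: partitions as metadata vs. delete-by-filter on records.
    object PartitionVsFilterDemo {
      import scala.collection.mutable

      final case class Row(part: String, value: Int)

      class ToyTable {
        val partitions = mutable.Set.empty[String] // partition metadata
        val rows = mutable.Buffer.empty[Row]       // record data

        def addPartition(p: String): Unit = partitions += p

        // Record-level delete: removes rows matching a predicate,
        // but never changes partition metadata.
        def deleteWhere(pred: Row => Boolean): Unit =
          rows --= rows.filter(pred)

        // Partition-level drop: a metadata operation that also
        // discards any rows the partition happens to contain.
        def dropPartition(p: String): Unit = {
          partitions -= p
          rows --= rows.filter(_.part == p)
        }
      }

      def main(args: Array[String]): Unit = {
        val t = new ToyTable
        t.addPartition("2018-09-04")            // adding an EMPTY partition changes state
        assert(t.partitions.contains("2018-09-04"))

        t.deleteWhere(_.part == "2018-09-04")   // deletes zero rows; metadata untouched
        assert(t.partitions.contains("2018-09-04"))

        t.dropPartition("2018-09-04")           // only the partition-level op removes it
        assert(!t.partitions.contains("2018-09-04"))
        println("ok")
      }
    }
    ```

    The point of the toy model: `deleteWhere` on an empty partition is a no-op on state, while `dropPartition` is not, which is why I'd hesitate to express partition drops purely as filters.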


---
