Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2021/08/12 14:08:28 UTC

[GitHub] [arrow] Mytherin commented on a change in pull request #10856: [RFC] Arrow Compute Serialized Intermediate Representation draft for discussion

Mytherin commented on a change in pull request #10856:
URL: https://github.com/apache/arrow/pull/10856#discussion_r687105936



##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. A convenience over specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;

Review comment:
       OFFSET should also be added.
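
    A minimal sketch of what that could look like in this schema (the `offset` field is hypothetical, not part of the current draft):

    ```
    table Limit {
      input: TableExpr;

      /// Number of logical rows to select from input.
      limit: long;

      /// Hypothetical: number of logical rows to skip before emitting rows.
      offset: long;
    }
    ```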

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. A convenience over specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;

Review comment:
       In certain database engines, LIMIT and OFFSET are not limited to scalar values; e.g., in Postgres (and DuckDB) the following is valid SQL:
   
   ```sql
   SELECT *
   FROM integers
   LIMIT
     (SELECT min(i)
      FROM integers);
   ```
   
   Modeling this is not super critical but perhaps something to keep in mind.
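
    A sketch of one way to leave room for that, offered purely as an assumption rather than a concrete proposal (the `LimitCount` union and `count` field are hypothetical):

    ```
    /// Hypothetical variant: the row count may be a constant literal or the
    /// result of a scalar subquery over another table expression.
    union LimitCount {
      Literal,
      TableExpr
    }

    table Limit {
      input: TableExpr;
      count: LimitCount;
    }
    ```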

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. A convenience over specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;
+}
+
+// The kind of join being produced
+enum RelationalJoinType : int {
+  INNER,
+  LEFT,
+  RIGHT,
+  FULL,
+  SEMI,
+  ANTI,

Review comment:
       In order to support correlated subqueries in SQL, I recommend extending the join types at least with the SINGLE join and the MARK join. To support truly arbitrary correlated subqueries we also need to model a DEPENDENT join, but this is more difficult to model as it is not a standard relational join. 
   
   See also these two papers: [Unnesting Arbitrary Queries](https://cs.emis.de/LNI/Proceedings/Proceedings241/383.pdf) and [The Complete Story of Joins (in HyPer)](http://btw2017.informatik.uni-stuttgart.de/slidesandpapers/F1-10-37/paper_web.pdf).
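
    For illustration only, the enum could grow roughly as follows; the names are assumptions based on the papers above, not settled Arrow terminology:

    ```
    enum RelationalJoinType : int {
      INNER,
      LEFT,
      RIGHT,
      FULL,
      SEMI,
      ANTI,
      /// Hypothetical additions for subquery decorrelation:
      SINGLE,  // at most one match per outer row (e.g. scalar subqueries)
      MARK     // produces a boolean "mark" column (e.g. EXISTS / IN predicates)
    }
    ```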
   
   

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. A convenience over specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);

Review comment:
       Window frames do not necessarily have one PRECEDING and one FOLLOWING clause, e.g. the following is valid SQL:
   
   ```sql
   SELECT i, SUM(i) OVER (ORDER BY i ROWS BETWEEN 2 FOLLOWING AND 4 FOLLOWING)
   FROM range(10) tbl(i);
   ```
   
   See [here](https://duckdb.org/docs/sql/window_functions) for the frame spec used in DuckDB, and [here](https://www.sqlite.org/syntax/frame-spec.html) for the frame spec used in SQLite.
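
    A sketch of one way to accommodate this, with hypothetical names and not part of the PR: attach a direction to each bound instead of hard-coding one preceding and one following bound:

    ```
    /// Hypothetical: whether a frame bound lies before or after the current row.
    enum BoundDirection : uint8 {
      PRECEDING,
      FOLLOWING
    }

    /// Hypothetical: a frame bound with an explicit direction, so that frames
    /// such as ROWS BETWEEN 2 FOLLOWING AND 4 FOLLOWING can be expressed.
    table FrameBound {
      direction: BoundDirection;
      bound: Bound (required);
    }

    table Frame {
      clause: FrameClause;
      partition_by: [ArrayExpr];
      order_by: [SortKey];
      frame_start: FrameBound (required);
      frame_end: FrameBound (required);
    }
    ```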

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;

Review comment:
       Window expressions should always be functions, e.g. I wouldn't know the semantics of a query like this:
   
   ```sql
   SELECT colname OVER (...)
   FROM tbl
   ```
   
   I suppose it would just return `colname`, but then the frame is not very useful.
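    
    A minimal sketch of one possible reading of this point (purely illustrative; the variant below is not part of the proposed schema): attach the `Frame` to the function call rather than to every `ArrayExpr`, so a window frame can only ever qualify a function invocation:
    
    ```
    // Hypothetical variant for illustration: the frame lives on the function call.
    table ArrayFunction {
      descr: FunctionDescr (required);
      inputs: [ArrayExpr] (required);
      options: [NamedLiteral];
    
      /// Only meaningful when descr.type is AGGREGATE or WINDOW.
      window: Frame;
    }
    ```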

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (rows between unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;
+}
+
+// The kind of join being produced
+enum RelationalJoinType : int {
+  INNER,
+  LEFT,
+  RIGHT,
+  FULL,
+  SEMI,
+  ANTI,
+}
+
+/// A standard relational / SQL-style equijoin.
+///
+/// Providing no join expression produces the cross product of the two
+/// tables.
+table Join {
+  // TODO: complete and document
+  type: RelationalJoinType = INNER;
+
+  left: TableExpr (required);
+  right: TableExpr (required);
+
+  // The expression to use for joining `left` and `right` tables
+  on_expr: ArrayExpr; // a missing on_expr indicates a cross join.
+}
+
+/// A temporal join type
+///
+/// TODO: complete and document
+table AsOfJoin {
+  left: TableExpr (required);
+  right: TableExpr;
+
+  /// The column in the left expression to use for data ordering to determine
+  /// the "as of" time.
+  left_asof: ColumnReference (required);
+
+  /// The column in the right expression to use for data ordering to determine
+  /// the "as of" time. If the column name is the same as left_asof, may be
+  /// omitted.
+  right_asof: ColumnReference;
+
+  /// TODO: Define means of providing time deltas in this IR.
+  tolerance: Literal;
+
+  /// If true, rows whose "as of" values are exactly equal are allowed to match.
+  allow_equal: bool = true;
+}
+
+/// An extension of as-of join which allows applying an aggregate function to
+/// the data falling within the indicated time interval.
+///
+/// TODO: Define semantics of "identity" window where all elements of window
+/// become a List<T> element in the result.
+table WindowJoin {
+  // TODO
+}
+
+// The order in which to sort rows.
+enum Ordering : uint8 {
+  ASCENDING,
+  DESCENDING,
+}
+
+// The way in which NULL values should be ordered when sorting.
+enum NullOrdering : uint8 {
+  FIRST,
+  LAST
+}
+
+/// An expression to use for sorting. The key expression determines the values
+/// to be used for ordering the table's rows.
+table SortKey {
+  key: ArrayExpr (required);
+  ordering: Ordering = ASCENDING;
+  null_ordering: NullOrdering;
+}
+
+/// A table-generating function.
+///
+/// TODO
+table TableFunction {
+  descr: FunctionDescr (required);
+
+  /// An optional output schema for the table function. If not provided, must
+  /// be determined by the IR consumer.
+  out_schema: Schema;
+}
+
+union TableOperation {
+  ExternalTable,
+  Project,
+  Filter,
+  Aggregate,
+  Limit,
+  Join,
+  TableFunction

Review comment:
       At least in DuckDB Window expressions are extracted into a separate operator much like Aggregate expressions, e.g.:
   
   ```sql
   explain SELECT depname, empno, salary, sum(salary) OVER (PARTITION BY depname ORDER BY empno) FROM empsalary ORDER BY depname, empno
   ┌───────────────────────────┐
   │          ORDER_BY         │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │           #0 ASC          │
   │           #1 ASC          │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │           WINDOW          │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │ sum(salary) OVER(PARTITION│
   │  BY depname ORDER BY empno│
   │      ASC NULLS FIRST)     │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │          SEQ_SCAN         │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │         empsalary         │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │          depname          │
   │           empno           │
   │           salary          │
   └───────────────────────────┘ 
   ```
   
   Since Window expressions are evaluated in a completely different manner than regular scalar functions, perhaps that is also a good idea here.
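    
    A minimal sketch of what such a separate operator could look like in this IR (hypothetical; a `Window` table does not exist in the proposed schema):
    
    ```
    /// Hypothetical relational operator that evaluates window expressions,
    /// mirroring how Aggregate extracts aggregate expressions.
    table Window {
      input: TableExpr (required);
    
      /// Window function calls, each carrying its own Frame
      /// (partitioning, ordering, bounds).
      window_exprs: [ArrayExpr] (required);
    }
    ```
    
    The new table would also need an entry in the TableOperation union before it could appear in a TableExpr.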

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,521 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data:[ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code:int;  // required
+
+  value:LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type:Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data:LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type:Type (required);
+  data:[LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name:string;
+  value:Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name:string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name:string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table:TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD = 0,
+  SUBTRACT = 1,
+  MULTIPLY = 2,
+  DIVIDE = 3,
+  EQUAL = 4,
+  NOT_EQUAL = 5,
+  LESS = 6,
+  LESS_EQUAL = 7,
+  GREATER = 8,
+  GREATER_EQUAL = 9,
+  AND = 10,
+  OR = 11,
+  XOR = 12
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type:BinaryOpType;
+  left:ArrayExpr (required);
+  right:ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR = 0,
+  AGGREGATE = 1,
+  WINDOW = 2,
+  TABLE = 3
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name:string (required);
+
+  type:FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data:[ubyte];
+}
+
+/// Auxiliary data structure providing parameters for a window function
+/// expression, as in the SQL OVER expression or in time series databases.
+///
+/// TODO: Finish this data type
+table WindowFrame {
+  order_by:[SortKey];
+
+  partition_by:[ArrayExpr];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).

Review comment:
       I agree that types seem out of place here. The way I see this IR is that it is a way of writing SQL without requiring a parser - i.e. the nodes in the operator tree are defined, but nothing is yet resolved or bound. Types are not resolved, columns and functions exist only as strings and are not resolved to the catalog, etc.
   
   So instead of writing `SELECT i+1 FROM table WHERE i>3`
   I would write `PROJECT(i + 1, FILTER(i > 3, TABLE(table)))`
   Much like with the raw SQL string nothing is known with regards to types in this stage.

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (rows between unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];

Review comment:
       Is the `having` clause here necessary? Having is a filter that refers to the output of the aggregate, so it is possible to model this as a Filter stacked onto the Aggregate.
   
   There should be no need for these two queries to be different after parsing:
   
   ```sql
   SELECT a, SUM(b) FROM tbl GROUP BY a HAVING SUM(b) < 10;
   SELECT * FROM (SELECT a, SUM(b) FROM tbl GROUP BY a) tbl(a, sum_b) WHERE sum_b < 10;
   ```
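    
    Under that reading, the field could be dropped and HAVING expressed as a Filter whose input is the Aggregate. A minimal sketch of the simplified table (illustrative only):
    
    ```
    /// Hypothetical simplification: no `having`; a HAVING clause becomes a
    /// Filter stacked on top of this Aggregate.
    table Aggregate {
      input: TableExpr;
      aggregate_exprs: [ArrayExpr] (required);
    
      /// Expressions to use as group keys. If not provided, the aggregate
      /// operation yields a table with a single row.
      group_exprs: [ArrayExpr];
    }
    ```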

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (rows between unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;
+}
+
+// The kind of join being produced
+enum RelationalJoinType : int {
+  INNER,
+  LEFT,
+  RIGHT,
+  FULL,
+  SEMI,
+  ANTI,
+}
+
+/// A standard relational / SQL-style equijoin.
+///
+/// Providing no on_expr produces the cross product of the two
+/// tables.
+table Join {
+  // TODO: complete and document
+  type: RelationalJoinType = INNER;
+
+  left: TableExpr (required);
+  right: TableExpr (required);
+
+  // The expression to use for joining `left` and `right` tables
+  on_expr: ArrayExpr; // a missing on_expr indicates a cross join.
+}
+
+/// A temporal join type
+///
+/// TODO: complete and document
+table AsOfJoin {
+  left: TableExpr (required);
+  right: TableExpr;
+
+  /// The column in the left expression to use for data ordering to determine
+  /// the "as of" time.
+  left_asof: ColumnReference (required);
+
+  /// The column in the right expression to use for data ordering to determine
+  /// the "as of" time. If the column name is the same as left_asof, may be
+  /// omitted.
+  right_asof: ColumnReference;
+
+  /// TODO: Define means of providing time deltas in this IR.
+  tolerance: Literal;
+
+  /// If true, rows whose "as of" values are exactly equal are allowed to match.
+  allow_equal: bool = true;
+}
+
+/// An extension of as-of join which allows applying an aggregate function to
+/// the data falling within the indicated time interval.
+///
+/// TODO: Define semantics of "identity" window where all elements of window
+/// become a List<T> element in the result.
+table WindowJoin {
+  // TODO
+}
+
+// The order in which to sort rows.
+enum Ordering : uint8 {
+  ASCENDING,
+  DESCENDING,
+}
+
+// The way in which NULL values should be ordered when sorting.
+enum NullOrdering : uint8 {
+  FIRST,
+  LAST
+}
+
+/// An expression to use for sorting. The key expression determines the values
+/// to be used for ordering the table's rows.
+table SortKey {

Review comment:
       SortKey is present but the ORDER node itself appears to be missing.
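
    A possible shape for the missing node, sketched in the same style as the rest of the schema; the name `OrderBy` and its fields are illustrative and not part of the PR:

    ```
    /// Sorts the rows of the input table by the given sort keys.
    table OrderBy {
      input: TableExpr (required);
      keys: [SortKey] (required);
    }
    ```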

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);

Review comment:
       The LiteralVector appears to be identical to a Literal of type List (i.e. ListLiteralData); are both required?

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// An expression to use for sorting. The key expression determines the values
+/// to be used for ordering the table's rows.
+table SortKey {
+  key: ArrayExpr (required);
+  ordering: Ordering = ASCENDING;
+  null_ordering: NullOrdering;
+}
+
+/// A table-generating function.
+///
+/// TODO
+table TableFunction {
+  descr: FunctionDescr (required);
+
+  /// An optional output schema for the table function. If not provided, must
+  /// be determined by the IR consumer.
+  out_schema: Schema;
+}
+
+union TableOperation {
+  ExternalTable,
+  Project,
+  Filter,
+  Aggregate,
+  Limit,
+  Join,
+  TableFunction

Review comment:
       Perhaps we want to add a table of literals, e.g. the VALUES clause in SQL:
   
   ```sql
   SELECT * FROM (VALUES (1, 'hello'), (2, 'world')) tbl(id, name);
   ```
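
    One possible shape for such a node, sketched as a hypothetical addition; the names `LiteralRelation` and `LiteralRow` are illustrative and not part of the PR:

    ```
    /// A table whose rows are given inline as literal values (cf. the SQL
    /// VALUES clause above).
    table LiteralRelation {
      /// Names and types of the produced columns.
      schema: Schema (required);

      /// One entry per row; each row holds one Literal per column, in schema order.
      rows: [LiteralRow] (required);
    }

    table LiteralRow {
      values: [Literal] (required);
    }
    ```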

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);

Review comment:
       ColumnReferences need to be able to refer to columns outside of only the base tables as well. For example, suppose I have the following SQL query:
   
   ```sql
   SELECT a+1 FROM (SELECT col + col2 FROM tbl) subquery(a)
   ```
   
   My query plan looks like this:
   
   ```sql
   ┌───────────────────────────┐
   │         PROJECTION        │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │          +(a, 1)          │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │         PROJECTION        │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │        +(col, col2)       │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │          SEQ_SCAN         │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │            tbl            │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │            col            │
   │            col2           │
   └───────────────────────────┘   
   ```
   
    In order for this to work, the first projection will need to keep track of the names of the output columns (e.g. `a` in this case). Unless I am missing something, that appears not to be the case right now.
   
   The same is true for aggregates and groups:
   
   ```sql
   SELECT a+1 FROM (SELECT SUM(col) FROM tbl) subquery(a);
   
   ┌───────────────────────────┐
   │         PROJECTION        │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │          +(a, 1)          │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │      SIMPLE_AGGREGATE     │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │          sum(col)         │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │         PROJECTION        │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │            col            │
   └─────────────┬─────────────┘                             
   ┌─────────────┴─────────────┐
   │          SEQ_SCAN         │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │            tbl            │
   │   ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─   │
   │            col            │
   └───────────────────────────┘  
    ```
   
   

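    For reference, the section comment in this file describes a TableExpr as a table operation plus an optional name and indicative output schema. If that schema carried the output column names, the intermediate projection above could expose `a` to its parent. A hypothetical sketch of that shape (TableExpr is not shown in the quoted hunk, so the field names here are illustrative only):

    ```
    /// Hypothetical sketch: a table operation together with an optional name and
    /// the schema (column names/types) it is expected to produce, so that parent
    /// nodes can resolve references such as `a` above.
    table TableExpr {
      op: TableOperation (required);
      name: string;
      out_schema: Schema;
    }
    ```
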
##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+union TableOperation {
+  ExternalTable,
+  Project,
+  Filter,
+  Aggregate,
+  Limit,
+  Join,
+  TableFunction
+}

Review comment:
       `DISTINCT` can be a separate node, but it can also be modeled as an aggregate node without aggregate expressions, e.g. the following two queries are equivalent:
   
   ```sql
   SELECT DISTINCT a, b FROM tbl;
   SELECT a, b FROM tbl GROUP BY a, b;
   ```
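
    If a dedicated node were preferred over reusing Aggregate, a minimal sketch (hypothetical, not part of the PR) could be:

    ```
    /// Drops duplicate rows from the input; per the note above, equivalent to an
    /// Aggregate whose group_exprs are the selected columns and which carries no
    /// aggregate expressions.
    table Distinct {
      input: TableExpr (required);
    }
    ```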

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines are the names
+/// and semantics of functions that operation on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data typesa
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function)..
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to put the function output type when using in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producers expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;
+}
+
+// The kind of join being produced
+enum RelationalJoinType : int {
+  INNER,
+  LEFT,
+  RIGHT,
+  FULL,
+  SEMI,
+  ANTI,
+}
+
+/// A standard relational / SQL-style equijoin.
+///
+/// Providing no join expression produces the cross product of the two
+/// tables.
+table Join {
+  // TODO: complete and document
+  type: RelationalJoinType = INNER;
+
+  left: TableExpr (required);
+  right: TableExpr (required);
+
+  // The expression to use for joining `left` and `right` tables
+  on_expr: ArrayExpr; // a missing on_expr indicates a cross join.
+}
+
+/// A temporal join type
+///
+/// TODO: complete and document
+table AsOfJoin {
+  left: TableExpr (required);
+  right: TableExpr;
+
+  /// The column in the left expression to use for data ordering to determine
+  /// the "as of" time.
+  left_asof: ColumnReference (required);
+
+  /// The column in the right expression to use for data ordering to determine
+  /// the "as of" time. If the column name is the same as left_asof, may be
+  /// omitted.
+  right_asof: ColumnReference;
+
+  /// TODO: Define means of providing time deltas in this IR.
+  tolerance: Literal;
+
+  /// If true, rows with exactly equal "as of" values are considered matches.
+  allow_equal: bool = true;
+}
+
+/// An extension of as-of join which allows applying an aggregate function to
+/// the data falling within the indicated time interval.
+///
+/// TODO: Define semantics of "identity" window where all elements of window
+/// become a List<T> element in the result.
+table WindowJoin {
+  // TODO
+}
+
+// The order in which to sort rows.
+enum Ordering : uint8 {
+  ASCENDING,
+  DESCENDING,
+}
+
+// The way in which NULL values should be ordered when sorting.
+enum NullOrdering : uint8 {
+  FIRST,
+  LAST
+}
+
+/// An expression to use for sorting. The key expression determines the values
+/// to be used for ordering the table's rows.
+table SortKey {
+  key: ArrayExpr (required);
+  ordering: Ordering = ASCENDING;
+  null_ordering: NullOrdering;
+}
+
+/// A table-generating function.
+///
+/// TODO
+table TableFunction {
+  descr: FunctionDescr (required);
+
+  /// An optional output schema for the table function. If not provided, must
+  /// be determined by the IR consumer.
+  out_schema: Schema;
+}
+
+union TableOperation {
+  ExternalTable,
+  Project,
+  Filter,
+  Aggregate,
+  Limit,
+  Join,
+  TableFunction

Review comment:
       For nested types we also support an `UNNEST` operator. Unnest extracts the elements from a list into separate rows and repeats non-list elements, e.g.:
   
   ```sql
   select 1, unnest([1, 2, 3]);
   ┌───┬─────────────────────────────┐
   │ 1 │ unnest(list_value(1, 2, 3)) │
   ├───┼─────────────────────────────┤
   │ 1 │ 1                           │
   │ 1 │ 2                           │
   │ 1 │ 3                           │
   └───┴─────────────────────────────┘
   ```
   
    This is also a separate operator in DuckDB. As it changes the cardinality of the source tree, it does not fit nicely into a projection node. Postgres and other database systems also support this operation, but I'm not sure how they implement this internally.
   
   This could also be modeled as a generic `TABLE IN -> TABLE OUT` function.
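    For illustration only, a minimal sketch of what such a dedicated node could look like in this IR, reusing the existing `TableExpr`/`ArrayExpr` types (the `Unnest` name and fields here are hypothetical, not part of the proposal):
    
    ```
    /// Hypothetical: expand a list-typed expression into one row per element;
    /// the remaining columns of the input are repeated for each produced row.
    table Unnest {
      input: TableExpr (required);
    
      /// The list-typed expression whose elements become individual rows.
      list_expr: ArrayExpr (required);
    }
    ```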
   

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);

Review comment:
       In SQL, aggregate functions are slightly different from normal functions, as they can carry additional clauses (sketched below), specifically:

    * DISTINCT (e.g. `COUNT(DISTINCT col)`)
    * FILTER (e.g. `COUNT(col) FILTER (WHERE col2 = 3)`)
    * ORDER (e.g. `STRING_AGG(col ORDER BY col2)`)
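    A rough sketch of one way those clauses could be attached to an aggregate call in this IR, assuming the existing `FunctionDescr`, `ArrayExpr` and `SortKey` types (the `AggregateCall` name and fields are hypothetical, not part of this proposal):
    
    ```
    /// Hypothetical aggregate call carrying the extra SQL clauses.
    table AggregateCall {
      descr: FunctionDescr (required);
      inputs: [ArrayExpr] (required);
    
      /// COUNT(DISTINCT col): aggregate only over distinct input values.
      distinct: bool = false;
    
      /// COUNT(col) FILTER (WHERE col2 = 3): predicate selecting which rows
      /// participate in this aggregate.
      filter: ArrayExpr;
    
      /// STRING_AGG(col ORDER BY col2): ordering of inputs within each group.
      order_by: [SortKey];
    }
    ```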
   
   

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions, and
+// the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);
+
+  /// Expressions to use as group keys. If not provided, then the aggregate
+  /// operation yields a table with a single row.
+  group_exprs: [ArrayExpr];
+
+  /// A filter condition for the aggregation which may include aggregate
+  /// functions.
+  having: [ArrayExpr];
+}
+
+/// Select up to the indicated number of rows from the input expression based
+/// on the first-emitted rows when evaluating the input expression. Generally,
+/// no particular order is guaranteed unless combined with a sort expression.
+table Limit {
+  input: TableExpr;
+
+  /// Number of logical rows to select from input.
+  limit: long;
+}
+
+// The kind of join being produced
+enum RelationalJoinType : int {
+  INNER,
+  LEFT,
+  RIGHT,
+  FULL,
+  SEMI,
+  ANTI,
+}
+
+/// A standard relational / SQL-style equijoin.
+///
+/// Providing no join expression produces the cross product of the two
+/// tables.
+table Join {
+  // TODO: complete and document
+  type: RelationalJoinType = INNER;
+
+  left: TableExpr (required);
+  right: TableExpr (required);
+
+  // The expression to use for joining `left` and `right` tables
+  on_expr: ArrayExpr; // a missing on_expr indicates a cross join.
+}
+
+/// A temporal join type
+///
+/// TODO: complete and document
+table AsOfJoin {
+  left: TableExpr (required);
+  right: TableExpr;
+
+  /// The column in the left expression to use for data ordering to determine
+  /// the "as of" time.
+  left_asof: ColumnReference (required);
+
+  /// The column in the right expression to use for data ordering to determine
+  /// the "as of" time. If the column name is the same as left_asof, may be
+  /// omitted.
+  right_asof: ColumnReference;
+
+  /// TODO: Define means of providing time deltas in this IR.
+  tolerance: Literal;
+
+  /// If true, rows with exactly equal "as of" values are considered matches.
+  allow_equal: bool = true;
+}
+
+/// An extension of as-of join which allows applying an aggregate function to
+/// the data falling within the indicated time interval.
+///
+/// TODO: Define semantics of "identity" window where all elements of window
+/// become a List<T> element in the result.
+table WindowJoin {
+  // TODO
+}
+
+// The order in which to sort rows.
+enum Ordering : uint8 {
+  ASCENDING,
+  DESCENDING,
+}
+
+// The way in which NULL values should be ordered when sorting.
+enum NullOrdering : uint8 {
+  FIRST,
+  LAST
+}
+
+/// An expression to use for sorting. The key expression determines the values
+/// to be used for ordering the table's rows.
+table SortKey {
+  key: ArrayExpr (required);
+  ordering: Ordering = ASCENDING;
+  null_ordering: NullOrdering;
+}
+
+/// A table-generating function.
+///
+/// TODO
+table TableFunction {
+  descr: FunctionDescr (required);
+
+  /// An optional output schema for the table function. If not provided, must
+  /// be determined by the IR consumer.
+  out_schema: Schema;
+}
+
+union TableOperation {
+  ExternalTable,
+  Project,
+  Filter,
+  Aggregate,
+  Limit,
+  Join,
+  TableFunction

Review comment:
       Set operations should also be added (UNION, EXCEPT, INTERSECT).
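    Purely for illustration (the names here are hypothetical), a set-operation node could mirror the shape of the existing binary `Join` node:
    
    ```
    enum SetOperationKind : uint8 {
      UNION,
      EXCEPT,
      INTERSECT
    }
    
    /// Hypothetical set operation over two inputs with compatible schemas.
    table SetOperation {
      kind: SetOperationKind;
    
      /// If false, duplicate rows are kept (e.g. UNION ALL).
      distinct: bool = true;
    
      left: TableExpr (required);
      right: TableExpr (required);
    }
    ```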

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,521 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data:[ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code:int;  // required
+
+  value:LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type:Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data:LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type:Type (required);
+  data:[LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name:string;
+  value:Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name:string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name:string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table:TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD = 0,
+  SUBTRACT = 1,
+  MULTIPLY = 2,
+  DIVIDE = 3,
+  EQUAL = 4,
+  NOT_EQUAL = 5,
+  LESS = 6,
+  LESS_EQUAL = 7,
+  GREATER = 8,
+  GREATER_EQUAL = 9,
+  AND = 10,
+  OR = 11,
+  XOR = 12
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type:BinaryOpType;
+  left:ArrayExpr (required);
+  right:ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR = 0,
+  AGGREGATE = 1,
+  WINDOW = 2,
+  TABLE = 3
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name:string (required);
+
+  type:FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data:[ubyte];
+}
+
+/// Auxiliary data structure providing parameters for a window function
+/// expression, as in the SQL OVER expression or in time series databases.
+///
+/// TODO: Finish this data type
+table WindowFrame {
+  order_by:[SortKey];
+
+  partition_by:[ArrayExpr];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr:FunctionDescr (required);
+  inputs:[ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options:[NamedLiteral];
+
+  /// Optional window expression for window functions only.
+  ///
+  /// TODO: Decide if window functions should be specified in a different way.
+  window:WindowFrame;
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition:ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then:ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else:ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input:ArrayExpr (required);
+  in_exprs:[ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated:bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.

Review comment:
       `BETWEEN` is syntactic sugar in SQL, but also exists internally in engines as a performance optimization for doing `x >= l AND x <= r` in a single function. Whether or not it should be included depends on the goal of the IR, in my opinion. If the IR's goal is to simply specify *what* to execute, then it is unnecessary since it can be replaced with the comparison statements and a conjunction. If the IR's goal is to specify *how* to execute as well (i.e. "here is an already optimized plan, run that") then `BETWEEN` should be included since it is part of the optimized plan.
   
    In the latter case, we might also want to include other nodes that are not strictly necessary but are the result of optimizations. For example, we could have a `Top-N` node (an optimized variant of ORDER BY + LIMIT); a rough sketch follows below.
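    As an example of such an optimization-level node, a fused `Top-N` operator could look roughly like this (hypothetical, assuming the existing `TableExpr` and `SortKey` types):
    
    ```
    /// Hypothetical fused ORDER BY + LIMIT: emit only the first n rows of the
    /// input according to the given sort keys.
    table TopN {
      input: TableExpr (required);
      keys: [SortKey] (required);
    
      /// Number of rows to keep.
      n: long;
    }
    ```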

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  ///
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to provide the function output type when used in an
+/// ArrayExpr. It is acceptable to omit the type if it is the same as all the
+/// inputs (for example, in the case of math functions when double input yields
+/// double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built-in in a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {

Review comment:
       If the goal is to serialize an already optimized plan, we might want to think about adding e.g. projection and filter pushdown information into the table scans. 
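    For example (purely illustrative, the name and fields below are hypothetical), the scan node could carry the pushed-down information directly:
    
    ```
    /// Hypothetical scan node with pushdown information attached.
    table ExternalTableScan {
      name: string (required);
      schema: Schema (required);
    
      /// Projection pushdown: the subset of columns the scan must produce.
      projected_columns: [ColumnReference];
    
      /// Filter pushdown: a predicate to be evaluated by the scan itself.
      pushed_filter: ArrayExpr;
    }
    ```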

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,521 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data:[ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data:[LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code:int;  // required
+
+  value:LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type:Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data:LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type:Type (required);
+  data:[LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name:string;
+  value:Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  /// The name identifying the referenced table.
+  name:string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name:string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table:TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input:ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD = 0,
+  SUBTRACT = 1,
+  MULTIPLY = 2,
+  DIVIDE = 3,
+  EQUAL = 4,
+  NOT_EQUAL = 5,
+  LESS = 6,
+  LESS_EQUAL = 7,
+  GREATER = 8,
+  GREATER_EQUAL = 9,
+  AND = 10,
+  OR = 11,
+  XOR = 12
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type:BinaryOpType;
+  left:ArrayExpr (required);
+  right:ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR = 0,
+  AGGREGATE = 1,
+  WINDOW = 2,
+  TABLE = 3
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name:string (required);
+
+  type:FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data:[ubyte];
+}
+
+/// Auxiliary data structure providing parameters for a window function
+/// expression, as in the SQL OVER expression or in time series databases.
+///
+/// TODO: Finish this data type
+table WindowFrame {
+  order_by:[SortKey];
+
+  partition_by:[ArrayExpr];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to supply the function's output type when the call is
+/// used in an ArrayExpr. It is acceptable to omit the type if it is the same
+/// as all of the inputs (for example, math functions where double input
+/// yields double output).
+table ArrayFunction {
+  descr:FunctionDescr (required);
+  inputs:[ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built into a generic IR consumer.
+  options:[NamedLiteral];
+
+  /// Optional window expression for window functions only.
+  ///
+  /// TODO: Decide if window functions should be specified in a different way.
+  window:WindowFrame;
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition:ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then:ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else:ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input:ArrayExpr (required);
+  in_exprs:[ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated:bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input:ArrayExpr (required);
+  left_bound:ArrayExpr (required);
+  right_bound:ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}

Review comment:
       While I fully agree that all other operators can be modeled as functions, there needs to be some form of standardisation on `what is an addition` and `what is an AND conjunction`; otherwise the interoperability of the IR is lost. For example, if System A calls its AND clause `conjunction_and` and System B calls it `and`, compatibility between the systems is lost. 
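    
    One possible (hypothetical) way to pin down the spelling of the built-ins while keeping user-defined functions free-form would be an enum of canonical names alongside the string name, for example:
    
    ```
    // Hypothetical sketch only: canonical built-in function names as an enum so
    // producers and consumers agree on what "addition" and "AND" are called.
    enum CanonicalFunction : int {
      ADD = 0,
      SUBTRACT = 1,
      MULTIPLY = 2,
      DIVIDE = 3,
      AND = 4,
      OR = 5,
      NOT = 6
    }
    
    table StandardFunctionDescr {
      /// Exactly one of `canonical` (built-in) or `name` (user-defined) should
      /// be set; Flatbuffers cannot enforce this, so it is a convention.
      canonical: CanonicalFunction = ADD;
      name: string;
    }
    ```
    
    Alternatively, the canonical names could simply be a documented list of strings maintained alongside the format (e.g. `add`, `and`, `or`), which is what the comment on `FunctionDescr.name` already hints at.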

##########
File path: format/ComputeIR.fbs
##########
@@ -0,0 +1,510 @@
+/// Licensed to the Apache Software Foundation (ASF) under one
+/// or more contributor license agreements.  See the NOTICE file
+/// distributed with this work for additional information
+/// regarding copyright ownership.  The ASF licenses this file
+/// to you under the Apache License, Version 2.0 (the
+/// "License"); you may not use this file except in compliance
+/// with the License.  You may obtain a copy of the License at
+///
+///   http://www.apache.org/licenses/LICENSE-2.0
+///
+/// Unless required by applicable law or agreed to in writing,
+/// software distributed under the License is distributed on an
+/// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+/// KIND, either express or implied.  See the License for the
+/// specific language governing permissions and limitations
+/// under the License.
+
+/// Arrow Compute IR (Intermediate Representation)
+///
+/// The purpose of these data structures is to provide a language- and compute
+/// engine-agnostic representation of common analytical operations on Arrow
+/// data. This may include so-called "logical query plans" generated by SQL
+/// systems, but it can be used to serialize different types of expression or
+/// query fragments for various purposes. For example, a system could use this
+/// to serialize array expressions for transmitting filters/predicates.
+///
+/// The three main types of data objects dealt with in this IR are:
+///
+/// * Table: a data source having an Arrow schema, resolvable algebraically to
+///   a collection of Arrow record batches
+/// * Array: logically, a field in a Table
+/// * Scalar: a single value, which is broadcastable to Array as needed
+///
+/// This IR specifically does not provide for query planning or physical
+/// execution details. It also aims to be as comprehensive as possible in
+/// capturing compute operations expressible in different query engines or data
+/// frame libraries. Engines are not expected to implement everything here.
+///
+/// One of the most common areas of divergence in query engines is the names
+/// and semantics of functions that operate on scalar or array
+/// inputs. Efforts to standardize function names and their expected semantics
+/// will happen outside of the serialized IR format defined here.
+
+// We use the IPC Schema types to represent data types
+include "Schema.fbs";
+
+namespace org.apache.arrow.flatbuf.computeir;
+
+/// ----------------------------------------------------------------------
+/// Data serialization for literal (constant / scalar) values. This assumes
+/// that the consumer has basic knowledge of the Arrow format and data types
+/// such that the binary scalar data that is encoded here can be unpacked into
+/// an appropriate literal value object. For example, if the Type for a Literal
+/// is FloatingPoint with Precision::DOUBLE, then we would expect to have a
+/// PrimitiveLiteralData with an 8-byte value.
+
+/// Serialized data which, given a data type, can be unpacked into a scalar
+/// value data structure.
+///
+/// NB(wesm): This is simpler from a Flatbuffers perspective than having a
+/// separate data type for each Arrow type. Alternative proposals welcome.
+union LiteralData {
+  NullLiteralData,
+  PrimitiveLiteralData,
+  ListLiteralData,
+  StructLiteralData,
+  UnionLiteralData
+}
+
+/// Placeholder for any null value, whether with Null type or a different
+/// non-Null type.
+table NullLiteralData {}
+
+/// For all data types represented as fixed-size-binary value (numeric and
+/// binary/string types included). Boolean values are to be represented as a
+/// single byte with value 1 (true) or 0 (false).
+table PrimitiveLiteralData {
+  data: [ubyte] (required);
+}
+
+/// For List, LargeList, and FixedSizeList.
+table ListLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Struct
+table StructLiteralData {
+  data: [LiteralData] (required);
+}
+
+/// For Union
+table UnionLiteralData {
+  /// The type code (referencing the Union type) needed to reconstruct the
+  /// correct literal value.
+  type_code: int;  // required
+
+  value: LiteralData (required);
+}
+
+/// Literal serializes a scalar (constant) value in an array expression.
+table Literal {
+  type: Type (required);
+
+  /// The data needed to reconstruct the literal value.
+  data: LiteralData (required);
+}
+
+/// A sequence of literal values all having the same type.
+table LiteralVector {
+  type: Type (required);
+  data: [LiteralData] (required);
+}
+
+/// A name (key) and literal value, to use for map-like options fields.
+table NamedLiteral {
+  name: string;
+  value: Literal;
+}
+
+/// ----------------------------------------------------------------------
+/// One-dimensional operations (array/scalar input and output) and ArrayExpr,
+/// which is an operation plus a name and output type.
+
+/// A reference to an antecedent table schema in an expression tree
+table TableReference {
+  /// The name identifying the referenced table.
+  name: string (required);
+}
+
+/// A reference to an antecedent column from a table schema in an expression
+/// tree.
+table ColumnReference {
+  name: string (required);
+
+  /// Optional reference to antecedent table in tree. Required when there is
+  /// referential ambiguity.
+  table: TableReference;
+}
+
+/// Operation checks if values are null
+table IsNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation checks if values are not null
+table IsNotNull {
+  input: ArrayExpr (required);
+}
+
+/// Operation flips true/false values in boolean expression
+table Not {}
+
+/// Operation flips sign of numeric expression
+table Negate {}
+
+/// Built-in binary operations. Other binary operations can be implemented
+/// using ArrayFunction/FunctionDescr
+enum BinaryOpType : int {
+  ADD,
+  SUBTRACT,
+  MULTIPLY,
+  DIVIDE,
+  EQUAL,
+  NOT_EQUAL,
+  LESS,
+  LESS_EQUAL,
+  GREATER,
+  GREATER_EQUAL,
+  AND,
+  OR,
+  XOR
+}
+
+/// Built-in binary operation
+table BinaryOp {
+  type: BinaryOpType;
+  left: ArrayExpr (required);
+  right: ArrayExpr (required);
+}
+
+enum FunctionType : int {
+  SCALAR,
+  AGGREGATE,
+  WINDOW,
+  TABLE
+}
+
+/// A general-purpose descriptor for a built-in or user-defined
+/// function. Producers of the IR are encouraged to reuse FunctionDescr objects
+/// (by reusing the Flatbuffers offset) when a particular function appears
+/// multiple times in an expression. Arguments to a particular function call
+/// are supplied in ArrayFunction.
+table FunctionDescr {
+  /// Function name from list of available function names. Built-in functions
+  /// are expected to be chosen from a list of "canonical" or "unambiguous"
+  /// function names to provide a measure of normalization across backends that
+  /// implement this Compute IR.
+  ///
+  /// The name may refer to a user-defined function which has been registered
+  /// with the target engine. User-defined function data can also be passed
+  /// with the "data" member.
+  name: string (required);
+
+  type: FunctionType = SCALAR;
+
+  /// Optional arbitrary sidecar data (such as a serialized user-defined
+  /// function).
+  data: [ubyte];
+}
+
+/// A general array function call, which may be built-in or user-defined.
+///
+/// It is recommended to supply the function's output type when the call is
+/// used in an ArrayExpr. It is acceptable to omit the type if it is the same
+/// as all of the inputs (for example, math functions where double input
+/// yields double output).
+table ArrayFunction {
+  descr: FunctionDescr (required);
+  inputs: [ArrayExpr] (required);
+
+  /// Optional non-data inputs for function invocation.
+  ///
+  /// It is recommended to limit use of options for functions that are expected
+  /// to be built into a generic IR consumer.
+  options: [NamedLiteral];
+}
+
+/// Conditional if-then-else operation, selecting values from the then- or
+/// else-branch based on the provided boolean condition.
+///
+/// If the "then" and "else" expressions have different output types, it's
+/// recommended to indicate the promoted output type in an ArrayExpr when using
+/// this operator.
+table IfElse {
+  /// Boolean output type
+  condition: ArrayExpr (required);
+
+  /// Values to use when the condition is true
+  then: ArrayExpr (required);
+
+  /// Values to use when the condition is false
+  else: ArrayExpr (required);
+}
+
+/// Operation for expressing multiple equality checks with an expression.
+///
+/// IsIn(input, [value0, value1, ...])
+/// is the same as Or(Or(Eq(input, value0), Eq(input, value1)), ...)
+table IsIn {
+  input: ArrayExpr (required);
+  in_exprs: [ArrayExpr] (required);
+
+  /// If true, check whether values are not equal to any of the provided
+  /// expressions.
+  negated: bool = false;
+}
+
+/// Boolean operation checking whether input is bounded by the left and right
+/// expressions. Convenience for specifying the compound predicate manually.
+///
+/// input BETWEEN left_bound AND right_bound
+/// is the same as
+/// input >= left_bound AND input <= right_bound
+table Between {
+  input: ArrayExpr (required);
+  left_bound: ArrayExpr (required);
+  right_bound: ArrayExpr (required);
+}
+
+union ArrayOperation {
+  ColumnReference,
+  Literal,
+  BinaryOp,
+  ArrayFunction,
+  IfElse,
+  IsIn,
+  Between
+}
+
+/// An expression yielding a scalar value can be broadcasted to array shape as
+/// needed depending on use.
+table ArrayExpr {
+  op: ArrayOperation (required);
+
+  /// Optional name for array operation. If not provided, may be inferred from
+  /// antecedent inputs. IR producers are recommended to provide names to avoid
+  /// ambiguity.
+  name: string;
+
+  /// Expected output type of the array operation. While optional, IR producers
+  /// are encouraged to populate this field for the benefit of IR consumers.
+  out_type: Type;
+
+  window: Frame;
+}
+
+/// ----------------------------------------------------------------------
+/// Table operations and TableExpr, which is a table operation plus an optional
+/// name and indicative output schema, and potentially other metadata.
+
+/// A named table which the IR producer expects the IR consumer to be able to
+/// access. A "table" in this context is anything that can produce
+/// Arrow-formatted data with the given schema. There is no notion of the
+/// physical layout of the data or its segmentation into multiple Arrow record
+/// batches.
+table ExternalTable {
+  /// The unique name to identify the data source.
+  name: string (required);
+
+  /// The schema of the data source. This may be a partial schema (ignoring
+  /// unused fields), but it at least asserts the fields and types that are
+  /// expected to exist in the data source.
+  schema: Schema (required);
+
+  /// Optional opaque table serialization data, for passing engine-specific
+  /// instructions to enable the data to be accessed.
+  serde_type: string;
+  serde_data: [ubyte];
+}
+
+enum FrameClause : uint8 {
+  ROWS,
+  RANGE
+}
+
+// Unbounded and CurrentRow are empty tables used as empty variants
+// in the Bound union.
+table Unbounded {}
+table CurrentRow {}
+
+// `Bound` represents the window bound computation in a window function like
+// `sum(x) over (rows between unbounded preceding and current row)`.
+union Bound {
+  ArrayExpr,
+  Unbounded,
+  CurrentRow
+}
+
+// `Frame` models a window frame clause, capturing the kind of clause
+// (ROWS/RANGE), how to partition the window, how to order within partitions,
+// and the bounds of the window.
+table Frame {
+  clause: FrameClause;
+  partition_by: [ArrayExpr];
+  order_by: [SortKey];
+  preceding: Bound (required);
+  following: Bound (required);
+}
+
+/// Computes a new table given a set of column selections or array expressions.
+table Project {
+  input: TableExpr (required);
+
+  /// Each expression must reference fields found in the input table
+  /// expression.
+  exprs: [ArrayExpr] (required);
+}
+
+/// Select rows from table for given boolean condition.
+table Filter {
+  input: TableExpr (required);
+
+  /// Array expression using input table expression yielding boolean output
+  /// type.
+  condition: ArrayExpr (required);
+}
+
+/// A "group by" table aggregation: data is grouped using the group
+/// expressions, and the aggregate expressions are evaluated within each group.
+table Aggregate {
+  input: TableExpr;
+  aggregate_exprs: [ArrayExpr] (required);

Review comment:
       Aggregates should be optional; it is possible to do a grouping without aggregate expressions:
   
   ```sql
   SELECT a, b FROM tbl GROUP BY a, b;
   ```
   
    But then the aggregate node needs either at least one aggregate expression or at least one group key to be defined (i.e. they can't both be empty). Not sure how to cleanly model that here. A possible solution could be to split the aggregate into separate nodes (e.g. `GroupedAggregate` and `UngroupedAggregate`).
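    
    For illustration, the split suggested above could look roughly like this (the table and field names are hypothetical, not part of the proposal):
    
    ```
    /// Aggregation without grouping: at least one aggregate expression is
    /// required and the result is a single row.
    table UngroupedAggregate {
      input: TableExpr (required);
      aggregate_exprs: [ArrayExpr] (required);
    }
    
    /// Aggregation with grouping: the group keys are required, while the
    /// aggregate expressions may be empty (covering the
    /// SELECT a, b FROM tbl GROUP BY a, b case above).
    table GroupedAggregate {
      input: TableExpr (required);
      group_exprs: [ArrayExpr] (required);
      aggregate_exprs: [ArrayExpr];
    }
    ```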




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org